65c0b8cbe9ebbb4db9b50bfa
44
Exact propagation has been carried out with the Chebychev scheme, in which the evolution operator is expanded in Chebychev polynomials 60 with a time step of 0.1 fs. The Hamiltonian has been normalized in order to ensure that its spectral range lies in the interval [-1, 1]. The grid size and the number of grid points have been carefully optimized for each Tully model depending on the value of the initial momentum k0, in order to avoid reflections at the boundaries and to accurately capture the fine features of the nuclear wavepacket everywhere in space, even in very delocalized situations. The computational parameters for each calculation are listed in Table . ElVibRot provides as output the nuclear wavepackets in the diabatic basis, as well as their spatial and time derivatives. This information is required by EFAC to reconstruct the TDPES (7) in the gauge where the TDVP is zero.
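The normalization and expansion described above can be sketched in a few lines. The following is an illustrative implementation (not ElVibRot's), with hbar = 1 and the spectral bounds obtained by diagonalization for simplicity; `chebyshev_step` is a hypothetical helper name.

```python
import numpy as np
from scipy.special import jv  # Bessel functions supply the expansion coefficients

def chebyshev_step(H, psi, dt, nterms=30):
    """One propagation step psi -> exp(-i H dt) psi via the Chebychev expansion
    of the evolution operator (hbar = 1)."""
    evals = np.linalg.eigvalsh(H)
    emin, emax = evals[0], evals[-1]
    ebar, half_de = 0.5 * (emax + emin), 0.5 * (emax - emin)
    Hn = (H - ebar * np.eye(len(H))) / half_de  # spectrum normalized into [-1, 1]
    alpha = half_de * dt
    # Three-term Chebychev recursion T_{k+1} = 2 Hn T_k - T_{k-1}, applied to psi
    t_prev, t_curr = psi, Hn @ psi
    acc = jv(0, alpha) * t_prev + 2.0 * (-1j) * jv(1, alpha) * t_curr
    for k in range(2, nterms):
        t_next = 2.0 * (Hn @ t_curr) - t_prev
        acc = acc + 2.0 * (-1j) ** k * jv(k, alpha) * t_next
        t_prev, t_curr = t_curr, t_next
    # Restore the phase removed by shifting the spectrum
    return np.exp(-1j * ebar * dt) * acc
```

For a grid Hamiltonian the matrix-vector products would be replaced by kinetic- and potential-energy operations on the wavepacket, but the recursion is identical.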
In the remainder of this section, we show results obtained using the TDQT method, where the classical force is obtained as (minus) the gradient of the numerical TDPES. In addition to benchmarking the performance of TDQT against quantum wavepacket propagations, we also compare TDQT against a time-dependent classical trajectory (TDCT) approach, which uses only the classical force in the integration procedure. For each calculation, 5000 TDCT classical trajectories have been propagated using the same initial-condition parameters as the exact vibronic calculations, sampling the positions and momenta according to the Wigner distribution. For the TDQT simulations, 400 trajectories were used in all cases presented in this section, with an ODE error tolerance of 10^-9. As quantum trajectory propagation is performed using an adaptive time-step method, it was necessary to access the TDPES and the associated forces at arbitrary times. A time interpolation procedure was therefore introduced, assuming linear evolution of the TDPES between two adjacent Chebychev time steps. Similarly, a linear spatial interpolation was performed to obtain the TDPES-derived force at arbitrary positions between grid points.
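The combined time and space interpolation amounts to a bilinear lookup on the stored force grid. A minimal sketch, with `interp_force` a hypothetical helper and `force_grid[i, j]` assumed to hold the force at time `tgrid[i]` and position `xgrid[j]`:

```python
import numpy as np

def interp_force(force_grid, xgrid, tgrid, x, t):
    """Linear in t between adjacent Chebychev steps, linear in x between grid points."""
    # Locate the bracketing cell (clipped so boundary queries stay in range)
    i = int(np.clip(np.searchsorted(tgrid, t) - 1, 0, len(tgrid) - 2))
    j = int(np.clip(np.searchsorted(xgrid, x) - 1, 0, len(xgrid) - 2))
    wt = (t - tgrid[i]) / (tgrid[i + 1] - tgrid[i])
    wx = (x - xgrid[j]) / (xgrid[j + 1] - xgrid[j])
    # Interpolate in x at the two bracketing times, then in t
    f0 = (1.0 - wx) * force_grid[i, j] + wx * force_grid[i, j + 1]
    f1 = (1.0 - wx) * force_grid[i + 1, j] + wx * force_grid[i + 1, j + 1]
    return (1.0 - wt) * f0 + wt * f1
```

Because both interpolations are linear, the lookup is exact for any force that varies linearly in x and t, which is the assumption stated above.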
Previous attempts have been made in this direction by some of the authors; 23,24 however, the outcomes of those studies were not particularly promising for general applications, as already discussed in the Introduction. In contrast-and despite some small residual numerical errors, documented below-the nonadiabatic dynamics produced by TDQT, presented here as an illustration, appears to be a promising route for further investigation.
We analyze in detail below the three models using k0 = 10 a0^-1. An overall assessment of the performance of TDQT for k0 = 10, 15, 20 a0^-1 is provided at the end of the section. In Fig. we present results for the single avoided crossing model, where the initial momentum of the incoming wavepacket is k0 = 10 a0^-1. Two snapshots are shown in the figure, at t = 40 fs (left panels) and at t = 120 fs (right panels): in the upper panels the (static) adiabatic BOPESs are reported in black, together with the TDPES in gold at the two snapshots; in the lower panels the exact nuclear density is shown in black, and is compared to the density reconstructed from TDQT (red) and from TDCT (blue). The evolution of the TDPES manifests the nonadiabatic event by forming a pronounced peak between 0 and 2 a0 that deforms the Gaussian shape of the nuclear density at t = 40 fs, and by developing a small bump between 10 a0 and 15 a0 that splits the nuclear density into two portions.
In particular, the rightmost portion of the nuclear density is associated with the lower electronic state, whereas the left portion is what has been "transferred" to the upper state at the avoided crossing. For this simple model, TDQT is in extremely good agreement with the benchmark calculations, even though the TDPES is known only numerically on a spatial grid. For instance, evaluating the integrated density beyond x = 12 a0 at t = 120 fs yields a transmitted probability of 0.84625 using TDQT, very close to the exact value of 0.84518 (TDCT yields 0.88584). The dual avoided crossing model, with the initial wavepacket launched towards the avoided crossings with an initial momentum of k0 = 10 a0^-1, exhibits very quantum-mechanical behavior.
Specifically, after the first crossing, where the nuclear density branches into a ground-state and an excited-state portion, the nuclear wavepackets meet again at the second avoided crossing and interfere. As shown in Fig. , at short times TDQT is capable of reproducing the recombination of the two portions of the nuclear wavepacket (t = 40 fs). Furthermore, TDQT captures quite well the portion of density that remains localized in the lower-state well around 0, even though the fine details manifesting interferences are missed. On the other hand, TDCT almost completely misses this part of the density, and already at t = 40 fs we observe some deviations from the reference. In general, we observed that for this model the numerical TDPES is quite noisy, which is the most likely reason for the deviations of the TDQT results from the quantum density at t = 120 fs. Figure shows numerical results for the Tully model with extended coupling region and reflection, using k0 = 10 a0^-1 as the initial momentum. When the nuclear wavepacket travelling in the lower state passes through the coupling region, it transfers population to the upper state before reaching the branching portions of the BOPESs. Afterwards, for such a low energy, the lower-state wavepacket is transmitted towards positive values of x and decoheres from the upper-state wavepacket. Furthermore, the latter is reflected and crosses the coupling region again: the oscillations appearing in the TDPES and in the nuclear density at t = 35 fs attest to the recoherence of this portion of the wavepacket. These subtle quantum effects are all captured well by TDQT, which is in extremely good agreement with the exact benchmark results, whereas TDCT completely misses the quantum oscillations.
Later in time, at t = 60 fs, we observe that the TDQT results deviate from the benchmark (as does TDCT), which is probably due to the fact that the TDPES shows numerical instabilities before developing smooth behavior, as evident at t = 60 fs. Pseudo-nodes become severe and cannot be resolved with the number of trajectories employed. The distribution becomes noisy, but still maintains qualitative agreement with exact results.
It is worth noting that when using exact factorization and the TDPES, it is not possible to recover information related to individual electronic states-neither in the adiabatic nor in the diabatic basis representation. A single PES produces the evolution of the nuclear wavepacket, which is not resolved according to its adiabatic components. Therefore, information about the occupation of the electronic states is not accessible. However, in order to present a more in-depth analysis of the models and to circumvent this feature of exact factorization, we show in Fig. the transmission probability through a dividing surface placed at x_d = 2 a0, for the third model discussed in this section. This observable provides indirect information about the occupation of the electronic states, because the transmitted density, at the low initial momenta used here, is only found in the lower electronic state. The results shown in Fig. attest to the importance of nuclear quantum effects to recover the correct dynamics, as TDCT (short-dashed lines) does not reproduce exact results (continuous lines). On the other hand, TDQT (long-dashed lines) reproduces very well the benchmark calculations, despite the numerical instabilities described above.
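The transmission probability through the dividing surface is simply the density integrated beyond x_d. A minimal sketch with a hypothetical helper, using the trapezoid rule on the reconstructed density grid:

```python
import numpy as np

def transmission_probability(x, density, x_d):
    """Integrate the (normalized) nuclear density beyond the dividing surface x_d,
    using the trapezoid rule on the grid points with x >= x_d."""
    mask = x >= x_d
    xs, ds = x[mask], density[mask]
    return 0.5 * np.sum((ds[1:] + ds[:-1]) * np.diff(xs))
```

The same function, with x_d = 12 a0, would reproduce the transmitted-probability comparison quoted for the single avoided crossing model.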
Table : Time rescaling t_f in fs for each calculation presented in the figure. Continuous lines represent TDQT results, and in all cases along the dynamics they remain below the dotted lines representing TDCT results. The figure attests to the improvement in numerical accuracy that TDQT can provide over TDCT, even as far as reproducing the wavepacket dynamics is concerned. It should also be remembered that the TDCT calculations were performed with more than an order of magnitude more trajectories than the TDQT calculations.
The TDQT approach presented in this work for propagating quantum trajectories in both adiabatic and nonadiabatic conditions stands out for its simplicity. In particular, no fitting procedure is needed to recover the quantum force (e.g., via moving weighted least squares), nor is any modification of the trajectory ensemble itself required during propagation (as would be necessary, e.g., in the ALE approach). These and other elaborate refinements of conventional QTM (as described in the Introduction) can provide modest benefit, but often only at the expense of substantial and highly problem-dependent parameter tweaking. At heart, these are all means of addressing the inherent limitations associated with the node problem-and especially with the numerical errors caused by the use of moving, unstructured grids in x space.
In contrast, the simplicity and accuracy of the TDQT approach stem from its use of fixed, structured grids in C space that never change over time. This fortuitous state of affairs is ultimately due to the trajectory-based reformulation and its replacement of x with C as the requisite "spatial" coordinate-a change that also allows for a much more natural comparison and integration with classical theories. As a result, the node problem does not lead to a breakdown of the simulation: quasi-nodes induce a much milder under-sampling problem in our case, rather than the fatal numerical blow-up of errors seen in standard QTM approaches.
Differentiation errors are greatly reduced and can be evaluated without much fanfare. While this promise was recognized early on in the development of the TDQT theory, other numerical issues have prevented the approach from reaching its fullest potential. Until now, that is-at least according to what the present results seem to suggest.
In keeping with our theme of simplicity, the integration of the TDQT approach with exact factorization, as a means of addressing the nonadiabatic regime, also appears to provide the "perfect marriage" of methodologies. In particular, the collection of multiple BOPES surfaces (and their couplings) that characterizes the standard approach is replaced with just a single TDPES (ignoring the TDVP for now), which is treated in TDQT in exactly the same manner as in a single-PES adiabatic calculation. This is key, because it appears to be extremely difficult to extend TDQT theory to multiple components. Conversely, within the exact factorization framework, TDQT appears to offer a much cleaner and more effective quantum trajectory methodology than do conventional wavefunction-based QTMs. In particular, the TDQT trajectory ensemble evolution-even when computed from a numerically determined, and thus noisy, TDPES-remains stable without introducing ad hoc smoothing procedures (such as the viscosity forces used by more traditional QTMs).
The interacting trajectory-based reformulation, and the ensuing TDQT methodology, hence allow one to recover the key nuclear quantum effects necessary to describe the correct quantum dynamics of both adiabatic and nonadiabatic processes. In the future, we envisage several clear developments of TDQT: extending calculations to higher dimensions (for which the theoretical equations have already been derived), and combining it with TDCT (which is straightforward using trajectory-Lagrangian-based action extremization). The treatment of nonadiabatic processes with several nuclear degrees of freedom will only allow the TDVP to be set to zero by gauge choice along a single dimension, but this does not undermine in any way the relevance of what is presented here, as quantum trajectories are gauge-invariant. In addition, the possibility of combining the TDQT approach with CT-MQC, the coupled-trajectory mixed quantum-classical algorithm derived from the exact factorization, is currently being explored.
633dbc93fee74e8fdd56e15f
0
Metal/organic interfaces are key to several fields, including electronics, protective coatings and, particularly, heterogeneous catalysis. Adsorption of organic species on metallic surfaces can be evaluated via density functional theory (DFT); this approach has been successfully applied to molecules containing 1-6 carbon atoms (C1-C6). However, DFT simulations present issues when dealing with: (i) large molecules with non-rigid bonds; (ii) amorphous, partially disordered and/or polymeric structures; (iii) several potential configurations or conformations with different bond patterns. Specifically, molecules with different configurations require a high number of DFT evaluations, while large molecules and amorphous materials heavily increase the computational time due to the size of their minimum representation. Therefore, faster tools are needed to estimate the interaction of molecules derived, for instance, from plastics and biomass, while keeping the accuracy of DFT. The structural information of a set of atoms in an organic compound can be used to infer the molecular thermodynamic properties through Benson's equation, defined as:

T_m = Σ_i T_i + c_m
where the thermodynamic property T_m of a molecule containing N groups is obtained as the sum of each group contribution T_i plus a constant c_m associated with the molecular properties. Despite its simplicity, Benson's equation achieves impressive accuracy for the thermodynamics of small gas-phase molecules such as hydrocarbons, alcohols, and ethers, with errors lower than 0.05 eV for the formation energy. The description of radicals and strained rings requires additional corrections. However, attempts to transfer the Benson model to adsorption on metals have failed. This happens because, when molecules interact with surfaces, some internal bonds weaken and the corresponding electron density is responsible for the new bonds with the surface. This is known as bond order conservation theory and was employed in early adsorption schemes. However, models derived from explicit DFT geometries and bond-counting techniques have had limited success. More recently, machine learning approaches have been explored; for instance, artificial neural networks (ANN) have been employed to obtain the adsorption energy of small C1-C3 fragments. An alternative representation of molecules on surfaces is through graphs.
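Group additivity in Benson's sense reduces to a lookup-and-sum. A minimal sketch; the group names and contribution values below are illustrative placeholders, not the tabulated Benson parameters:

```python
# Hypothetical group contributions in eV (illustrative values only)
GROUP_VALUES = {"CH3": -0.44, "CH2": -0.21, "OH": -1.65}

def benson_estimate(groups, c_m=0.0):
    """T_m = sum over group contributions T_i, plus the molecular constant c_m."""
    return sum(GROUP_VALUES[g] for g in groups) + c_m
```

For example, a propanol-like fragment would be estimated from its CH3, CH2 and OH contributions; the scheme's appeal is exactly this triviality, which is what fails to transfer to adsorbed species.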
A graph compresses the information of the atoms and their connectivity into a simple data structure, analogously to Benson's approach. Graph neural networks (GNN), which are ANNs for the graph data structure, have been successfully applied to predict chemical properties of molecules and materials. For gas-phase molecules, GNNs have been able to predict molecular properties and solvation energies with exceptional performance. Extending to materials chemistry, specific convolutional graph layers can describe periodicity and predict the structure of metals and crystals. For the adsorption of molecules on metals, GNNs have been employed to estimate the DFT binding energy of small species such as CO2 and O in the Open Catalyst dataset (OC20 and OC22). Moreover, GNNs have been employed to assess lateral adsorbate-adsorbate interactions. Taking eight C1 and C2 fragments (C, H, O) on metals with different adsorbate-surface connectivity (1422 data points), Xu et al. coupled a graph kernel to a Gaussian process regressor (GPR) and obtained a reasonable performance for small molecules (up to 9
Here, we present a new GNN architecture trained on an extensive DFT dataset of closed-shell organic molecules (containing ≈ 2900 entries and all common functional groups) adsorbed on transition metal surfaces, able to predict the adsorption energy of closed-shell molecules with an error comparable to DFT, using a simple molecular representation. The GNN is capable of predicting the adsorption energy of larger molecules derived from biomass, polyurethanes and plastics, allowing the study of chemical systems not amenable to DFT.
Our goal is to obtain the DFT ground-state energy of an arbitrary adsorbed molecule, employing the simplest graph representation. To this end, we followed the procedure illustrated in Figure . The workflow consists of the following steps: (1) generation and curation of the "functional groups" FG-dataset, consisting of organic molecules with representative functional groups adsorbed on close-packed metal surfaces to sample the common chemical space. To build the graph to be employed as input of the GNN, we started from 3D atomic coordinates. For gas-phase molecules, each atom is treated as a node storing its nature (element), while the bonds are taken as the edges.
When adsorbed on a surface, the molecule changes its interatomic distances and binds to the metal; thus the corresponding graph needs to incorporate metal-adsorbate (M-A) bonds. We developed an algorithm to convert the adsorption structures into their corresponding graph representations. A set of filters is applied to the raw graph FG-dataset in order to avoid the presence of spurious representations during the training process (Figure ).
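A structure-to-graph conversion of this kind is commonly done with a distance criterion. The sketch below is a simplified stand-in for the paper's algorithm: `structure_to_graph` is a hypothetical helper, the covalent radii are an illustrative subset, and a sum-of-radii-plus-tolerance cutoff is assumed for both intramolecular and M-A bonds.

```python
import math

# Covalent radii in Å (illustrative subset; real tables cover all elements)
COVALENT_RADII = {"C": 0.76, "H": 0.31, "O": 0.66, "Pd": 1.39}

def structure_to_graph(symbols, coords, tol=0.25):
    """Nodes are atom indices storing the element; an edge links two atoms whose
    distance is below the sum of their covalent radii plus a tolerance."""
    nodes = {i: s for i, s in enumerate(symbols)}
    edges = set()
    for i in range(len(symbols)):
        for j in range(i + 1, len(symbols)):
            d = math.dist(coords[i], coords[j])
            if d < COVALENT_RADII[symbols[i]] + COVALENT_RADII[symbols[j]] + tol:
                edges.add((i, j))
    return nodes, edges
```

Because metal atoms carry their own radii, M-A edges appear automatically whenever an adsorbate atom sits within bonding distance of the surface.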
Detailed information regarding the conversion and curation procedure can be found in Note S4 and Figure . To train the GNN, the generated graphs need to be paired with their DFT energy, E_DFT. In this way the energy of the total system, E_A/M (adsorbate on metal), would be targeted, but this would imply accounting for the full metal graph, which is computationally heavy and complex to handle. Thus, we followed a ∆-ML approach; our final target is the adsorption energy of the organic fragment, obtained as follows:

E_ads = E_A/M − E_M − E_A

where E_M is the energy of the clean metal slab and E_A that of the gas-phase adsorbate.
The expanded molecular graph can then be input into a GNN. GNNs are a type of ANN able to handle variable-size, non-Euclidean data represented as graphs. To accomplish this, our GNN model is built by assembling three types of layers. The model achieves a mean absolute error against the E_DFT values of the test sets of 0.17 eV. Considering that the consensus error of DFT for adsorption systems is about 0.20 eV, and that our values fall both above and below the 1:1 line, we conclude that the error of the method is similar to that of DFT itself. Once trained with a sufficiently large and diverse dataset such as the FG-dataset, the true advantage is the fast estimation of the DFT energy, which takes on the order of seconds on a single CPU.
Figure shows the absolute error distribution grouped by chemical family. A similar standard deviation, of about 0.25 eV, is found across the families (Table ). Lower errors are retrieved for CxHyO(2,3) (0.13 eV), amides (0.15 eV), oximes (0.19 eV) and carbamates (0.16 eV). Larger errors are associated with the aromatic compounds, with a standard deviation of 0.48 eV.
The source of this higher dispersion is conjugated rings, particularly systems containing two rings. The standard deviation of the aromatic compounds may be explained by the difficulty the model has in capturing the global structure of the highly conjugated orbitals commonly found in aromatic rings; thus, additional work is needed to accurately represent these molecules.
Figure shows the mean of the MAE among the different models generated during the cross validation, grouped by family, and their associated standard error of the mean (SEM) (Table ). Values obtained for the SEM show that there is no significant variation in the prediction performance among the models. Figure and Table show the error distribution and the SEM grouped by metal surfaces.
The obtained GNN model generalizes the chemical patterns found within the FG-dataset, which contains functional groups that are commonly found as building blocks of more complex chemical structures. In this context, we tested the GNN model, trained with the FG-dataset, on industrially relevant larger molecules that are not amenable to DFT. Before testing the GNN model with new samples, we performed a hyperparameter optimization study, see Note S8 and Table ; this resulted in a simpler model. Overall, the GNN demonstrates its robustness in describing the chemical process of adsorption. Since the functional groups in the dataset present a wide variety, the model generalizes to bigger compounds, paving the way for a fast tool to evaluate adsorption. The FG-dataset generation requires 10^5 s, but these are small calculations on relatively small unit cells, and thus can be run in modest computational facilities. The large molecules of industrial interest would still require massive computational resources, due to the large unit cells required and sequential DFT evaluations on powerful machines.
The time to generate the FG-dataset and train the GNN is 10^5 s, and the evaluation of the larger molecules takes 10^-2 s. Thus, for a single comparative DFT vs. GNN evaluation, the gain is several orders of magnitude (likely 6-10, depending on the size of the big molecule), while the hidden cost of training the network is 10^5 s. The positive side is that, once trained, the GNN can be widely applied to other compounds.
for their adsorption on close-packed metal surfaces. The data is employed to train the proposed graph neural network architecture. The 5-fold cross-validation revealed a mean absolute error of 0.17 eV when applied to the FG-dataset. Once trained, the time required to obtain the energy estimation from the GNN is at least six orders of magnitude lower than DFT.
Application of the GNN to larger molecules related to biomass conversion, polyurethane synthesis and plastic chemical recycling shows the potential of statistical learning methods to reach areas that could not be easily addressed by standard first-principles techniques. Our work paves the way for building graph-based frameworks capable of learning complex chemical patterns from high-quality datasets composed of small molecules.
Projector Augmented Wave pseudopotentials were used, and valence electrons were represented by plane waves with a kinetic energy cutoff of 450 eV. The calculated lattice parameters for the metals show good agreement with experimental values. Metal surfaces were modeled by four-layer slabs, where the two uppermost layers were fully relaxed and the bottom ones were fixed at the bulk distances. We selected the (111) surface for the fcc metals and the (0001) for the hcp ones. Samples of the FG-dataset have been generated
from , and , respectively. The vacuum between the slabs was set larger than 13 Å and a dipole correction was applied in the z direction. The Brillouin zone was sampled by a Γ-centered 3 × 3 × 1 k-point mesh generated through the Monkhorst-Pack scheme. The gas-phase molecules were relaxed in a cubic box with 20 Å sides. Electronic convergence was set to 1 × 10^-5 eV and atomic positions were converged until residual forces fell below 0.03 eV Å^-1.
66ba243501103d79c5bdfed7
0
Glycans are composed of individual monosaccharides and are capable of modifying diverse biomolecules such as lipids and proteins. Additionally, glycans can exist as unconjugated free polysaccharides. Glycosylation processes in nature draw on a wide range of monosaccharides and their derivatives, numbering at least in the hundreds. A variety of methods have been employed to detect and measure glycans; however, mass spectrometry (MS) has become the method of choice for rapid characterisation of sample glycosylation. The ability to separate glycans by mass for mass-based detection provides a relatively unbiased approach to detecting both the usual and unusual glycans in nature.
One such approach is combinatorial monosaccharide analysis, most popularly used in the form of GlycoMod. GlycoMod is a Perl program designed to find all possible combinations of monosaccharides matching an experimentally determined glycan mass by searching a precomputed list of masses. Despite being published in 2001, it remains frequently used, but is limited by its closed-source nature and online-only access. Combinatorial mass analysis is popular in the field of metabolomics, which uses high-resolution accurate mass to assign chemical formulae, with rules such as the "Seven Golden Rules" restricting the number of theoretical matches based on experimental observations. MS-based glycomics analyses have changed significantly as the monosaccharides and glycan structures observed experimentally challenge existing paradigms. This is compounded by improvements in data acquisition and bioinformatics that have allowed the glycoscience field to detect, quantify and share data describing more glycans than ever before. Considerable advancements have been made in the glycan bioinformatic search space, including GlycReSoft and GlycoNote, but throughput is lacking, and the monosaccharide search spaces are limited to routine and expected N-glycan structures.
To address these challenges and complement the advantages of GlycoMod, we describe an open-source, offline program called GlyCombo, which combinatorially assigns monosaccharide combinations to glycan masses acquired by MS. Taking advantage of collaborative standards initiatives, we employ the mzML file format to enable accessible and complete glycan composition assignment in an automated manner, while also delivering fast and efficient combinatorial analysis using dynamic programming.
Raw files were downloaded from published datasets on GlycoPost . Datasets were chosen to cover negative and positive MS polarities, glycan types (NG, OG and GSL), resolutions (2D and 3D ion traps, and Orbitrap), and vendors (Bruker and Thermo Fisher Scientific). All benchmarks were performed on a consumer-grade Lenovo laptop equipped with an AMD Ryzen 6 Pro 5850U CPU, 16 GB of RAM and a 500 GB SSD. Search times for GlycoMod were based on the loading timeline in Firefox network developer tools.
All downloaded files were converted to the mzML file format 20 by MSConvert. For low-resolution ion trap data with unassigned precursor charge states, an additional "Charge State Predictor" filter was used without overwriting existing charges, with single-charge % TIC = 0.9 and only 1 charge per precursor. The mzML files were then read directly by GlyCombo.
GlyCombo was developed in C# and is available open-source under the Apache 2.0 license (). The first process, mass-list generation, extracts precursor m/z values from MS2 scans along with charge state and polarity to calculate neutral precursor mass values. This list is exported in the parameters file at the end of a given search in GlyCombo and was used for the GlycoMod comparisons.
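The neutral-mass calculation from m/z, charge and polarity follows the usual protonation convention. A minimal sketch (GlyCombo itself is written in C#; this assumes simple [M+zH]z+ and [M-zH]z- species, ignoring adducts):

```python
PROTON = 1.007276  # mass of a proton in Da

def neutral_mass(mz, charge, polarity):
    """Neutral monoisotopic mass from an observed precursor m/z.
    Positive mode: M = mz*z - z*m(H+); negative mode: M = mz*z + z*m(H+)."""
    if polarity == "positive":
        return mz * charge - charge * PROTON
    return mz * charge + charge * PROTON
```

For example, a doubly protonated precursor at m/z 500.0 corresponds to a neutral mass of about 997.985 Da.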
Neutral precursor mass values were adjusted for combinatorial analysis based on user-specified mass error, reducing-end formats, and glycan derivatisation status. High-resolution files were searched with 50 ppm mass error, and low-resolution files were searched with 0.6 Da mass error. Based on presets or user-specified monosaccharide ranges, limits were then applied to the combinatorial analysis of monosaccharide masses to identify matches for the starting neutral precursor mass list. The same monosaccharide ranges were applied to GlycoMod for direct comparisons. Dynamic programming was used to efficiently identify all possible matching monosaccharide combinations for each precursor mass.
Output Skyline transition lists were directly imported into Skyline with minimum m/z 50 and maximum m/z 10,000. High-resolution raw files were analysed with the following full-scan settings: 3 peaks for a centroided mass analyser at 50 ppm mass accuracy. Low-resolution raw files were analysed with the following full-scan settings: 3 peaks for a TOF mass analyser at 5000 resolution. Glycan precursors were filtered for quality, only including those with a minimum precursor isotopic dot product (idotp) of 0.9. The idotp threshold of 0.9 was empirically selected to remove poor-quality MS1 matches (caused by monoisotopic peak misassignment, incorrect charge assignment, and poor signal-to-noise ratios) while preserving high-quality matches.
Combinatorial glycan composition determination has been demonstrated to be suitable for identifying glycans in MS acquisitions of glycan-containing samples. With a diverse range of sample preparation, acquisition methods, and vendors, profiling glycans by their precursor mass is a robust initial approach with minimal assumptions. To enhance the identification of glycan precursors in acquired MS data, we developed an open-source Windows application for the rapid extraction of precursor m/z values from the mzML file format, a vendor neutral file type that enables cross-platform compatibility. Many aspects of the glyco-analytical pipeline can affect the resulting m/z values observed in these mzML files, and as a result, GlyCombo is run through a GUI that requests user specifications regarding the mass error, reducing end format, glycan state, and expected monosaccharide ranges (Fig ). Unlike GlycoMod, we also provide hexosamine (i.e. isomerically ambiguous glucosamine) as a new monosaccharide for combinatorial analysis, enabling the analysis of biomedically relevant polysaccharides such as heparan sulfate . In addition to a large list of monosaccharide presets, including sialic acid derivatives, a maximum of five custom monosaccharides can be specified to ensure coverage of most applications.
Other aspects integral to correct data interpretation, such as MS polarity, charge state, and precursor m/z, are automatically extracted from mzML files and used to build a neutral mass list that is searched, enabling direct comparisons to other platforms that require a glycan mass list as input (Fig ). A comprehensive feature comparison to GlycoMod is described in Supp Table . As brute-force combinatorial analysis lacks the throughput for applications such as these (like the classic computer-science coin change problem 29), dynamic programming was implemented as the solution to calculate monosaccharide combinations matching a given neutral mass. As the combinations are recursively expanded, memoization is performed to eliminate redundant calculations (simplified examples are provided in Supp Figs and ).
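The coin-change analogy can be made concrete with a short enumeration sketch. This is a simplified illustration, not GlyCombo's implementation: it uses standard monoisotopic residue masses for a few common monosaccharides, and plain bounded recursion where GlyCombo adds memoization to avoid recomputing shared subproblems.

```python
# Monoisotopic residue masses in Da (standard values for common monosaccharides)
MASSES = {"Hex": 162.0528, "HexNAc": 203.0794, "dHex": 146.0579, "NeuAc": 291.0954}

def find_compositions(target, tol=0.02):
    """Enumerate every monosaccharide count vector whose residue masses sum to
    the target neutral mass within the tolerance (coin-change style recursion)."""
    names = sorted(MASSES)
    results = []

    def search(i, remaining, counts):
        if i == len(names):
            if abs(remaining) <= tol:
                results.append(dict(zip(names, counts)))
            return
        m = MASSES[names[i]]
        max_n = max(0, int((remaining + tol) // m))  # bound prunes the search tree
        for n in range(max_n + 1):
            search(i + 1, remaining - n * m, counts + [n])

    search(0, target, [])
    return results
```

Searching the residue mass of a Hex5HexNAc2 core, for instance, recovers that composition among the candidates; user-specified monosaccharide ranges would further restrict `max_n` per residue.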
GlyCombo works best with mzML input, as it enables the output of three files. The first is a csv file which includes glycan composition matches, the mass error of the matches, and the scan number of the respective MS2 spectrum. The second is a similar csv file with additional columns enabling direct import into tools which generate extracted ion chromatograms (EIC), such as Skyline (an example is provided in Supplementary Table ). The final file is a parameters file, giving the specific parameters used to generate the given output, as well as the list of precursors searched. In this work, we have used this list of precursors to benchmark our platform against one of the most popular approaches for glycan composition identification, GlycoMod. Users are also given the ability to supply a text-based list of glycan masses within the GUI. In this case, the output is limited to a csv file which includes the glycan composition matches and the mass error of the matches.
To benchmark our software and ensure compatibility with existing approaches, six datasets were selected, and one raw file was downloaded from each of their respective GlycoPost accessions (Table ). To demonstrate broad applicability across multi-glycomics , these datasets cover a range of glycan types: N-glycans (NG), O-glycans (OG) and glycans released from glycosphingolipids (GSL). To assess suitability for different polarities and derivatisation states, we also included permethylated and native glycans acquired in positive and negative modes. The vendor-neutrality of our approach was also assessed with datasets generated on Bruker and Thermo Fisher Scientific instruments. GlyCombo was developed with three qualities in mind: broad accessibility, rapid output, and completeness of glycan detection. Benchmarking was performed by searching the same neutral mass list generated by GlyCombo in both GlycoMod and GlyCombo, using the same monosaccharide ranges, and search time was recorded in triplicate. As shown in Fig 3A , all datasets were compatible with our software, which generally performed faster than GlycoMod, with up to 3x faster search times. However, dataset 3 was almost 2x slower with our software, prompting us to investigate these search speed discrepancies. Unlike GlyCombo, GlycoMod (and other search tools such as GlycReSoft and GlycoNote ) applies matching rules based on literature review and assumed biosynthetic pathways to limit the combinatorial burden. This is demonstrated by the GlycoMod N-glycan search function, which applies such rules, whereas the O-glycan search function has no such rules beyond a mass limit of 5 kDa . A comparison of the number of glycan compositions identified based on the glycan masses observed in each dataset (Fig ) confirmed that the N-glycan search function yielded fewer matches, and when assessed with the O-glycan function, similar combinations were identified at longer search times (e.g. dataset 3 processing time increased from 180 to 550 seconds, 50% slower than GlyCombo). As multiple monosaccharide combinations can be matched to a single scan, a ratio greater than 1 is possible.
The rapid analysis speed of GlyCombo could enable exciting new directions that have already been realised in other MS applications such as proteomics and metabolomics. At an average search time of 7.5 milliseconds per precursor, real-time instrument acquisition methods could be devised, which have been demonstrated to improve analytical depth and enable real-time method optimisation . Additionally, as downloading raw files and mzML conversion took longer than the combinatorial search itself, repository-scale reanalysis could prove a promising alternative, enabling iterative reanalyses for new insights . GlyCombo's ability to search for precursors with mass errors over 5 Da makes it ideal for data-independent glycan analysis, where precursors are isolated within windows up to 48 Da wide, potentially matching hundreds of glycan compositions to a single MS2 spectrum.
The efficiency of GlyCombo's combinatorial composition assignment algorithm enables computationally expensive features, such as multiple adduct searching and off-by-one error anticipation , which improve the scan annotation rate for mzML files. In Fig 4A, only when sodium is included as an adduct type is the correct (Hex)5 (HexNAc)2 composition assigned to the observed precursor mass. Off-by-one errors not only cause misassignments but also lead to unassigned precursors when the M+1 isotope is mistaken for the monoisotopic peak, even though both produce similar MS2 spectra due to co-isolation for fragmentation (Fig ).
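The arithmetic behind adduct searching and off-by-one anticipation can be sketched as follows. This is a hypothetical positive-mode illustration (the function name, adduct set, and labels are assumptions, not GlyCombo's internals): each hypothesis strips the adduct charges from the observed m/z, and the off-by-one variant additionally subtracts one 13C-12C isotope spacing to account for the M+1 peak being picked as "monoisotopic".

```python
# Standard monoisotopic masses (Da)
PROTON = 1.0072765    # mass of H+
SODIUM = 22.9892213   # mass of Na+
ISOTOPE = 1.0033548   # 13C - 12C spacing, used for off-by-one correction

def neutral_mass_hypotheses(mz, z):
    """List (label, neutral_mass) hypotheses for one positive-mode
    precursor, covering protonated/sodiated adducts and off-by-one
    monoisotopic peak picks."""
    out = []
    for name, adduct_mass in (("H", PROTON), ("Na", SODIUM)):
        # observed m/z = (M + z * adduct_mass) / z  =>  solve for M
        m = mz * z - z * adduct_mass
        out.append((f"[M+{z}{name}]{z}+", m))
        # instrument mistook the M+1 isotope for the monoisotopic peak
        out.append((f"[M+{z}{name}]{z}+ (off-by-one)", m - ISOTOPE))
    return out

# (Hex)5(HexNAc)2 as [M+2Na]2+: (1234.4334 + 2*22.9892213)/2 ≈ 640.2059
for label, m in neutral_mass_hypotheses(640.2059, 2):
    print(label, round(m, 4))
```

Each candidate neutral mass is then fed to the combinatorial search; only the sodiated hypothesis recovers the correct composition in the Fig 4A example.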
Figure Combinatorial glycan composition assignment is aided by including all observed charge states, adducts, and MS misassignments using Dataset 3 as an example. A Utilisation of all expected adducts improves accuracy of glycan composition assignment. B The off-by-one error caused by MS misassignment of the monoisotopic peak can impact accurate-mass precursor searching. C MS2 scan assignment rate across GlyCombo features including multiply charged adducts, inclusion of more than one adduct type, and allowing for off-by-one monoisotopic mass assignment errors.
Re-searching Dataset 3 with both off-by-one error anticipation and sodium adduction enabled in GlyCombo reduced the number of unassigned MS2 scans by 40% compared to a protonated adduct search (Fig ). Although this more complex search comes at the cost of a 4x longer search time, the total search time was a fraction of the 200-minute-long LC-MS run. As the off-by-one search only yielded an 11% decrease in unassigned MS2 scans, this option could be skipped for data generated by instruments with accurate monoisotopic peak detection . A limitation of these more complex searches is an increase in false positives caused by mistaking protonated adducts for sodiated adducts, and correct precursor masses for off-by-one errors. As GlyCombo is the initial step in filtering candidate precursor ions for quantitation and characterization, users can opt for a higher false positive rate to ensure a high true positive rate. It can therefore be crucial to detect a comprehensive range of glycans to leverage subsequent quality assessments, including retention time filtering (Skyline), MS2 scoring (GlycoWorkBench), and precursor isotopic distribution evaluation (Skyline).
Characterisation and quantitation of glycan signal are common steps in the glycoanalytical data analysis workflow, yielding information about the abundance and identity of the detected glycan signals. GlycoWorkbench is a characterisation-oriented software tool, useful for matching MS1 and MS2 signals to drawn structures, or to those found in databases. Like GlycoMod, it remains widely used, demonstrating its enduring value for composition and structure characterisation . Orthogonal to the characterisation approach of GlycoWorkbench, Skyline is a quantitation tool that uses predefined molecule transitions to generate EICs, measuring glycan signal in LC-MS data and providing scores based on signal relationships. A useful score used in this work is the isotopic dot product (idotp), which describes how closely the isotopic distribution of an observed precursor isotopic envelope matches the theoretical abundances of a given chemical formula .
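One common formulation of such a similarity score is the normalized dot product between observed and theoretical isotope intensities, shown here as a minimal sketch (Skyline's exact idotp implementation may differ, e.g. in intensity weighting or the number of isotopes considered):

```python
import math

def idotp(observed, theoretical):
    """Normalized dot product between observed and theoretical isotope
    intensities (e.g. the first three isotopes); 1.0 means the observed
    envelope is perfectly proportional to the theoretical one."""
    dot = sum(o * t for o, t in zip(observed, theoretical))
    norm = (math.sqrt(sum(o * o for o in observed))
            * math.sqrt(sum(t * t for t in theoretical)))
    return dot / norm if norm else 0.0

# A well-matching envelope scores ~1.0; a distorted one falls below
# a typical filtering threshold such as 0.9.
print(idotp([1.0, 0.55, 0.21], [1.0, 0.55, 0.21]))
print(idotp([1.0, 0.10, 0.80], [1.0, 0.50, 0.20]))
```

Because the score depends only on the relative shape of the envelope, it is insensitive to absolute intensity, which is what makes it suitable for filtering composition assignments but not for distinguishing isomers.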
Here, we used Skyline to remove poor quality combinatorial matches (idotp > 0.9 for the first three isotopes) and visualise the glycan profile of three datasets featuring N-glycans and glycans released from glycosphingolipids (Fig ). In all three datasets, qualitatively identical plots of detected glycans were observed when these EICs were compared to base peak chromatograms (BPC) of the original raw files, demonstrating that GlyCombo is effective at glycan detection across these diverse datasets. As idotp is based on the isotopic distribution of a chemical formula, it informatively scores composition assignments but is unsuitable for discriminating glycan structures, because the isotopic distributions of isomers are identical. Although our software extracts and rapidly searches mzML files, instrument acquisition schemes can incorrectly identify the monoisotopic peak (known as off-by-one errors ), leading to incorrectly calculated neutral glycan masses. This is exacerbated by our requirement of an MS2 scan for a given glycan composition to be detected. The use of glycan precursor isotopic distribution filtering mitigates these effects by scoring the abundance of the first three isotopes of each glycan and removing glycan compositions that return poor scores.
Another notable limitation of the precursor mass-based combinatorial approach is its inability to assess glycan topology or glycosidic linkages. As glycans are frequently present in multiple isomeric forms or structures, and each isomer has differing biomarker potential, additional forms of separation beyond precursor mass are needed, including liquid chromatography stationary phases (such as the porous graphitised carbon and hydrophilic interaction columns benchmarked here) and diagnostic product ions from MSn spectra . GlyCombo output includes scan numbers to aid subsequent downstream MS2 annotation and structure elucidation by software such as GlycoWorkBench and spectral library searching .
The identification of glycans in raw files serves as the first foundational step of glycomics data analysis. We describe a new open-source computational tool, GlyCombo, that is capable of rapidly elucidating possible glycan compositions from MS analyses. In this paper we utilize GlyCombo and benchmark its performance against the current state-of-the-art combinatorial solution for glycomics, GlycoMod. The experimental results exemplify the speed and robustness of GlyCombo for use with N-glycans, O-glycans, and glycans released from glycosphingolipids, in combination with multiple polarities and derivatisation states. In addition to faster search times, broader accessibility, and completeness of annotation, processing is automated, and outputs are specifically formatted to connect with downstream processes, including GlycoWorkBench for structural annotation and Skyline for quality control and quantitation.
Hydrogels are three-dimensional (3D) polymer networks that contain a large amount of water. Owing to their tunable physicochemical properties and, in some aspects, similarity to native extracellular matrices, hydrogels are widely used in biomedical applications for the preparation of 3D scaffolds in tissue regeneration, drug-delivery vehicles, soft robots and microfluidic devices. For these applications, it is often important to create hydrogels with complex and detailed structures, especially in the case of tissue scaffolds and implants with customized geometries. In recent years, 3D printing has emerged as a powerful tool for hydrogel fabrication, with extrusion-based printing technologies most commonly used. With shear-thinning inks, extrusion printing allows the fabrication of objects mimicking natural structures with controlled geometry and characteristics. Nevertheless, the extrusion technique is limited by low printing resolution (> 200 µm) and poor structural fidelity. Photopolymerization-based 3D printing, in particular stereolithography (SLA) and digital light processing (DLP), offers high resolution (30-100 µm) and good surface quality of the printed objects. Compared to SLA, DLP allows the faster fabrication of objects with complex geometries using a digital micromirror device. However, as opposed to water-free networks, the printing of hydrogels by DLP is challenging. This is attributed to a variety of factors. First, as the most widely used vat photopolymerization technique, DLP requires a liquid resin of low viscosity (< 10 Pa.s) to enable layer-by-layer stacking. Consequently, existing water-borne resins for DLP printing usually have high water content (typically > 70%), which results in low mechanical strength and toughness of the printed constructs. Secondly, the solidification of the DLP resin occurs through irradiation from underneath the resin, and the print platform moves vertically in order to allow the resin to flow back.
The separation of every newly formed layer from the bottom plate subjects the structure to certain mechanical forces. Therefore, the crosslinked layers should have sufficient mechanical strength to remain intact during this process and preserve the shape fidelity of the printed object. As a result, an ideal DLP resin should have low viscosity yet impart high mechanical resistance to the print, which is highly challenging to achieve with common water-based resins, especially for high molecular weight (MW) macromonomers. Lastly, light scattering generated from a photopolymerized layer that is not optically clear can further affect the printing resolution and shape fidelity, with poly(ethylene glycol) (PEG) hydrogels as typical examples. As a well-known water-soluble synthetic polymer, PEG is widely used in drug delivery and regenerative medicine, because it is generally considered non-toxic, non-immunogenic and, in solution, easily clearable from the human body when its molecular weight is below the renal filtration cutoff. Photopolymerizable derivatives of PEG have been used for the 3D printing of hydrogels in various biomedical applications, with PEG diacrylate (PEGDA) and PEG dimethacrylate (PEGDMA) as the main representatives. The SLA/DLP printing of PEG hydrogels is usually limited to low MW monomers (e.g., PEGDA 700), which generate brittle products. Although it is possible to print high-quality embedded channels or simple shapes using aqueous solutions of PEG (macro)monomers with higher MW (e.g., 6000 g mol -1 ), the high water content and low crosslinking degree make the fabrication of free-standing PEG hydrogels with complex geometries very difficult. In addition to the aforementioned issues, the hydrogels prepared by "with-water" 3D printing inevitably swell further under physiological conditions as a result of the difference in osmotic pressure, which drastically deteriorates their mechanical properties.
Therefore, 3D printing of structurally complex PEG hydrogels with high strength and toughness remains a challenging task.
Recently, heat-assisted vat photopolymerization has received increasing attention because it can improve the mechanical properties of 3D printed products and permit the 3D printing of highly viscous biodegradable photopolymers. Even shape-memory thermoplastic photopolymers that are solid at room temperature have been successfully 3D printed by this technique. To address the challenges in PEG hydrogel printing, we propose here direct heat-assisted SLA/DLP printing of hydrophilic photopolymers that are not printable at room temperature (e.g., high MW PEG), and their subsequent transformation into hydrogels by simple post-printing swelling. We report the high-resolution fabrication of structurally complex, tough PEG hydrogels, using either a commercial bisacylphosphine oxide (BAPO) photoinitiator or a newly developed PEG-based macrophotoinitiator (BAPO-PEG). The manufacturing of the hydrogel objects consists of two steps: i) direct DLP printing of PEG (macro)monomers in the absence of water and ii) post-printing swelling of the 3D printed objects in water (Figure ). By tuning PEG chain length and resin composition, shape-memory PEG networks and corresponding hydrogels with complex geometries, excellent shape fidelity, and tunable mechanical strength can be produced. Finally, using a dual-macromonomer formulation in the presence of star-shaped BAPO-PEG, the "all-PEG" resin can be 3D printed into bioactive bone-mimicking hydrogel scaffolds without including organic solvents or water in the resin.
The functionalization degree of the resulting PEGDMAs 2k, 4k, 6k, 8k, 10k and 20k was determined to be 70-90% by 1 H NMR spectroscopy (Figure , Figure and Table ). To enable the water-free DLP printing of "all-PEG" resins, we synthesized a star-shaped PEG-based macrophotoinitiator instead of using conventional hydrophobic (e.g., bisacylphosphine oxide, BAPO, Figure ) or water-soluble photoinitiators (e.g., lithium phenyl-2,4,6-trimethylbenzoyl phosphinate, LAP, Figure ). The bis(mesitoyl)phosphane (BAP-H) group was conjugated to a 4-arm PEG with MW of 10,000 g mol -1 , followed by oxidation of the BAP using a reported route for the synthesis of the linear conjugate (BAPO-PEG-LN, Figure 1d). The structure of the 4-arm macrophotoinitiator BAPO-PEG (Figure ) was confirmed by 1 H and 31 P NMR spectroscopies (Figure ). Under sonication at 70 °C, BAPO-PEG was easily mixed with the semicrystalline macromonomers (Figure ) and a UV absorber (Sudan I, 0.03 wt%) without the need for additional solvents. A customized DLP printer with temperature control of the resin tray and printing head (Figure ) allowed for direct printing of "all-PEG" resins at elevated temperature (e.g., 90 °C). Therefore, the performance of the 3D printed products could be evaluated without the interference of additives, especially the reactive diluents (e.g., N-vinylpyrrolidone (NVP)) that are often used in SLA/DLP to reduce viscosity and dissolve the commercial photoinitiator. To evaluate the 3D printability of the PEG macromonomers, we first measured the viscosity of the polymers between 70 and 100 °C. At 90 °C, the viscosity increased from 10 to 20,000 mPa·s with increasing PEG MW from 700 to 20,000 g mol -1 (Figure ). The viscosity of PEGDMA 2k-10k was in the printable range of DLP at 70-100 °C, while it would not be possible to print PEGDMA 20k without the addition of diluents. We then printed the PEGDMA 2k-10k resins by heat-assisted DLP using BAPO-PEG as the photoinitiator.
The commercial PEGDA 700 was selected as a control. As shown in Figure , all PEG macromonomers showed good printability. Slight deformation was observed due to the re-organization of the PEG chains during the evaporation of the solvent (acetone) used in the cleaning step.
Next, we investigated the impact of PEG chain length on the mechanical strength of the printed specimens. As expected, the products printed from higher MW PEG showed greatly enhanced mechanical properties compared to those from lower MW ones. As shown in Figure , the Young's modulus of the 3D printed specimens increased from 14.8 to about 200 MPa with MW increasing from 2000 to 6000 g mol -1 , and the maximum tensile stress increased from 0.98 MPa to more than 10 MPa. A yielding point was observed with higher MW macromonomers (Figure ), indicating plastic deformation related to the crystalline phase of long PEG chains. The mechanical performance plateaued upon further increase in the MW (>6000 g mol -1 ) of the PEG macromonomers. Note that the 3D printed products from PEGDA 700 were very brittle and weak, similar to those printed from PEGDMA 2k.
We then compared the performance of the star-shaped polymeric photoinitiator to the commercial BAPO photoinitiator, phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, which is one of the most efficient molecular photoinitiators in DLP/SLA 3D printing. In this case, we prepared the resins with BAPO dissolved in NVP (8.0 wt%). The 3D printed objects exhibited similar tensile properties to those prepared with the "all-PEG" formulations (Figures and ), likely because the semicrystalline PEG domains have a much higher impact on the mechanical performance than the crosslinking sites, given the low amounts of photoinitiators and reactive diluents.

Figure d) Tensile stress with Young's modulus of 3D printed specimens from different PEG macromonomers using BAPO (1.0 wt%) and NVP (8.0 wt%). The mechanical properties are expressed as mean ± s.d. (n = 5-7); e) the ETH logo and f) stent prototype printed with PEGDMA 8k and BAPO-PEG, reverting to their original shapes upon heating to ∼80 °C. The 3D printed objects (at 0 s) were deformed at ∼80 °C and fixed in that shape by cooling to -20 °C.
All resins showed excellent printability (Figure ), and we further used them to print cylindrical specimens for compression tests. The compression properties displayed similar trends in MW-modulus relationship compared to the tensile test results, and no substantial difference was observed between the BAPO-PEG and BAPO/NVP systems (Figure ). The compression modulus reached 80 MPa with the PEGDMA 6k, while the value for PEGDMA 2k was only about 10 MPa (Figure ). Similarly to the tensile test results, the 3D printed product from PEG 700 was brittle and much weaker than the high MW polymer networks.
Given that the PEG macromonomers are semicrystalline with a melting point (Tm) in the range of 40-60 °C (Figure ), their 3D printed networks may offer shape-memory properties. To test this, we printed an ETH logo and a stent prototype from PEGDMA 6k with 2.0 wt% BAPO-PEG. As the 3D printed products have high flexibility above the Tm of PEG, they were easily deformed at ~80 °C and fixed in the temporary shape by cooling to -20 °C. When heated again above the Tm of PEG, the deformed ETH logo and stent prototype fully recovered their original shapes in a few seconds (Figures and ). Therefore, these PEG macromonomers hold promise for the design of personalized shape-memory medical devices thanks to their excellent printability by heat-assisted DLP.
The PEG hydrogels were prepared by incubating the 3D printed specimens in PBS pH 7.4 at 37 °C for 24 h to reach equilibrium swelling. Their mechanical properties were studied by performing tensile and compression tests. As shown in Figures , a marked increase in elasticity was observed when raising the MW of PEG for resins prepared with BAPO-PEG and BAPO/NVP. The Young's modulus (Et) decreased from more than 30 MPa to less than 1 MPa, while the elongation at break (εbreak,t) increased from about 5% to around 70%, when the PEG MW was increased from 700 to 10,000 g mol -1 . This is associated with the higher flexibility of long PEG chains, as well as with the lower crosslinking density and thus higher water uptake of the hydrogels (Figure ). Although the 3D printed hydrogels prepared with BAPO-PEG and BAPO/NVP showed similar Young's moduli for the same chain length of PEGDMA, the tensile strength of the objects printed with BAPO-PEG was higher in the MW range of 700 to 4000 g mol -1 . For example, the maximal tensile stress (σmax,t) of the PEGDMA 2k hydrogel prepared with BAPO-PEG was 1.7 MPa vs. 0.9 MPa for BAPO/NVP (Figures and ). This is likely due to the star-shaped structure of the polymeric photoinitiator with long PEG chains (2000 g mol -1 per BAPO group), which can enhance the photocrosslinking and thereby strengthen the hydrogel networks despite the low concentration. When the polymer networks were more flexible (MW 6000 - 8000 g mol -1 ), this effect became unnoticeable.
However, for PEGDMA 10k, the BAPO-PEG hydrogels became weaker than the BAPO/NVP hydrogels (Figure ). This phenomenon was more pronounced in the compression tests (Figures and ). BAPO/NVP hydrogels based on PEGDMA 10k showed much higher compressive toughness (Tc) than BAPO-PEG hydrogels using the same macromonomer (0.29 vs. 0.05 MJ•m -3 , Table ). This can be attributed to the relatively low methacrylation degree of PEGDMA 10k (71% vs. 80-90% for the others) and thus fewer crosslinking sites. In the case of BAPO/NVP hydrogels, the addition of NVP can promote the photopolymerization and strengthen the network. For the hydrogels based on BAPO-PEG (Figure ), the swelling ratio increased from 150% to about 500% when the PEG MW increased from 2000 to 10,000 g mol -1 . This was confirmed by the change in size of the swollen samples (Figures and ). The swelling ratios of the 3D printed hydrogels using BAPO/NVP were in general similar to those of the BAPO-PEG hydrogels. To compare our approach with molding or conventional DLP printing from PEG aqueous solutions, we prepared PEG hydrogels using a water-soluble photoinitiator, lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP), bearing the monoacylphosphine oxide moiety (Figure ). By molding with PEGDMA 6k at 70% water content via photo-crosslinking, the equilibrium swelling ratio of the obtained sample was 570%, much higher than that via "water-free" 3D printing (~360%). As a result, the former hydrogel was very weak, with a Young's modulus of only 0.4 MPa, while the value for the "water-free" 3D printed hydrogel was 1.2 MPa (Table , Figure ). We also printed PEGDMA 10k hydrogels with 80% water content by DLP at room temperature. To enhance the printability, NVP (8.0 wt%) was added to the resin.
As expected, the swelling ratio of the 3D printed hydrogels was much higher than that obtained by "water-free" 3D printing (400-500%, Table ) and reached 1400% (Figure ), which means that the equilibrium water content of the hydrogel was as high as 93%. Clearly, the "water-free" approach can prevent the over-swelling of the hydrogels that negatively impacts their mechanical strength. This is likely due to the much denser polymer networks formed by the direct crosslinking of the PEG macromonomers in the absence of large amounts of water molecules. In addition, the "water-free" 3D printing avoids the aforementioned light scattering and layer separation issues that are commonly encountered in SLA/DLP printed PEG hydrogels, which result in low-resolution prints (Figure , inset).
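The conversion between swelling ratio and equilibrium water content quoted above follows directly from the gravimetric definitions. A small sketch, assuming the swelling ratio is defined as the mass of absorbed water relative to the dry polymer mass (expressed in %):

```python
def equilibrium_water_content(swelling_ratio_pct):
    """Equilibrium water content EWC (%) = m_water / m_swollen * 100,
    given SR (%) = m_water / m_dry * 100 (gravimetric definition
    assumed for this illustration)."""
    return 100.0 * swelling_ratio_pct / (swelling_ratio_pct + 100.0)

print(equilibrium_water_content(1400))  # "with-water" print, ~93% water
print(equilibrium_water_content(300))   # ~75% water, cf. the dual-macromonomer hydrogels
```

Under this assumed definition, the 1400% swelling ratio of the "with-water" print corresponds to the ~93% equilibrium water content stated in the text, while the "water-free" prints (400-500%) stay near 80%.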
Although heat-assisted DLP allowed the fabrication of PEG hydrogels with much stronger mechanical properties compared to traditional molding or "with-water" 3D printing, the maximal stress remained quite low for most of the elastic ones based on high MW PEG (e.g., 0.6-0.7 MPa for PEGDMA 8k). To improve their mechanical properties by promoting hydrogen bonding, PEG 6000 functionalized with carboxylic groups at both chain ends (PEGCOOH, Figure ) was added to the resin. The addition of 5% PEGCOOH increased the mechanical strength of the hydrogels by about 25%. However, the positive impact of hydrogen bonding vanished upon further increasing the amount of PEGCOOH (Figures and ), likely because of the decrease in crosslinking density.
Alternatively, we employed a "dual-macromonomer" strategy to combine the elasticity of high MW PEG and the strength of low MW PEG. A series of resins containing varying ratios of PEGDMA 10k and PEGDA 700 were formulated and printed into hydrogels after swelling in PBS (Figure ). As shown in Figures , PEGDA 700 greatly improved the mechanical strength of the 3D printed hydrogels, with maximal tensile stress up to ~1.5 MPa (Figure ). With 10 wt% of PEGDA 700, the Young's modulus of the hydrogel was about 2.4 MPa, while the elongation at break was ~50%. On the other hand, the compression modulus (Ec) reached 3.2 MPa (Figures and ) and the compressive toughness was nearly 1.0 MJ•m -3 , which is two- to four-fold higher than that of single-PEGDMA hydrogels with relatively high elasticity made from PEGDMA 4k-8k (Figure and Table ). The swelling ratio was still high enough to keep the water content at ~75% (Figure ). As a result, the dual-macromonomer hydrogels were stronger and tougher than the single-PEGDMA hydrogels (Figure ).
), but also reduced the elongation at break to 40% (Figures and ). At 30 wt%, the objects became brittle, similar to the PEGDMA 2k hydrogel, due to the higher ratio of short network units, with the molar fraction of PEGDA 700 exceeding 80%. To further confirm the improvement of the mechanical properties brought by bimodal PEG networks, we compared the Young's modulus-polymer content relationships of the 3D printed products from different compositions, because the Young's modulus of hydrogels is known to be highly dependent on the polymer content after swelling. As shown in Figure , the Et of the dual-macromonomer hydrogels as a function of crosslinked polymer content showed a strong positive correlation, with a slope higher than that obtained for single-macromonomer hydrogels (prepared with BAPO-PEG or BAPO/NVP), which confirmed the improvement of the mechanical properties created by the bimodal network structure.
To further test the printability of various PEG formulations, we produced different complex structures by heat-assisted DLP (Figures and ). The 3D printed objects with gyroid architecture or D-prime surface showed high resolution and a fine surface finish. After incubation in PBS pH 7.4 at 37 °C for 24 h, all objects maintained their structures with even better surface smoothness due to hydration. Moreover, no shape deformation, which commonly occurs in conventional hydrogel printing, was observed (Figures ). As shown in Figure , the average layer thickness of the object printed from PEGDMA 10k - PEGDA 700 (90/10, w/w) was about 58 µm, which is very close to the set value (50 µm). The hydrogel layer thickness was 85 µm, with a size expansion ratio of ~1.5. This is consistent with the results obtained in the swelling test of the cylinder specimens. If needed, the layer thickness could be set to lower values (e.g., 25 µm) to further improve the z-resolution. Furthermore, we printed models of the human meniscus and ear, which are representative cartilage-based structures. Both of them displayed high fidelity and a smooth surface (Figures and ). We further fabricated small-sized complex architectures using a trabecular model (φ 3.20 mm × h 2.03 mm) with microchannels obtained by the CT scan of a trabecular bone core from a horse femur, as described previously. It was found that the microstructure of the bone scaffold was well printed, with a minimal feature size down to 50-60 µm (Figure ). The average print size deviation was calculated to be φ -
× h -0.3%. After equilibrium swelling, the hydrogel model showed excellent surface quality and the minimal feature size was determined to be 80-90 µm (Figures and ). Such high-resolution complex hydrogels would be hard to obtain by extrusion 3D printing or conventional "with-water" vat photopolymerization, especially when the water content is relatively high (> 70%).
Volumetric printing has recently emerged as a powerful technique for rapid 3D (bio)fabrication; it avoids layer-by-layer stacking and thus does not require low resin viscosity or high mechanical strength of the printed object. Therefore, we tested the printing of PEGDMA 8k at 50% water content on a tomographic volumetric printer. Prior to the printing, the photocrosslinking efficiency of the PEGDMA water-borne resins was evaluated by in situ photo-rheology under UV irradiation at 365 nm (Figure ). Although volumetric printing is much faster than heat-assisted DLP, it was not possible to achieve good printing quality of the bone model. The model was further modified to expand the size of the microchannel by ~4 fold to eventually enable the printing, as shown in Figure . This test further confirmed the great advantage of heat-assisted DLP in high-resolution 3D printing of structurally complex hydrogels.
To explore the potential of our 3D printing approach in biomedical applications, we tested the in vitro cytotoxicity of the 3D printed PEG networks using different resin formulations. The 3D printed discs from PEGDMA 10k - PEGDA 700 (80/20, w/w) or PEGDMA 6k - PEGCOOH (90/10, w/w) were incubated with A549 cells for 48 h using Transwell ® inserts, and the cell viability was determined by the MTS assay and compared to the negative control. As shown in Figure , ~100% cell viability was observed for the hydrogel discs, indicating the cytocompatibility of the tested PEG networks.
Motivated by these results, we further produced the bone-mimicking hydrogel scaffolds printed from PEGDMA 10k - PEGDA 700 (90/10, w/w), and evaluated their bioactivity for tissue engineering applications (Figure ). The scaffolds were scanned by micro-computed tomography (micro-CT) at high resolution (17 µm). A biomimetic porous architecture (Figures and ) with a pore size in the range of 100-500 µm was achieved. We reasoned that the small amount of residual methacrylate groups (5-10%) on a pre-formed hydrogel should be enough for covalent fixation of cell-adhesive motifs, despite the high double bond conversion of PEGDMA (Figures and ). To this end, fibronectin-derived cysteine-containing arginylglycylaspartic acid (RGD) peptides (N-C: CGRGDS) were conjugated to the scaffold surface via Michael addition between the thiols and the residual methacrylates at a concentration of 10 mM (Figure ), using a reported method. After surface functionalization, the scaffolds were thoroughly washed to remove unreacted RGD, since soluble RGD has been shown to inhibit cell adhesion. For cell seeding experiments, human mesenchymal stem cells (hMSC) were selected as the precursor for in vitro bone tissue formation. However, initial attempts at direct hMSC seeding atop the PEG scaffolds were unsuccessful, since the cells rapidly sedimented to the bottom through the pore space due to gravity. To better control the spatial distribution of cells in the scaffolds, we used type I collagen hydrogel as a temporary supportive matrix. Notably, we found that an optimal collagen concentration is key to efficient cell penetration and spatial distribution within the scaffold. Cell seeding using a soft collagen (1.0 mg mL -1 ) enabled efficient cell penetration into the pore space. Initially, the cells were embedded in a 3D environment. Over time, they migrated out and attached to the scaffold surface (Figures and ).
Moreover, increasing the cell seeding density from 0.7 × 10 6 mL -1 to 3 × 10 6 mL -1 promoted faster migration of the cells out of the collagen matrix. A live-dead assay confirmed that cells seeded at this high density remained highly viable at day 7 (95.3% ± 4.4%, Figure ), indicating excellent cytocompatibility of the scaffolds. Some of the cells spread on the scaffold surface, exhibited osteoblast-like morphologies, and lined up to form a monolayer that mimics bone-lining cells (Figure ). This was also observed in the confocal microscopic image of seeded cells stained for actin and nuclei (Figure ).
We introduced a heat-assisted DLP technique for the 3D printing of geometrically complex PEG-based networks and of the corresponding hydrogels after swelling. Using BAPO-conjugated 4-arm PEG as the photoinitiator, a series of PEG macromonomers with MW ranging from 2000 to 20,000 g mol -1 were printed by DLP at elevated temperature. The 3D printed PEG networks showed MW-dependent mechanical properties, and high MW PEG (6000-10,000 g mol -1 ) provided much higher mechanical strength and elastic modulus than commercial PEGDA (700 g mol -1 ). Thanks to the semicrystallinity of PEG polymers, 3D printed objects showed shape-memory behavior. After post-printing swelling, the corresponding hydrogels also showed MW-dependent elasticity, while the 4-arm polymeric photoinitiator resulted in higher strength of low MW PEG (700-4000 g mol -1 ) hydrogels compared to hydrogels prepared with commercial BAPO. Importantly, a dual-macromonomer strategy allowed the 3D printing of mechanically tough hydrogels by combining high and low MW PEG macromonomers. Using this technique, tough cytocompatible hydrogels with bone-like structures were fabricated and seeded with cells for potential application in tissue engineering.
We envisage that PEG-based biodegradable hydrogels may also be obtained by heat-assisted DLP using various PEG-polyester block photopolymers, which would provide even stronger mechanical properties. Moreover, the 3D printed PEG hydrogels may be further toughened by introducing a secondary network post-printing, depending on the specific application. In principle, most photocrosslinkable synthetic or natural polymers with an appropriate melting point (e.g., 40-70 °C) would be suitable for hydrogel fabrication using heat-assisted DLP, which may greatly expand the scope of hydrogel additive manufacturing. Our approach brings new perspectives for the high-resolution 3D printing of geometrically complex PEG hydrogels, which may advance the fabrication of personalized bioactive scaffolds and medical implants.
Synthesis of PEGDMA. PEG polymers were first dried under vacuum at 70 °C overnight prior to the reaction. Take PEGDMA 6k as a representative example: PEG 6000 (60.0 g, 0.01 mol) was dissolved in 200 mL anhydrous DCM in a round bottom flask, followed by the addition of Et3N (4.2 mL, 0.03 mol).
The solution was purged with nitrogen for 15 min. Next, methacryloyl chloride (3.0 mL, 0.03 mol) was added dropwise to the solution under stirring and cooling in an ice bath. The ice bath was removed once the exothermic heating had subsided, and the reaction was left to run for 3 days at room temperature. Afterwards, the reaction mixture was filtered and the filtrate was purified on an alumina column. With vitamin E (0.18 g) added to prevent premature cross-linking, the solution was concentrated and precipitated in diethyl ether twice. After drying under vacuum overnight, 51.0 g of a white powder was obtained. The methacrylation conversion was determined to be 82% by 1 H NMR spectroscopy. In some cases (e.g., PEGDMA 2k and 4k), extraction was used instead of alumina column purification.
PEG 6000 (20.0 g, 3.3 mmol), dried under vacuum at 70 °C overnight, was dissolved in 60 mL anhydrous DCM in a round bottom flask, followed by the addition of Et3N (2.4 mL, 16.7 mmol) and DMAP (0.4 g, 3.3 mmol). The solution was then purged with nitrogen for 15 min. Next, succinic anhydride (1.69 g, 16.7 mmol) was added in portions and the reaction was left to proceed for 24 h. The mixture was washed with 1 M HCl solution and water, and dried over anhydrous MgSO4. The solution was then concentrated and precipitated in diethyl ether twice. The obtained powder was then dried under vacuum overnight to give 15.6 g of the product. The degree of functionalization was determined to be 95% by 1 H NMR spectroscopy.
Synthesis of BAPO-PEG. The 4-arm PEG 10,000 was first methacrylated to PEG 10,000 tetramethacrylate (PEGTMA) using the same method as for PEGDMA. The 4-arm PEGTMA (4.0 g, 0.4 mmol) was dissolved in 30 mL ethanol in a Schlenk flask, followed by the addition of bis(mesitoyl)phosphane (BAP-H, 0.63 g, 1.9 mmol) and KOtBu (0.02 g, 0.18 mmol). The phospha-Michael addition reaction was run for 5 days at 60 °C under argon. The reaction mixture was neutralized with 2 M HCl solution and oxidized with 0.6 mL of tert-butyl hydroperoxide (TBHP) solution (5.5 M in decane) at room temperature. From this stage onwards, all following steps were conducted under exclusion of light. The mixture was purified on an alumina column using DCM as an eluent. After drying under vacuum, 3.5 g of light-yellow powder was obtained.
Differential scanning calorimetry (DSC) analysis was performed using a TA Q200 DSC (TA Instruments-Waters LLC). The samples (ca. 10 mg) were placed on Tzero hermetic pans (TA Instruments-Waters LLC) and exposed to heat-cool-heat cycles from -80 to 200 °C under nitrogen flow (50 mL min -1 ) using heating and cooling rates of 10 °C min -1 . Data were analyzed using TA Instruments Universal Analysis 2000 software (5.5.3). Fourier-transform infrared (FTIR) spectra were recorded on a Perkin-Elmer Spectrum 65 (Perkin-Elmer Corporation) in transmission mode in the range of 600 to 4000 cm -1 . Viscosity measurements were performed using a HAAKE RheoStress 600 rotational rheometer (Thermo Electron Corporation) with cone and plate geometry (35 mm/2°). Viscosity was determined at a shear rate of 100 s -1 in the temperature range of 70-100 °C, applying a temperature ramp of 0.05 °C s -1 or -0.05 °C s -1 , with the Thermogap function enabled.
The STL files for 3D printing were downloaded from Thingiverse and Allevi3d or designed with Tinkercad from Autodesk, Inc. A commercial DLP 3D printer (Asiga PICO2) with a 405 nm LED light source, a customized resin tray, and a printing head with heating functions was used to fabricate all the objects. Post-curing of the 3D-printed products was conducted in a CL-1000 Ultraviolet Crosslinker UV chamber (UVP, Ultra-Violet Products Ltd.).
DLP resins were prepared by mixing the macromonomers (PEGDMA or PEGDA), BAPO-PEG (2.0 wt%) or BAPO-PEG-LN (1.0 wt%) or BAPO (1.0 wt%), Sudan I (0.03 wt%), and vitamin E (0.3 wt%). The resins were sonicated at 70 °C until a homogeneous mixture was obtained. The printing was performed at a temperature of 90 °C (40 °C for PEGDA 700), with a layer thickness of 50 µm and an exposure time of 3-6 s. The light intensity of the printer LED was 25.67 mW cm -2 . After the printing, the printed objects were cleaned in acetone and 2-propanol, and then cured in the UV chamber for 15 min. In the case of "with-water" printing, LAP (1.0 wt%) was used as the photoinitiator and NVP (8.0 wt%) was used to dissolve Sudan I before adding it to the photopolymers.
Rheological measurements of the resins were performed on a modular photo-rheometer MCR 302 (Anton Paar) using a parallel plate with 20 mm diameter. To mimic the volumetric printing conditions, the temperature was set to 4 °C. During time sweep measurements with an interval of 6 s for 5 min, UV curing of the resins was induced after 60 s by illumination with a UV-LED lamp (Thorlabs, λ = 365 nm, light intensity = 10 mW cm -2 ), using 45 μL of resin and a gap of 0.1 mm. The storage modulus (G') and loss modulus (G") were recorded to assess the crosslinking ability and terminal stiffness of the resins. To prevent drying, wet tissue paper was placed within the temperature chamber.
The 3D printing resin was irradiated with a round spot shape LED light (405 nm) on the 3D printer with exposure times ranging from 2 to 20 s, and the thickness of the crosslinked layer was measured using a caliper. Penetration depth (Dp) was calculated according to Jacobs' equation, based on the Beer-Lambert law (Equation ):
Cd = Dp ln(E0/Ec)

where Cd is the depth/thickness of the cured resin, E0 is the energy of light at the surface, and Ec is the "critical" energy required to initiate polymerization. A semilog plot of Cd vs. E0 produces a straight-line curve with a slope of Dp and an x-intercept of Ec. The exposure time was chosen based on the Dp and Ec related to the desired part properties.
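The working-curve fit described above (Cd linear in ln E0, slope Dp, x-intercept Ec) can be sketched in a few lines. The data below are synthetic, not the measured layer thicknesses; only the 25.67 mW cm -2 LED intensity is taken from the printing conditions stated in the text.

```python
import numpy as np

def fit_jacobs(E0, Cd):
    """Fit the Jacobs working curve Cd = Dp * ln(E0 / Ec).

    A semilog plot of cure depth Cd against exposure energy E0 is a
    straight line with slope Dp (penetration depth) and x-intercept Ec
    (critical energy). Returns (Dp, Ec) in the units of the inputs.
    """
    E0, Cd = np.asarray(E0, float), np.asarray(Cd, float)
    slope, intercept = np.polyfit(np.log(E0), Cd, 1)  # Cd = slope*ln(E0) + intercept
    Dp = slope
    Ec = np.exp(-intercept / slope)                   # Cd = 0 at E0 = Ec
    return Dp, Ec

# Synthetic example: intensity from the printer LED, exposure times 2-20 s
I = 25.67                                  # mW cm^-2
t = np.array([2, 4, 8, 12, 16, 20], float)
E0 = I * t                                 # mJ cm^-2
Cd = 0.15 * np.log(E0 / 20.0)              # hypothetical cure depths, mm
Dp, Ec = fit_jacobs(E0, Cd)
print(f"Dp = {Dp:.3f} mm, Ec = {Ec:.1f} mJ/cm^2")
```

For exactly log-linear data the fit recovers the generating parameters (Dp = 0.15 mm, Ec = 20 mJ cm -2 ); with real caliper measurements the least-squares fit averages out the scatter.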
For volumetric printing, a Tomolite printer from Readily3D and the updated Apparite software were used. To optimize printing parameters towards clearly defined constructs, a laser dose test was conducted for each resin. Defined spots were created by the laser beam (λ = 405 nm) on the wall of a quartz cuvette filled with resin, for exposure times ranging from 4 to 16 s and average light intensities ranging from 4 to 16 mW cm -2 . The light dose threshold (mJ cm -2 ) required for precise photopolymerization with minimal off-target exposure was calculated by multiplying the exposure time by the average light intensity of the weakly visible polymerized spots in the cuvette. Construct printing was performed in a glass vial with a 10 mm diameter in 1 mL of resin. The printed objects were washed with PBS pre-warmed to 37 °C.
Tensile and compression tests were performed using a TA.XTplus texture analyzer (Stable Micro Systems). Tensile tests were carried out on dog-bone shaped 3D printed specimens (ASTM 638 type IV) with a gauge length of 13 mm at a rate of 0.15 mm s -1 . Every material was tested at least in triplicate.
The test was performed in triplicate, with three samples of 3D printed discs (H 0.8 mm, Ø 5.5 mm) for each replicate. The 3D printed discs were washed with acetone for 30 min, then with PBS pH 7.4 overnight, dried under vacuum for 24 h at room temperature, cured in the UV chamber for 20 min, and soaked in medium for 20 min before the incubation. A549 cells were seeded in a 24-well plate at a seeding density of 50,000 cells per well. The 3D printed discs were then placed on top of Transwell ® inserts in the wells. As a positive control, cells were incubated in medium with 100 mM H2O2, while medium alone served as the negative control. Cell viability was determined by the MTS assay after 48 ± 1 h of incubation (37 ± 1 °C, 5% CO2). The Transwell ® supports and medium were removed, the wells were washed with PBS, and MTS reagent was added into the wells. Absorbance was measured after 45-60 min of incubation (until A = 0.6-0.8) using a spectrophotometric plate reader (490 nm, TPP24fT, without lid, linear shaking 10 s, amplitude 1.5 mm, number of flashes 25, settle time 5 ms).
3D printed scaffolds were scanned at high resolution (17 μm) on a microCT 45 (Scanco Medical AG) operated at an energy of 45 kilovolt peak (kVp), an intensity of 177 µA, an integration time of 595 ms, and frame averaging of 1. Using the Image Processing Language (IPL) software (Scanco Medical AG), scaffolds were segmented from the background using a global threshold. After reconstruction, a 3D
Small 3D printed femur models (d = 3.20 mm, h = 2.03 mm; n = 3) were sequentially sterilized by UVA irradiation and 70% ethanol for 15 min each. After washing 3 times in PBS, they were functionalized with a fibronectin-derived arginylglycylaspartic acid (RGD) peptide (China Peptides, N-C: CGRGDS) at 10 mM in PBS (pH 7.94) to promote cell attachment. Samples were incubated in this solution at 37 °C overnight before washing 3 times with PBS to remove unreacted RGD. For 3D culture, a collagen type I hydrogel was prepared from an 8.91 mg mL -1 stock solution (rat-tail, Corning) as described by Shin et al. Briefly, the collagen stock solution was mixed with 10% of 10× PBS and cells in osteogenic medium to obtain a final collagen concentration of 1 mg mL -1 and a cell concentration of 3 × 10 6 mL -1 .
For live/dead cell imaging, two scaffolds were washed in PBS and stained for 15 min in a solution containing Calcein Green AM (CaAM, 1:500, Sigma-Aldrich) and Ethidium-homodimer-

The compressive toughness (Tc) was obtained as the integration area of the compression stress-strain curves. d Full stress-strain curves could not be obtained from the compression test of PEGDA 700 hydrogels due to their extremely brittle nature. e Triplicates could not be obtained from the tensile test of PEGDA 700 -BAPO/NVP hydrogels due to their extremely brittle nature.
Ionic liquids (IL), i.e., organic salts with melting points below about 100 °C and often near or below room temperature, have been widely used in analytical chemistry in the last decades . IL are stable, nonvolatile, and liquid over a wide temperature range. Some IL form stable thin films. This makes it possible to use them as liquid stationary phases (SP) for gas chromatography (GC). In this role, IL demonstrate high polarity together with excellent thermal stability . IL are widely used for the separation of various mixtures . The selectivity and retention behavior of various IL were reviewed by Yao et al. . Various IL are used as gas chromatographic SP: for instance, derivatives of imidazolium, phosphonium, pyridinium, and guanidinium can be employed . The structures of various IL-based SP are reviewed in Ref. . Several types of IL-based GC columns are commercially distributed by Supelco (owned by Merck Group). These columns are used for various separations .
For use in gas chromatography-mass spectrometry (GC-MS), an SP should be particularly thermostable and non-volatile in order to ensure low background noise. For less volatile, heavy, and polar analytes, the SP has to be stable at higher temperatures. In Ref. , it was demonstrated that some imidazolium-based IL can be used for GC-MS at temperatures up to 300 °C and exhibit background noise considerably lower than polyethylene glycol-based SP (PEG) and comparable to the non-polar HP-5ms SP.
Methods that predict chromatographic retention using the analyte structure as an input are usually referred to as quantitative structure-retention relationships (QSRR) . One application area of this method is non-target GC-MS analysis, where a mass spectral library search is combined with rejection of false candidates. QSRR can also be considered a method that provides insight into chromatographic separation . When predicting a retention index (RI) from molecular descriptors (i.e., numerical values that characterize the structure of a molecule), the contributions of particular molecular descriptors (MD) and the set of selected MD can provide valuable information about the nature of separation, and such a model is considered interpretable . Almost all work on QSRR for GC is limited to the most typical and well-characterized polymeric SP. Under liquid chromatography conditions, more factors influence retention, and the use of QSRR to study the separation mechanism is even more common . QSRR is also used as a convenient benchmark task for developing and demonstrating chemometric, statistical, and machine learning methods.
Many hundreds of MD are available by means of commercial and open-source software . Various types of MD and their use in QSRR for GC-MS are reviewed in Ref. . Diverse machine learning methods (such as support vector machines , gradient boosting , neural networks ) are used for QSRR, but linear regression methods are the most often used . Various feature selection approaches can be applied in quantitative structure-property research (in particular in QSRR) . Feature selection is especially important when an interpretable model with chemical meaning is required.
Despite the existence of a large number of QSRR studies, most of them use small data sets (fewer than 1000 compounds) and usually do not address whether the obtained results remain reproducible if the data set is slightly changed. For example, in Ref. , the authors draw qualitative conclusions about retention from a set of MD chosen using sequential selection, but do not study whether the MD selection procedure is reproducible or whether the same MD set would be chosen if the data set were slightly distorted. If a method is unstable with respect to insignificant changes in the data set and random factors, it may lead to misleading conclusions.
Any QSRR study requires a large enough data set of retention values (retention time (RT) or RI) of diverse compounds, and the diversity of the data set affects the results . To the best of our knowledge, such data sets are not available for IL-based SP. For each SP, retention data are available for only a very small number of compounds. Usually these are data on test mixtures for the determination of polarity or solvent parameters, or data on several very similar compounds. To the best of our knowledge, there are very few works on RI prediction and QSRR for IL-based SP, and all of them are focused on one specific class of chemical compounds. In Ref. , QSRR for polychlorinated biphenyls on IL-based SP is considered. There are also some works that predict the chromatographic properties of IL based on their structure, rather than predicting the retention on a given IL based on the structure of the analyte. We focus on the latter task: predicting the retention of diverse compounds on a given IL-based SP.
The majority of previous works devoted to RI prediction consider polydimethylsiloxane, 5%-phenyl-methylpolysiloxane, or PEG. For these SP, very large data sets are available, which allows the development of accurate and versatile prediction models; the RI predicted for these common SP can then be used as MD in models developed for other SP. In this work, we investigate whether the predicted RI for PEG is applicable as an MD for the prediction of RI for IL-based SP.
Since there are still no sufficiently large and diverse data sets and QSRR studies for such SP, this work aims to fill this gap by constructing a moderately large, structurally diverse retention data set of compounds of various classes for IL-based SP and providing a QSRR study using this data set.
Experimental RT and RI were acquired for three promising monocationic and dicationic IL-based SP containing polysubstituted pyridinium cations. This work also pays special attention to the reliability and reproducibility of the QSRR study. We tested whether small distortions of the data sets, such as randomly removing several compounds or adding minor noise to the values, could affect the conclusions of the QSRR study.
A collection of 181 organic compounds of diverse chemical nature was used: aromatic and aliphatic alcohols, aldehydes, ketones, heterocycles, and various halogenated compounds. A full list of compounds is provided in Supplementary Material, section S1, and all experimental RT and RI are provided in the online repository . Most of the compounds were purchased from Sigma-Aldrich and several from other vendors. The purity of each compound and the correctness of the structure were checked by GC-MS (electron ionization) by matching observed spectra against a mass spectral database and matching RI on standard polar and non-polar SP against reference values (when available). The NIST 17 database was used for this purpose. A standard mixture of n-alkanes C7-C40 (1000 μg/ml of each component in hexane, Sigma-Aldrich) was used as the reference for RI determination. Acetonitrile (UHPLC-Supergradient PAI-ACS, Panreac) was used to dissolve standard compounds.
1 μl of each liquid analyte was dissolved in 0.9 ml of acetonitrile; 1.5 mg of each solid analyte was dissolved in 1 ml of acetonitrile. Analyses were carried out using a Shimadzu GCMS-TQ8040 instrument (Shimadzu). We mixed up to 10 compounds in one solution (partial concentrations as given above), and in those cases where the peak annotation was not absolutely unambiguous, we remeasured solutions of individual compounds. We measured all compounds using three columns with IL (see below), as well as HP-5 (30 m, 0.32 mm×0.25 μm, Agilent) and SH-Stabilwax (30 m, 0.25 mm×0.1 μm, Shimadzu) columns. The numbers in brackets denote the column length, inner diameter, and thickness of the SP layer, respectively. Measurements were made with the standard polar and non-polar SP in order to obtain spectra for comparison, as well as to verify that the observed RI match the reference ones. 0.5 μl of the solution was injected into the GC-MS instrument; in order to measure n-alkane RI, a mixture of n-alkanes was added to the sample solution.
GC-MS analyses were carried out under the following conditions. Temperatures of the injector and ion source: 250 °C and 200 °C, respectively; carrier gas: He; flow control mode: constant linear velocity; flow rate: 0.6 ml/min; injection split ratio: 1:50. The oven temperature was programmed as follows: raised from 50 °C to 240 °C at a rate of 8 °C/min and then kept constant for 15 min. The mass spectrometer was operated in electron ionization (EI) mode at 70 eV; scan rate: 1666 units/s; mass range: 44-500 m/z.
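Under a temperature program like the one above, RI against the n-alkane ladder are conventionally computed as linear (van den Dool and Kratz) indices rather than logarithmic Kováts indices. A minimal sketch, assuming that convention; the retention times below are hypothetical, not the measured ones:

```python
import bisect

def linear_ri(rt, alkane_rts, alkane_carbons):
    """Linear (van den Dool & Kratz) retention index for
    temperature-programmed GC: interpolate the analyte retention time
    between the two bracketing n-alkanes.

    alkane_rts must be sorted ascending; alkane_carbons holds the
    matching carbon numbers (e.g. 7..40 for a C7-C40 standard mixture).
    """
    i = bisect.bisect_right(alkane_rts, rt) - 1
    i = max(0, min(i, len(alkane_rts) - 2))   # clamp to a valid bracket
    t_lo, t_hi = alkane_rts[i], alkane_rts[i + 1]
    n_lo, n_hi = alkane_carbons[i], alkane_carbons[i + 1]
    return 100 * (n_lo + (n_hi - n_lo) * (rt - t_lo) / (t_hi - t_lo))

# Hypothetical alkane retention times (min) for C9-C12
rts = [5.0, 7.0, 9.0, 11.0]
carbons = [9, 10, 11, 12]
print(linear_ri(8.0, rts, carbons))   # halfway between C10 and C11 -> 1050.0
```

Clamping the bracket index also allows mild extrapolation for analytes eluting just outside the alkane range, which is a common pragmatic choice.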
Three IL-based GC columns were used: Bis4MPyC6 (30 m, 0.22 mm×0.2 μm), Bis2MPyC9 (25 m, 0.22 mm×0.2 μm), and Hex4MPy (18 m, 0.22 mm×0.2 μm). The structures of the IL used in these columns are shown in Fig. . The IL were prepared according to the procedure from Ref. . Cations (in the form of bromide) were prepared by heating a mixture of the corresponding methylpyridine and a bromo- or dibromoalkane at 120 °C for 2-6 hours. The IL were then prepared by the reaction of the previously produced bromide with lithium bis(trifluoromethanesulfonyl)imide. The columns were prepared by the static high pressure technique at a constant temperature of 210 °C using tert-butanol as a solvent. The column preparation procedure is described in Ref. .
The RI for PEG, predicted from the molecular structure, was used as an MD for the subsequent prediction of RI for IL-based SP (see Fig. ); this is thus a supplementary task of this work. We used two methods for the prediction of RI for PEG. The first is a rather accurate deep learning model described in our previous works . In this case, a multimodal ensemble of two deep neural networks was used. The neural networks were trained using the NIST 17 database and take SMILES string representations of molecules, various MD, and molecular fingerprints as input representations. The models were described in the previous work and used in unchanged form. The newly developed CHERESHNYA software, described in this work, calls our previous software for prediction of RI for the DB-WAX column. This predicted RI value is further referred to as the RI_PEG_DL descriptor. The second approach is a linear model for prediction of RI on PEG. The following set of features was used: 243 2D MD and 84 functional group counters generated using the Chemistry Development Kit, version 2.7.1 (CDK) ; 208 2D MD of various types and 42 MQN (so-called Molecular Quantum Numbers ) generated using the RDKit library, version 2023.09.4; and 4860 Klekota-Roth substructure counters (Klekota-Roth counting fingerprint ). The first subset of features (functional group counters and CDK descriptors) was the same as used in our previous work . All MD were scaled to the range [0; 1]:

x_scaled = (x - x_min) / (x_max - x_min)
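Scaling descriptors to [0; 1] amounts to column-wise min-max normalization, which can be sketched as below. In a train/test setting the min/max would be taken from the training set and reused for new compounds; mapping constant columns to 0 is an implementation choice here, made to avoid division by zero.

```python
import numpy as np

def minmax_scale(X):
    """Scale each descriptor (column) of X to [0, 1]:
    x' = (x - x_min) / (x_max - x_min).
    Constant columns (x_max == x_min) are mapped to 0."""
    X = np.asarray(X, float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    return (X - lo) / span

# Toy descriptor matrix: 3 compounds x 3 descriptors (column 1 is constant)
X = np.array([[1.0, 10.0, 5.0],
              [3.0, 10.0, 7.0],
              [5.0, 10.0, 9.0]])
S = minmax_scale(X)
print(S)
```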
The NIST 2017 library was used as the training set. Preprocessing of the library is described in our previous work , and unsupported compounds were excluded as described there. For each compound, the median of all RI values for PEG was used. The compounds that were also measured

In the SEQ_ADD method, at the first stage, the MD most correlated with the target RI values is selected. Then, for each MD that has not yet been selected, an ordinary least squares (OLS) model is built, and the MD for which the f-factor (goodness of fit) is largest is selected. In this method, the f-factor was calculated using the following equation:
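The SEQ_ADD procedure (seed with the most correlated MD, then greedily add the MD that maximizes a goodness-of-fit f-factor) can be sketched as forward selection. Since the exact f-factor equation is not reproduced above, the standard partial F-statistic for nested OLS models is used here as an assumption.

```python
import numpy as np

def seq_add(X, y, k):
    """Forward (SEQ_ADD-style) descriptor selection sketch.

    Stage 1: select the MD most correlated with the target values y.
    Then repeatedly refit OLS with each remaining MD added and keep the
    candidate with the largest partial F-statistic (assumed here as the
    goodness-of-fit factor)."""
    n, p = X.shape
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)]
    selected = [int(np.argmax(corr))]

    def rss(cols):
        # residual sum of squares of an OLS fit with intercept
        A = np.column_stack([np.ones(n), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)

    while len(selected) < k:
        rss0 = rss(selected)
        best_j, best_f = None, -np.inf
        for j in range(p):
            if j in selected:
                continue
            rss1 = rss(selected + [j])
            dof = n - len(selected) - 2       # residual dof of the larger model
            f = (rss0 - rss1) * dof / rss1 if rss1 > 0 else np.inf
            if f > best_f:
                best_f, best_j = f, j
        selected.append(best_j)
    return selected

# Synthetic check: y depends on descriptors 0 and 2 only
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 2 * X[:, 0] + X[:, 2] + 0.01 * rng.normal(size=50)
sel = seq_add(X, y, 2)
print(sel)   # descriptors 0 and 2 are recovered
```

Rerunning such a selection on slightly distorted data (dropping a few compounds, adding small noise to y) is a direct way to probe the reproducibility concerns raised in the text.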