The computational funnel model extends high-throughput studies to complex materials properties by screening out materials after each step according to selection criteria based on the expected result. In theory, this means that the costliest calculations or experiments are performed only for the most promising candidates. Naturally, this process is more successful the faster and the more reliably undesired materials can be discarded.
Active learning provides one way of combining AI models and high-throughput workflows. The goal of this algorithm is to balance exploitation and exploration when selecting new data points, so as to optimize a property or better train an AI model for a given material property while finding a global optimum in a data-efficient manner. By using an acquisition function that balances exploration and exploitation to select which materials to calculate next, these frameworks can improve the efficiency of the studies by deciding which materials enter the funnel in a non-subjective manner (see the roadmap paper by Boley et al.). In essence, the active learning framework and the selection funnels pursue the same goal and can be used synergistically: active learning suggests which materials enter the workflows, and the high-throughput funnel removes the unpromising candidates after each step of the workflow.
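As a concrete illustration of such an acquisition step, the sketch below ranks candidate materials by an upper-confidence-bound score from a Gaussian-process surrogate and passes the top-ranked ones into the funnel. It is a minimal, generic sketch rather than the specific framework discussed here; the surrogate choice, the synthetic descriptors, and names such as `select_next_candidates` and `kappa` are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def select_next_candidates(X_labeled, y_labeled, X_pool, n_select=5, kappa=2.0):
    """Rank unlabeled candidates by predicted property plus an exploration
    bonus proportional to the model uncertainty (upper confidence bound)."""
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_labeled, y_labeled)
    mean, std = surrogate.predict(X_pool, return_std=True)
    score = mean + kappa * std          # exploitation + exploration
    return np.argsort(score)[::-1][:n_select]

# Toy usage with random descriptors standing in for materials features.
rng = np.random.default_rng(0)
X_known, y_known = rng.normal(size=(40, 6)), rng.normal(size=40)
X_candidates = rng.normal(size=(500, 6))
print(select_next_candidates(X_known, y_known, X_candidates))
```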
The main challenge in fully realising the potential of AI-guided workflows is integrating active learning schemes, together with the AI models and the acquisition functions they are based on, into advanced selection funnels. The criteria used for each step of the funnels are either based on an expected error bound of a lower-accuracy calculation or on a physics-informed heuristic, e.g., a material having too large an electronic band gap or being too dense. The end goal of the screening criteria is to reduce the overall cost and time of a study while still exploring the relevant parts of materials space.
While useful, the current screening criteria are a potential obstacle when combining active learning with high-throughput workflows. Because they are not necessarily derived from the data that underlies the AI model, an overzealous screening procedure can exclude materials that would drastically improve model performance and possibly correct an initial bias. Importantly, the heuristics used to screen out materials may not directly relate to the target property, but instead be controlled by an unknown third process, leading to an incorrect physical interpretation. Furthermore, adding selection funnels to active learning frameworks could perpetuate the initial bias of the models, as the dataset will be directed towards the existing conditions. One potential solution to this problem is to use multi-objective learning to simultaneously optimize both the screening criteria and the target property. However, a less complex solution would be preferable.
The final challenge with creating these workflows is to incorporate them into existing materials discovery frameworks. Currently, the tools used for materials discovery, such as AFLOW, atomate, and AiiDA, do not natively integrate such AI-guided schemes. Without native integration, multiple, potentially incompatible solutions must be created, leading to a less transparent ecosystem. An additional benefit of fully integrating these methods is an improved selection procedure. The use of cost-aware and efficient acquisition functions is becoming increasingly popular, and including the AI model training and selection steps inside the workflow libraries themselves will improve the estimated costs for these acquisition functions and multi-fidelity approaches. An expanded use of explainable AI methods presents a clear path to achieve the necessary combination of methods presented above. By learning the conditions for screening out materials from the AI models themselves, explainable AI attempts to expand the predictive power of machine learning models and to give insights into the relationship between the input physico-chemical materials features and target properties. These methods can relate either to the regression method used, e.g. linear or symbolic regression, or to post-processing techniques that uncover the relationships. By better understanding the connections between the input features and a target property, one can then replace the physics-derived heuristics with ones from the model itself.
We recently demonstrated the capabilities of this approach by creating an AI-guided workflow for finding thermal insulators. For this project we modelled the thermal conductivity, κL, of a material based on its structural, harmonic, and anharmonic properties (see Ref. for a complete list). We then applied feature importance metrics and found that only three inputs were important. From there, we were able to map the expected value of κL against each of these inputs to find the screening procedure highlighted in Figure . With this workflow we were able to efficiently find 16 predicted ultra-thermal insulators with a κL of less than 1 W/mK out of an initial set of 732 materials.
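The sketch below illustrates the general pattern of such a workflow: fit a surrogate for the target property, rank the input descriptors by feature importance, and keep only the dominant ones as screening criteria. It uses synthetic data and illustrative descriptor names, not the actual model, features, or thresholds of the published study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["avg_atomic_mass", "max_phonon_freq", "gruneisen_param",
                 "density", "volume_per_atom"]
X = rng.normal(size=(500, len(feature_names)))                           # synthetic descriptors
log_kappa = 1.5 * X[:, 1] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=500)   # synthetic target

# Fit a surrogate for log(kappa_L) and rank descriptors by impurity-based importance.
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, log_kappa)
ranked = sorted(zip(feature_names, model.feature_importances_), key=lambda t: -t[1])

# Keep only the most informative descriptors as candidate screening criteria.
top_descriptors = [name for name, _ in ranked[:3]]
print("candidate screening descriptors:", top_descriptors)
```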
To fully address the challenges associated with creating sustainable, AI-guided workflows, active learning techniques must be integrated into them. While the selection funnels can produce a list of hundreds of possible candidate materials, they cannot identify which predictions are the most important to validate next. However, by introducing an acquisition function, the workflows can maximize the quality of information gained per calculation or experiment. In turn, this will allow us to speed up the discovery of good materials for vital applications. More importantly, by redoing the feature importance study after each iteration, we can further refine the screening criteria and resume calculations that were initially discarded because they violated one of the old criteria.
AI-guided workflows have the potential to revolutionise materials discovery frameworks by focusing calculations or experiments on the most promising materials, and potentially removing the initial bias of data selection. By using an appropriate acquisition function to determine which experiments or computations to run next, we can automate these calculations. In turn, the researchers working on these problems can focus on further developing new methods instead of managing a large set of calculations. Furthermore, explainable-AI methods will help elucidate why the models decide which candidates to calculate next. With this insight, part of the physical mechanisms driving, facilitating, or hindering the different processes may also be understood. Finally, as the frameworks become better focused, the overall efficiency of these efforts will be significantly enhanced.
Recent developments in aberration-corrected electron optics, spectrometer, and detector technologies make it possible to capture multimodal signals within a single experiment in a scanning/transmission electron microscope (S/TEM) down to the atomic level. These advancements have greatly expanded our understanding of the atomic constitution of materials, largely driven by the rich and multimodal data streams. Spectroscopic techniques such as energy-dispersive X-ray spectroscopy (EDS) or electron energy-loss spectroscopy (EELS) can nowadays probe the local composition and electronic structure of complex materials at the atomic level. New scanning diffraction methods, termed 4D-STEM, capture 2D electron diffraction patterns at each probe position of the 2D raster grid and have made it possible to image light elements at atomic resolution and to determine local structures and strain with sub-nanometer precision. Furthermore, the spectroscopic and 4D-STEM techniques can be combined with tomographic approaches to obtain the 3D nature of materials. Advances in in situ probing capabilities and fast electron detectors make it possible to directly observe the dynamic evolution of materials under different external stimuli with high spatial and temporal resolution. The common theme of these techniques is that the experimental data is nowadays often represented as a three- or higher-dimensional data set, as shown in Fig. (left).
The ever-growing data complexity, size, and speed at which data are created in experiments renders human-based analysis not only impractical, but also largely limits the discovery of latent features, which often equip a material with a certain functionality. This has stimulated the development of automated computer-based and machine-learning analysis algorithms to harvest the rich information contained in the data and to turn the data into interpretable physical quantities. For example, principal component analysis and clustering were employed to automatically separate different phases in a bismuth ferrite sample at atomic resolution obtained from a multi-gigabyte 4D-STEM data set. The development of open-source data-analysis tools has been paramount for treating and interpreting multidimensional and large-scale data sets from different microscope manufacturers in an efficient manner and for providing flexible platforms towards on-the-fly data analysis even of big data sets.
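As a minimal sketch of that kind of analysis, the snippet below flattens each diffraction pattern of a (synthetic) 4D-STEM scan into a feature vector, reduces its dimensionality with principal component analysis, and clusters the probe positions into candidate phases. Array sizes and the number of components and clusters are placeholders chosen for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for a 4D-STEM scan: (scan_y, scan_x, detector_ky, detector_kx).
data = np.random.rand(32, 32, 64, 64).astype(np.float32)
ny, nx, ky, kx = data.shape

# One row per probe position, one column per detector pixel.
patterns = data.reshape(ny * nx, ky * kx)

# Dimensionality reduction followed by clustering of probe positions into phases.
scores = PCA(n_components=10).fit_transform(patterns)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
phase_map = labels.reshape(ny, nx)   # 2-D map of phase assignments across the scan
print(phase_map.shape)
```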
The main technical restrictions are:
1. handling, storage, and labeling of the data to enable reproducible data analysis;
2. human-based data analysis that often largely exceeds experimental time frames;
3. limited interdigitation of data acquisition and analysis;
4. a lack of automated or autonomous data analysis tools.
These technical restrictions often directly compromise material characterization and, with this, new material discoveries. One of the greatest challenges is the interdigitation of data-stream generation in a microscopy experiment and its direct analysis to provide live feedback to the researcher. Different approaches can be envisioned here, where parallelized high-performance computation (HPC) utilizing modern graphics processing unit (GPU) capabilities is performed directly at the microscope computer, or edge computing in a distributed system, where the HPC tasks are performed either on cloud servers or at HPC centres.
The broad variety of data streams utilized to probe materials, ranging from simple 2D images, to 3D or higher-dimensional hyperspectral data sets, to time series probing material evolution, to 3D tomographic reconstructions, requires the development of versatile and autonomous data analysis algorithms. Typically, advanced algorithms to reduce the dimensionality of hyperspectral data, segment or recognize patterns in images, and classify features in multidimensional data sets are employed as separate or sequential instances, as shown in Fig. (middle). It has been shown that unsupervised machine learning is capable of automatically segmenting different crystalline regions in atomic-resolution images and video sequences solely based on crystal structure symmetry, without requiring prior knowledge of the underlying structure. Using a trained Bayesian deep neural network, it is even possible to classify crystal structures in atomically resolved images and identify defective regions or interfaces by considering the uncertainty in the prediction. In a future direction, one would envision that novel big-data and machine-learning algorithms will be integrated in hybrid algorithm architectures that perform automatic or even autonomous tasks.
Advances in computing architectures for microscope laboratories are one side of the coin, but integrated or hybrid machine-learning-based algorithms need to be deployed alongside to enable automatic analysis of large-scale data. Recent developments in machine-learning and in particular deep-learning approaches in electron microscopy hold great promise for laying the foundation for autonomous data analysis and electron-microscope operation. Ultimately, the aim is to enable the discovery of new material phenomena and to probe the physical properties of materials and their evolution with atomic precision. Since the physical nature of electron wave propagation and interaction in a crystalline material is well understood, ground-truth training data for a deep learning model can be efficiently generated. However, a large deep-learning model would need to contain information not only on all known crystal structures and phases, but more importantly on different point, line, or planar defect types. Recognizing defects from supervised learning, however, is nearly impossible to achieve at the time of writing, since the atomic configurations existing in nature are not necessarily known or understood. Instead, a convolutional neural network can be trained on simulated images of pristine crystal structures, while still localizing and obtaining information on material imperfections. Figure shows the neural network representations obtained after dimension reduction of the fully connected layer before the classification, and the corresponding uncertainty of the prediction. Although the model was trained on pristine crystal structures, it is capable of distinguishing the different types of interfaces (here: grain boundaries), and the model uncertainty provides an indirect way to locate material imperfections. Approaches combining supervised, unsupervised, and active learning are needed to further explore regions in data sets with high uncertainty, which may represent an unknown interface structure or surface configuration. Furthermore, the classification tasks have to be extended to also consider local composition and electronic structure to fully exploit the data and yield a holistic picture of the physical nature of a material on the atomic level. Future models should enable live feedback at high time resolution to facilitate autonomous steering of the experiment and consider active re-training to include disturbed or unknown atomic structures.
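One common way to obtain such prediction uncertainties is Monte Carlo dropout, where dropout is kept active at inference time and repeated stochastic forward passes approximate a Bayesian predictive distribution. The sketch below shows this idea for a generic image-patch classifier; the network architecture, class count, and patch size are illustrative assumptions and not those of the cited work.

```python
import torch
import torch.nn as nn

class CrystalClassifier(nn.Module):
    """Minimal CNN for classifying atomic-resolution image patches."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(0.2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):
        return self.head(self.features(x))

def predict_with_uncertainty(model, patch, n_passes=50):
    model.train()                       # keep dropout stochastic (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack([model(patch).softmax(-1) for _ in range(n_passes)])
    return probs.mean(0), probs.std(0)  # mean prediction and per-class uncertainty

model = CrystalClassifier()
patch = torch.rand(1, 1, 64, 64)        # stand-in for an atomic-resolution image patch
mean_p, std_p = predict_with_uncertainty(model, patch)
print(mean_p, std_p)
```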
Big data in electron microscopy is already a reality and will play an increasing role in the future, not only for the sake of data acquisition, but to holistically characterize every single atom in a material, paving the way for atomic-scale materials discovery. Spectroscopic and scanning diffraction data sets (e.g. 4D-STEM) contain information on the elemental nature and the electronic and 3D structure of a material, and hence this information needs to be fully harvested. Technological advancements in computing infrastructure have to be developed in parallel with hybrid machine learning algorithms in electron microscopy laboratories to move away from incremental experimentation. Combinations of unsupervised and supervised learning approaches have the potential to automatically identify and label different crystal structures and atomic species in complex data sets and will eventually uncover latent patterns in an automatic fashion. This will guide scientists to interesting regions in a sample and accelerate the deployment of physical material models.
Here, ML has become a game changer for post-acquisition data analysis, such as image reconstruction, improvement of data by denoising and resolution enhancement, and structure recognition. One of the bottlenecks for the efficiency with which electron microscopes can generate materials knowledge is also the investment of time and highly microscope-specific human expertise required to align the instrument for optimal performance, especially when the materials question to be solved requires switching between different modes of operation. While some big-data-trained ML models have already been demonstrated to be capable of measuring aberrations very quickly, they are not yet capable of handling the complexity of a modern microscope which, for some instruments, requires managing more than 500 current supplies. A few groups are also applying ML methods towards real-time data analysis and automating experiments, as illustrated in Fig. for the case of STEM. In contrast to the field of cryo-electron microscopy, where fully automated experiments can run for multiple days by repeating the same image acquisition process for automatically exchanged samples at many pre-defined sample positions, the complexity of adaptive ML-driven experiments in materials science (S)TEM is much higher, given the inhomogeneity of most samples, the wide variety of signals to choose from and switch between, and the sequential process with which the data is acquired. Conventional ML methods used on already acquired data sets simply cannot be applied one-to-one. New ML approaches for real-time applications in electron microscopy are still rare and, most notably, their on-the-fly implementation on the microscope has so far not been realized. The controlled components shown here are the scan coils and a programmable phase plate (inspired by the commercially available design by adaptem.eu), but they could also be lens currents, aberration-corrector settings, etc.
In addition to the requirement for very fast data processing and fast access to the electron-optical components of the microscope, method developments will also need to consider the following two crucial components. The first key component is the fast handling of huge microscopy data. Electron microscopes can nowadays acquire several GB of data within seconds, which means that ML methods for real-time applications should be capable of processing huge data sets within a fraction of a second. Obviously, a tight integration between hardware and software will play a crucial part in the solution to this problem. Edge computing and camera-integrated compression techniques are just two examples to be mentioned here. Another important component for the development of new real-time ML methods is a high level of adaptability. The environment in the microscope constantly changes between, and sometimes even during, experimental sessions. ML methods need to deal in real time with data that has been acquired under these circumstances without a significant loss in performance. Furthermore, methods that aim for an automation of the experiment are required to easily adapt to different experimental goals.
The high complexity and cost of ownership of state-of-the-art electron microscopes means that only a few labs, staffed with expert operators who have undergone extensive microscope-specific training, can run them. Maximizing these instruments' scientific output per unit time, as well as democratizing access to them, calls for improving their user interface, in analogy to how modern chatbots have recently started to enable anybody to write complex computer programs.
Advances in method development combining deep learning and reinforcement learning (RL) show promise that dynamic decision-making problems can be solved with strong performance by a machine alone. Operating an electron microscope in an automated fashion could therefore benefit from this development. A first step in this direction has been proposed in Ref. , where the combination of deep learning and RL offers the possibility to perform low-dose experiments for electron ptychography through adaptive scanning. A schematic of the adaptive scanning workflow is shown in Figure . The advantage of this approach is that it is highly adaptable to a wide range of scanning microscopy techniques through the modification of a reward function that expresses the research goal. Hence, various imaging and spectroscopy techniques, such as STEM EELS, that have already been shown to benefit from an optimized scanning scheme, could be further advanced through a successful automation of the experiment. Many other parameters of the experiment, such as adjustable aberrations, lens currents, or the phase shifts in programmable phase plates (see Fig. ), can also be optimized to improve the efficiency with which a given research question is addressed. Recent developments in software for processing natural language are likely to result in the highly technical user interface of electron microscopes being extended by chatbots and comparable features.
In order to deal with tens of GB of data per scan, it has been shown that compression based on data-dependent linear transformations yields superior results compared to conventional techniques like binning or singular value decomposition, both in terms of compression ratio and the quality of the information extractable from the data set. The integration of artificial neural network (ANN)-based feature recognition techniques has the potential to further enhance compression performance. Before collecting the main data set, a network can be pre-trained in a similar way to adaptive scanning, but with the aim of capturing the diffraction patterns as well as possible with as few values as possible.
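The snippet below sketches the simplest form of a data-dependent linear compression: a truncated basis learned (here via SVD) from a small pre-scan is reused to project every subsequent diffraction pattern onto a handful of coefficients. The array sizes, the basis size, and the pre-scan split are illustrative assumptions, and a learned ANN encoder would replace the fixed linear basis in the more advanced schemes mentioned above.

```python
import numpy as np

# Stack of diffraction patterns: one row per probe position, one column per detector pixel.
patterns = np.random.rand(4096, 128 * 128).astype(np.float32)   # synthetic stand-in

k = 64                                      # number of basis vectors to keep
mean = patterns.mean(axis=0)

# Learn a data-dependent linear basis from a small "pre-scan" subset.
_, _, Vt = np.linalg.svd(patterns[:512] - mean, full_matrices=False)
basis = Vt[:k]                              # (k, n_pixels)

# Compress: store only k coefficients per pattern instead of all pixels.
coeffs = (patterns - mean) @ basis.T        # (n_patterns, k)
reconstructed = coeffs @ basis + mean       # approximate patterns for downstream analysis
print(f"compression ratio ~{patterns.shape[1] / k:.0f}x")
```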
Figure: Schematic of an ML-driven adaptive scanning workflow for the purpose of optimizing the scanning in a 4D-STEM experiment in real time. The employed ML methods consist of a convolutional neural network for the atomic structure extraction and a recurrent neural network for the sequential prediction of scan positions. Training of the networks is performed through RL. Reproduced with permission from Springer Nature.
In summary, big-data-based ML methods have already been shown to be very powerful for post-processing tasks on electron microscopy data, but given the high complexity of these microscopes, their application in useful real-time data analysis and experiment automation methods still lags behind. Some initial developments of workflows that leverage ML methods to perform and optimize specific tasks of an electron microscope show promise for transitioning this fully human-controlled instrument into a (partially) autonomously operating machine capable of carrying out precision measurements in a fully documented and fully reproducible manner. We expect that this development will greatly increase the research output obtained from this type of instrumentation.
Atom probe tomography (APT) is a burgeoning characterization technique that provides compositional mapping of materials in three dimensions at the near-atomic scale. The data obtained by APT take the form of a mass spectrum, from which the composition of the analysed material can be extracted, and a point cloud that reflects the distribution of all the elements within the region of interest of the material being studied. Material-relevant data must be extracted from this point cloud through the use of data processing or mining techniques. These include, most simply, the local composition of a phase or a microstructural object, sometimes extracted via cluster-finding or nearest-neighbour algorithms that are today classified as machine learning but have been used in the APT community for many decades. Phase morphology or even partial structural information can also be obtained, but the information can be limited or distorted because of trajectory aberrations caused by heterogeneities in the specimen's end shape down to the near-atomic scale. Today, data reconstruction and processing are most often done in commercially available software, which does not allow for exploiting the cutting-edge methods arising from big data and machine learning, and also remains very much user-dependent. The enormous potential to mine atom probe data is clear, but this requires complete FAIR-compliant analysis workflows that make use of machine learning to facilitate more reliable and reproducible data processing and extraction, to really go beyond what human users can achieve. This section reviews challenges of APT data analysis (partially) solved by the application of machine learning and points out the remaining crucial locks to be addressed in the future.
A critical challenge is that present-day APT data processing tools and workflows are inherited from "traditional" interactive data analysis based on user interactions, through a fixed set of data analysis and visualization techniques that leaves little flexibility to explore novel approaches and processes, as summarised in Figure . User input ranges from the assignment of peaks to particular atomic or molecular species to manually retrieved structural information and microstructure segmentation and quantification. Machine learning has the potential to automate many of these analysis steps, with models that are based on physical input and constraints. Some progress has been made across the community with dedicated machine learning algorithms to mine compositional and structural information. For instance, for mass peak assignment, we introduced an approach that uses known isotopic abundances to identify patterns in mass spectra, outperforming human users without loss of accuracy. Following reconstruction of the 3D point clouds, automated identification and quantification of grain boundaries were proposed, and for more general microstructure segmentation, Saxena et al. introduced an approach that uses clustering in the compositional space, demonstrating unique capabilities for segmentation of the various phases, along with the quantification of their composition and morphologies. These would normally have been extracted through manually positioned regions of interest, which is time-consuming and error-prone. Structural imaging by APT is hindered by the anisotropic spatial resolution and the limited detection efficiency. Recent efforts have managed to overcome these for the ever-challenging analysis of chemical short-range order (CSRO) by using convolutional neural networks, following the workflow in Figure 2. A key challenge for the future is to move away from developing individual tools that tackle isolated problems and to think about complete data analysis workflows, from patches to a logical patchwork, which will also facilitate adoption. A way to achieve this would be to open the programs themselves via APIs at all levels, or at least to facilitate data exchange through open data formats accessible to external processing by independent tools.
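A stripped-down version of such compositional-space segmentation is sketched below: the reconstructed point cloud is voxelized, a local composition is computed per voxel, and the voxels are clustered in compositional space. The synthetic two-phase point cloud, the voxel size, and the use of k-means (rather than the specific algorithm of the cited work) are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic APT point cloud: positions (nm) and a binary element label for each ion.
pos = rng.uniform(0.0, 50.0, size=(200_000, 3))
is_solute = ((pos[:, 0] > 25.0).astype(int) + (rng.random(200_000) < 0.05)) % 2

# Voxelize and compute the local solute fraction per voxel.
bins = 25
idx = np.clip((pos / (50.0 / bins)).astype(int), 0, bins - 1)
flat = np.ravel_multi_index(idx.T, (bins, bins, bins))
counts = np.bincount(flat, minlength=bins**3)
solute = np.bincount(flat, weights=is_solute, minlength=bins**3)
comp = np.divide(solute, counts, out=np.zeros(bins**3), where=counts > 0)

# Cluster voxels in compositional space to segment the two phases.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(comp.reshape(-1, 1))
print(np.bincount(labels))
```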
For APT, post-processing is mostly executed with proprietary software tools, which can be opaque in their execution, often have limited performance, and preclude the facile deployment of novel data processing methods. There is a need to agree on a more open data format and on metadata conventions as a critical prerequisite. Development of machine-learning-optimized hardware and software remains plagued by the use of proprietary, specific data formats, which limits usage across software, techniques, and communities. And this should include the raw data, not only what has already been processed. As such, the community will have to push to provide a complete set of tools equivalent to the currently available integrated beginning-to-end workflows, i.e., from an experiment to a publishable image, yet these will have to be open and extendable to include machine-learning steps and fully documented, to also include traceable information regarding the sample and the specimen with appropriate metadata. A prerequisite is also the use of open and documented data formats. As a preliminary effort in this direction, let us mention here Paraprobe,
which is fully open-source, provides clear documentation of each analysis step for post-processing APT datasets, and offers orders-of-magnitude performance gains, automation, and reproducibility. For now, these open tools are seldom used, and the community seems to be waiting for user-friendly platforms, which so far do not exist. This hinders complete FAIR workflows, which are so far lacking, and precludes direct correlations with other computational or experimental techniques, as well as wider meta-analyses as introduced by Meier et al. Finally, there is a need for a repository of benchmark datasets that would allow the performance of new developments to be evaluated in a transparent way across the community.
Although the above-mentioned efforts have demonstrated the potential for state-of-the-art machine learning to meet existing challenges for APT data processing, some major aspects remain to be tackled to realise the full potential. Machine learning has the potential to help address many of APT's shortcomings and, for instance, resolve some of the aberrations that plague the accuracy of the measurements by better interfacing with modelling efforts in APT. This is necessary to reach true atomic resolution, which will help extract more precise local atomic arrangements. Optimisation of the data acquisition, and establishing a dialogue between the instrument and the data processing, are also areas that will need exploring; in this regard, APT is far behind other high-end microscopy techniques. Overall, we are only at the beginning of the use of machine learning for APT, but the preliminary work that has been done across the community lays solid ground to build better, more encompassing, and more efficient tools in the future.
Heterogeneous catalysis is vital to sustain humanity and to address important societal challenges such as achieving net zero. Heterogeneous catalysis is also chemically complex, and the realisation of new catalysts is challenging. Catalysts themselves can contain multiple active elements; for example, the catalyst used for the Haber-Bosch process, which is integral to feeding 50% of the global population, typically contains iron, aluminium, calcium, potassium, and oxygen, with activity subtly dependent on composition. The composition of catalytic materials can be explored successfully via data-driven approaches, yet catalytic reactions occur at the surfaces and interfaces of these materials, and therefore the material properties must be investigated also as a function of the reactive surfaces and interacting medium; furthermore, a rational design process must also consider the intricacy of reaction mechanisms to ensure appropriate reactivity and product selectivity, which includes sensitivity to temperature and pressure, to result in truly industrially relevant catalysts. The complexity of such catalytic systems quickly becomes intractable to fully explore with current experimental or computational efforts.
Historically, catalysts have been identified and their application optimised via empirical investigations, using previous success to guide future decision-making. Such "top-down" experimentation has recently seen the integration of high-throughput experimentation (HTE) into workflows, accelerating catalyst discovery through parallelisation of testing; in the more advanced cases, the HTE is coupled with data-driven analysis of reactivity/selectivity and automation to self-consistently optimise the efficacy of the catalytic system towards a target property, working within a defined parameter space. The current HTE approaches do not typically include advanced in situ or operando characterisation, but these methods are increasingly available separately and benefit from similar emergent capabilities in automated data-driven analysis.
Alongside experiment, the advancement of computational capabilities allows the "bottom-up" interrogation of elemental and structural knowledge from across the periodic table, presenting significant opportunities for accelerated data-driven discovery. Promising materials can be considered further using parameterised models to explore surface structures and composition as a function of operating conditions, and reaction mechanisms can be derived using automated construction of chemical reaction networks, providing vast quantities of data relating to a reaction landscape from which rates and product distributions are accessible via kinetic modelling. With the knowledge calculated within this sampling space, the efficacy of the catalytic system can be linked to key "descriptors" of the catalyst and its operating conditions, providing powerful shortcuts when navigating across the reaction landscape to find better catalysts via, e.g., active learning protocols. In the most state-of-the-art approaches, descriptors are derived as compound functions of both experimental and computational information, via multi-fidelity data models.

Current and Future Challenges

Data-driven models are dependent on large, accurate, and complete datasets, yet such experimental data remain challenging to locate, access, use, or reproduce. Historically, the reward structure of the research community has been oriented towards positive results, which means that negative results are not shared. Incomplete datasets lead to sampling bias and inaccuracy of data models; furthermore, hidden data can also present a challenge for reproducibility, whereby not all the experimental parameters are reported for future investigators. Data quantity and quality are also important aspects, yet most experimental investigations typically focus on a small chemical space, which leads to small datasets. Indeed, data completeness can again become challenging when only the "best" catalysts are considered for higher-level characterisation methods, such as in situ electron microscopy; simultaneously, data quality is compromised, as differing standards of analysis are introduced and outcomes are reported in contrasting formats. The combination of identifiable data sources is also a current challenge, as the quality and quantity of information can vary in relation to synthetic methods, catalytic testing, and characterisation; and these data may be embedded in images, making collecting accurate data a challenge.
Here, greater efforts have been made towards creating standardised, complete, and publicly available datasets, yet the realisations often remain limited to subsets of catalysts/reactants/products (e.g. oxygen evolution reaction electrocatalysts), and a current challenge is to expand the knowledge space. More pertinent is the need for accurate computational data that can be confidently correlated with experiment.
Considering machine-learning forcefields (MLFFs), which are a notable success of the application of data-driven approaches in materials modelling, a current challenge is to build these approaches to reproduce experiment, and not just higher-level computational models. Further extension of the MLFFs should then include multiple compositional and environmental aspects of a fully operational heterogeneous catalyst; for this, more efficient modelling paradigms are needed to create bigger datasets. Future challenges then arise with the integration of computational and experimental datasets, whereby parameters and observables from each respective domain must be collated and compared on an equal footing to provide value to the researchers of the future.
There are multiple technological advances identifiable to meet the challenges and fully achieve the potential of data-driven approaches. Within the laboratory, greater accessibility of automated high-throughput facilities, capable of synthesising, testing, and characterising catalysts, will be powerful in facilitating on-the-fly data-driven catalyst discovery, and must be coupled with public accessibility in centralised repositories to achieve larger, consistent, and more complete datasets. For modelling, improved software models are still needed to simulate a more accurate description of complete catalytic conditions, including the effects of temperature, pressure, and solvents, to provide accurate surrogate models of energy landscapes that can be explored rapidly, with automated discovery again an opportunity. And at the interface of computation and experiment, greater integration of catalytic datasets to provide holistic coverage is necessary to account for deficits in knowledge from either the experimental or computational domains alone; indeed, one needs to harness the individual strengths of the "top-down" and "bottom-up" perspectives to derive complementary, rather than distinct, data.
These scientific and technological advances are coupled also with a need for greater discussion between members of the catalysis community, and advocacy of standardisation. Whilst the principles of findable, accessible, interoperable, and reusable (FAIR) data have developed strong roots in the computational modelling domain, the distribution or centralisation of experimental data remains limited and focused on positive results. The value of all data should be championed, as should the importance of metadata in helping users understand the value and limitations of a given dataset; deposition of results in an accessible resource should be encouraged, especially for experiment, where uptake is more urgently needed. The communication between researchers should include the experimental and computational communities, and span academia, industry, and third-party organisations, at all levels of scientific investigation, in order to deliver a better understanding of data needs and standardisation of data-collection procedures. The work here is implicitly multidisciplinary, and so the interaction of chemists, materials scientists, physicists, computer scientists, data scientists, and other domain experts should be encouraged to maximise the opportunity for multi-fidelity models that address shortcomings arising in individual research domains. Finally, there is a need to train researchers and distribute knowledge about the value of their data; we should be educating in a cross-disciplinary manner about the importance of detailed digital data collection, in both experiment and computation. Such action will lead to engagement and investment towards the tools necessary to accelerate big-data-driven discovery in heterogeneous catalysis; such software capabilities already exist, driven by the explosion of interest in data-driven discovery, but the potential is yet to be realised.
The status of data-driven approaches in heterogeneous catalysis is promising, with strong application in computational fields and increasing demonstrations of potential in experimental laboratories. However, challenges remain with respect to ensuring the quality and completeness of individual datasets, as well as improving accessibility and standardisation. Opportunities have been highlighted that include increased automation within research environments, improved cross-discipline communication, and efforts among users to reach distribution standards that will benefit emergent as well as established researchers.
Catalysis is an extremely challenging but valuable field, with impact on all of humanity. Adoption of the outlined approaches can facilitate the uptake of emergent data-driven methods for a transition to cleaner, more active heterogeneous catalysts that benefit the global population. There are many examples of good practice, but efforts are still needed if we are to maximise the potential value for all.
Status

X-ray scattering and diffraction constitute a major set of techniques to characterize the structure of materials at the nanoscale. Small-angle x-ray scattering (SAXS), in particular, was developed in the 1950s to resolve structures in the size range of 1 to 100 nanometers. Despite the development of electron microscopes some years later, it remained an important technique, mostly because x-rays are less strongly absorbed than electrons, which allows for in-operando experiments studying the effect of physical stimuli, such as temperature, pH, or humidity, on material structure. A strong boost in the use of small-angle scattering came with the availability of synchrotron radiation, which improved the time resolution of in-operando experiments, but also opened the possibility of transforming SAXS into a multiscale imaging tool. In this approach, the general idea is that nanoscale information is extracted from analyzing the scattering patterns, while mapping of the specimens provides the information at the microscale (see Fig. ). The first attempts at SAXS-based imaging go back to the 1990s. This evolved until the development of SAXS tomography, which yields six-dimensional data: three dimensions in real space through scanning and rotating the specimen (typically with micrometer resolution), as well as three additional dimensions from the scattering patterns within each voxel (containing nanoscale information).
Fig. : Principle of scanning SAXS imaging. The specimen (for example a tooth section) is scanned across the x-ray beam, which has a diameter between tens of nanometers and several micrometers. Parameters extracted from the scattering patterns can then be mapped with a resolution corresponding to the x-ray beam diameter. In the figure, this is the thickness of mineral particles in dentin (the star indicates an area with a caries lesion). Picture adapted from .
These advances upstream of the specimen in the experiment, however, lead to new challenges downstream of the specimen, linked to the treatment and evaluation of massive amounts of data. A schematic of the workflow in a SAXS measurement is shown in Fig. . The traditional way of conducting such an experiment would be the path symbolized by (A) and (B) in this figure: (A) represents specimen preparation and experiment planning, and (B) the data collection. These data would then be brought back from the synchrotron experiment for treatment and analysis. However, with the increased speed of data collection, a general challenge in this approach resides in the fact that the experimentalist is essentially blind without some capability for data diagnostics. This requires elementary pre-analysis of the data to see whether a modification of the beamline setup could improve the experiment. Recognizing this, software packages involving fast data diagnostics were developed, an example being DPDAK, an open-source software introduced at the BESSY and DESY synchrotrons (in Berlin and Hamburg, respectively). Additional steps are beginning to improve the quality and speed of the experiment: (C) is a readjustment of the experimental setup based on rapid data diagnostics, (D) is data reduction and denoising, (E) is data analysis, and (F) is automatic material synthesis based on the measurement results.
With the amount of data collected in each beamtime session increasing continuously over the years, a number of additional challenges appear from the fact that manual data treatment becomes impossible. This applies to the cleaning of data (such as denoising, background subtraction, image reconstruction, normalization, etc.) and even more to the data analysis, which in SAXS often involves data fitting. These steps are indicated by the arrows (D) and (E) in Fig. .
Especially in SAXS tomography experiments, radiation damage should not be underestimated, since every specimen position will be hit several times by an intense x-ray beam due to the required rotation of the specimen around multiple axes. A typical strategy is then to reduce the irradiation time, which inevitably increases the noise in the data. To avoid problems with this noise in the 6D data reconstruction after the measurements, Zhou and coworkers propose a machine learning (ML) approach for the denoising of scattering data. This approach facilitates step (D) in the diagram of Fig. .
The reconstruction of SAXS tomography data is equally challenging due to its high dimensionality. A possible traditional approach consists in calculating invariants of the SAXS data before reconstruction, which replaces the three-dimensional SAXS data by scalars that can be reconstructed much more efficiently. SAXS invariants are useful, since they contain information about the volume and surface of nano-sized objects in the specimen and allow, for example, the calculation of particle sizes in bone or dentin. In the last few years, ML approaches have been developed for tomographic data reconstruction. Omori and coworkers review these developments for tomography using SAXS, but also x-ray diffraction and other modalities. While these advances relate to step (D) in Fig. , the review also addresses ML approaches for segmentation and analysis of the reconstructed data (step (E) in Fig. ).
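As a small illustration of the invariant-based shortcut, the snippet below evaluates the scattering invariant $Q = \int q^2 I(q)\,dq$ by numerical integration of a toy azimuthally averaged curve; the q-range and the synthetic intensity profile are placeholders.

```python
import numpy as np

# Toy azimuthally averaged SAXS curve standing in for the pattern of one voxel.
q = np.linspace(0.1, 3.0, 300)                  # scattering vector, nm^-1
intensity = 1.0 / (1.0 + (4.0 * q) ** 4)        # synthetic I(q)

# Scattering invariant Q = \int q^2 I(q) dq, a scalar that can be reconstructed per voxel.
integrand = q ** 2 * intensity
Q_invariant = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q))  # trapezoid rule
print(f"invariant Q = {Q_invariant:.4f}")
```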
Once the data are reconstructed, every voxel in SAXS tomography data contains a scattering pattern to be analyzed. This means a massive effort for data analysis (step (E) in Fig. ) after reconstruction. Similar numbers of SAXS patterns need to be analyzed in other situations, for example when material structures are studied as a function of physical parameters (temperature, pressure, pH, humidity, etc.) in multiple measurements. A recent review by Anker and coworkers addresses ML approaches to analyze a range of synchrotron-based experimental data, including SAXS but also powder diffraction, pair distribution function, inelastic neutron scattering, and X-ray absorption spectroscopy data. While the traditional approach would be to fit a physical model to the data, supervised ML can be used to train a model for the prediction of structure based on data, to predict the scattering data based on a known structure, or to predict parameters based on some physical understanding of the system. In another recent work, an ML-based analysis of SAXS data is proposed, which is based on Gaussian random fields and avoids the common model fitting of the data.
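The snippet below sketches the supervised flavour of this idea on a toy problem: spherical-particle scattering curves are simulated over a range of radii, and a small neural network is trained to predict the radius directly from the (noisy) curve, bypassing explicit model fitting. The form factor, noise level, and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

q = np.linspace(0.05, 2.0, 200)                       # scattering vector, nm^-1

def sphere_intensity(radius):
    """Form-factor intensity of a homogeneous sphere of the given radius (nm)."""
    x = q * radius
    form = 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3
    return form ** 2

rng = np.random.default_rng(0)
radii = rng.uniform(2.0, 20.0, 2000)
curves = np.array([sphere_intensity(r) for r in radii])
curves += 0.01 * rng.normal(size=curves.shape)        # simulated measurement noise
features = np.log10(np.clip(curves, 1e-8, None))      # log-scale the intensities

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(features, radii)

test = np.log10(np.clip(sphere_intensity(10.0), 1e-8, None)).reshape(1, -1)
print("predicted radius:", model.predict(test)[0])
```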
The approaches discussed so far are improving workflows in nearly all steps of SAXS experimentation (steps (C) to (E) in Fig. ). A last step (F) potentially closes the loop towards fully automated experimentation. This challenge is currently being taken up under the label of autonomous experimentation. Beaucage and Martin report on the development of an open liquid-handling platform for autonomous formulation and x-ray scattering. Yager and coworkers review this new paradigm and show how autonomous x-ray scattering can enhance efficiency and help discover new materials.

Concluding Remarks

Small-angle x-ray scattering is an old method that is currently seeing an enormous increase in activity due to highly brilliant x-ray sources, more performant x-ray optics and, most recently, rapid progress in the treatment and analysis of large amounts of data. Machine learning approaches play an important role in this development, which has really just begun.
It is often necessary in computational chemistry to compute free energy differences among multiple thermodynamic states. A common approach for this task is to build a perturbation graph connecting the states and compute free energy differences on all edges of the graph. The free energy difference between any two states can then be obtained by integrating along a path connecting the two states. When the perturbation graph has cycles, there will be multiple paths connecting two states. Because free energy is a state function, the free energy difference between any two states should be independent of the path chosen. Equivalently, the free energy change around any cycle is zero, which we refer to as the cycle consistency condition. The cycle consistency condition, on the one hand, can help diagnose potential systematic problems in the calculation. For example, if the free energy around a cycle is significantly different from zero in practical calculations, it indicates that there might be systematic issues in sampling, such as the system being trapped in a meta-stable state. On the other hand, if there are no systematic sampling issues, the cycle consistency condition can be used to improve the accuracy of free energy estimates.
Several methods have been proposed to use the cycle consistency condition to improve the accuracy of free energy estimates. They can be classified into two types. The first type computes the free energies on the edges of a cycle independently and then adjusts them to satisfy the cycle consistency condition. Such methods include the cycle closure correction (CCC) method and its weighted variant. The second type couples the calculations of free energies on the edges of a cycle and estimates them simultaneously. An example of this type is the MBARnet method, which computes free energies on the edges of a cycle by optimizing an objective function subject to the cycle consistency condition. Although there is little evidence suggesting which type of method is more accurate, the second type can potentially make better use of the cycle consistency condition because it uses the condition throughout the calculation.
The key to the second type of method lies in effectively coupling the calculations of free energies along the edges of a cycle using the cycle consistency condition. Intuitively, this coupling should not significantly alter the free energy estimates on edges where the estimates are already highly precise. Instead, it should leverage these high-precision edges and the cycle consistency condition to refine the estimates on edges with lower precision. Here, we propose a probabilistic method that accomplishes this coupling, which we call the coupled Bayesian multistate Bennett acceptance ratio (CBayesMBAR) method. We applied CBayesMBAR to compute free energy differences among four harmonic oscillators and relative protein-ligand binding free energies. In both examples, CBayesMBAR provides more accurate results than those obtained without considering the cycle consistency condition. Furthermore, CBayesMBAR outperformed the widely used cycle closure correction method, which also uses the cycle consistency condition. We have implemented CBayesMBAR as part of the BayesMBAR package, which is freely available at
CBayesMBAR is built upon the Bayesian multistate Bennett acceptance ratio (BayesMBAR) method, a Bayesian generalization of the multistate Bennett acceptance ratio (MBAR) method. In BayesMBAR, we formulate free energy estimation as a Bayesian inference problem and derive a posterior distribution of free energy differences given sampled configurations. This posterior distribution is then used to estimate free energy differences and their uncertainties. The Bayesian framework of BayesMBAR naturally facilitates the coupling of multiple BayesMBAR calculations on a perturbation graph with cycles. Here, we first briefly review the BayesMBAR method and then explain how to couple multiple BayesMBAR calculations in CBayesMBAR using the cycle consistency condition.
Let $u_i(x)$, with $i$ ranging from 1 to $m$, be the reduced potential energy functions of the $m$ states. We aim to compute the free energy differences among these states by sampling configurations from their Boltzmann distributions. For the $i$-th state, we use $\{x_{ik}, k = 1, \ldots, n_i\}$ to represent the $n_i$ configurations sampled from its Boltzmann distribution, and we assume that these configurations are uncorrelated. We use $F = (F_1, \ldots, F_m)$ to denote the free energies of the $m$ states. In BayesMBAR, we introduce $y_{ik}$ as the index of the state from which configuration $x_{ik}$ is sampled. Therefore, $y_{ik}$ is equal to $i$. Although the indices of states for sampled configurations are determined during sampling, they are treated as random variables in BayesMBAR. Specifically, $y_{ik}$ is viewed as a sample from a categorical distribution with parameters $\pi = (n_1/n, \ldots, n_m/n)$, meaning that the probability of sampling a configuration from the $i$-th state is $p(y = i) = \pi_i = n_i/n$, where $n = \sum_{i=1}^{m} n_i$ is the total number of configurations. The concatenation of state indices and configurations, denoted as $(y, x)$, is viewed as samples from the conditional distribution $p(y, x \mid F)$, defined as
$$p(y = i, x \mid F) = \pi_i \exp\big(F_i - u_i(x)\big) \quad (1)$$
for $i \in \{1, \ldots, m\}$. The free energy $F$, which was traditionally treated as a parameter in MBAR, is treated as a random variable in BayesMBAR, and we assign a prior distribution $p(F)$ to it. Given the prior distribution $p(F)$ and the conditional distribution $p(y, x \mid F)$ described above, the joint distribution of $(F, Y, X)$ is
$$p(F, Y, X) = p(F) \prod_{i=1}^{m} \prod_{k=1}^{n_i} p(y_{ik}, x_{ik} \mid F), \quad (2)$$
and the posterior distribution of the free energies follows from Bayes' theorem as
$$p(F \mid Y, X) \propto p(F) \prod_{i=1}^{m} \prod_{k=1}^{n_i} p(y_{ik}, x_{ik} \mid F). \quad (3)$$
BayesMBAR uses the posterior distribution (Eq. 3) to estimate the free energies and their uncertainties. Specifically, it estimates the free energy with either the mode or the mean of the posterior distribution, and computes the uncertainty of the estimate using the standard deviation of the posterior distribution. When the prior distribution $p(F)$ is chosen to be a uniform distribution, the mode of the posterior distribution is identical to the MBAR estimate. Our previous work has shown that BayesMBAR provides more accurate uncertainty estimates than the asymptotic analysis used in MBAR, especially when the number of configurations is small. Furthermore, as a Bayesian method, BayesMBAR provides a principled way to incorporate prior information to improve the accuracy of free energy estimates.
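To make the connection between the mixture-model likelihood and the MBAR estimate concrete, the sketch below maximizes the log-likelihood of the state labels for two 1-D harmonic states with a flat prior, which corresponds to the posterior mode described above. The toy system, sample sizes, and variable names are illustrative, and this is not the BayesMBAR package implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k = np.array([1.0, 4.0])                      # force constants of two 1-D harmonic states
N_k = np.array([2000, 2000])                  # samples drawn from each state
samples = np.concatenate([rng.normal(0.0, 1.0 / np.sqrt(ki), n) for ki, n in zip(k, N_k)])
u_kn = 0.5 * k[:, None] * samples[None, :] ** 2   # u_kn[j, n]: reduced energy of sample n in state j
labels = np.repeat(np.arange(len(k)), N_k)        # state of origin of every sample

def neg_log_likelihood(dF):
    """Negative log-probability of the observed state labels; F_0 is fixed at 0."""
    F = np.concatenate(([0.0], np.atleast_1d(dF)))
    log_w = np.log(N_k / N_k.sum())[:, None] + F[:, None] - u_kn
    log_norm = np.logaddexp.reduce(log_w, axis=0)
    return -(log_w[labels, np.arange(u_kn.shape[1])] - log_norm).sum()

result = minimize(neg_log_likelihood, x0=[0.0])
print("estimated dF:", result.x[0], " exact dF:", 0.5 * np.log(k[1] / k[0]))
```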
The Bayesian framework used in BayesMBAR enables the coupling of multiple BayesMBAR calculations on a perturbation graph with cycles. For example, consider three thermodynamic states, A, B, and C. To compute their free energy differences, we construct a perturbation graph connecting these states (Fig. ). In this graph, red, orange, and blue circles represent the three end states, and arrows indicate the paths connecting them. Small black circles along the paths represent intermediate states between the end states. Configurations are sampled from both the end states and the intermediate states along each path. We use $X^h$ to denote the configurations sampled from states on the $h$-th path, where $h$ ranges from 1 to 3, and $Y^h$ to denote the state indices of $X^h$. In other words, $X^h$ and $Y^h$ correspond to $X$ and $Y$ in Eqs. 2 and 3. We use $F^h$ to denote the free energies of the states on the $h$-th path, expressed as $F^h = (F^h_1, \ldots, F^h_{-1})$
, where $F^h_1$ and $F^h_{-1}$ are the free energies of the two end states. In CBayesMBAR, we estimate all free energies $F = (F^1, F^2, F^3)$ simultaneously using a Bayesian probabilistic framework. Specifically, we assign a prior distribution $p(F)$ to $F$ and view all configurations and state indices $\{Y^h, X^h\}_{h=1}^{3}$ as samples from the conditional distribution $p(\{Y^h, X^h\}_{h=1}^{3} \mid F)$.
Our perturbation graph (Fig. ) has paths connecting all state pairs, with three intermediate states introduced along each path. These intermediate states are also modeled as 2-D harmonic oscillators, with force constants and equilibrium positions linearly interpolated between the end states. We sample $n$ configurations from each state on every path, varying $n$ from 10 to 5000. Using these configurations, we estimate free energy differences for all paths using three methods: BayesMBAR, cycle closure correction, and CBayesMBAR.
For BayesMBAR, we independently compute free energy differences for each path, using the posterior mode as the free energy estimate and the standard deviation of the posterior distribution (estimated from 1000 samples) as the uncertainty. For CCC, we adjust the free energy differences obtained with BayesMBAR to satisfy the cycle consistency condition (details are in the Supporting Information). For CBayesMBAR, we simultaneously estimate free energy differences across all paths, coupling the calculations using the cycle consistency condition. As with BayesMBAR, we use the posterior mode for the free energy estimate and the standard deviation of the posterior distribution (estimated from 1000 samples) for uncertainty quantification.
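For reference, the variance-weighted cycle closure correction used as a baseline can be sketched in a few lines for a single cycle: the closure error is distributed over the edges in proportion to their variances. This is a generic sketch with illustrative numbers, not the exact implementation used here (whose details are given in the Supporting Information).

```python
import numpy as np

def cycle_closure_correct(dF_edges, var_edges):
    """Distribute the closure error of one cycle over its edges in proportion
    to their variances (edges assumed oriented head-to-tail around the cycle)."""
    closure = dF_edges.sum()                    # would be zero for a state function
    return dF_edges - var_edges / var_edges.sum() * closure

dF = np.array([1.20, -0.45, -0.60])             # e.g. A->B, B->C, C->A (illustrative values)
var = np.array([0.01, 0.04, 0.09])
corrected = cycle_closure_correct(dF, var)
print(corrected, corrected.sum())               # corrected edges now sum to ~0
```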
669c1a8e01103d79c5b4fa60 | 11 | Here, ΔF^h_exact represents the exact free energy difference between the end states of the h-th path, and F^h_{-1} − F^h_1 is the estimated value. To ensure statistical robustness, we conduct 100 repetitions of the calculations, compute the mean RMSE over these repetitions, and perform paired t-tests to compare the three methods using the RMSEs obtained (Table and Figures ). Across all sample sizes n, CBayesMBAR has significantly smaller RMSEs compared to BayesMBAR and CCC, with p-values less than 10^-4 for all cases. Next, we assess the performance of the three methods in computing the free energy difference between the end states of individual paths using the mean absolute error (MAE) over the 100 repetitions as the metric (Table and Table ). In all cases, CBayesMBAR has the smallest MAEs among the three methods.
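A minimal sketch of how the comparison metrics above can be computed is shown below; the array names and shapes (one row per repetition, one column per path) are assumptions for illustration, not the authors' analysis scripts.

```python
import numpy as np
from scipy.stats import ttest_rel

def per_repeat_rmse(dF_est, dF_exact):
    """RMSE over paths for each repetition.

    dF_est   : (n_repeats, n_paths) estimated end-state free energy differences.
    dF_exact : (n_paths,) exact end-state free energy differences.
    """
    err = dF_est - dF_exact[None, :]
    return np.sqrt(np.mean(err ** 2, axis=1))

def compare_methods(dF_method_a, dF_method_b, dF_exact):
    """Mean RMSE of two methods plus a paired t-test on the per-repeat RMSEs."""
    rmse_a = per_repeat_rmse(dF_method_a, dF_exact)
    rmse_b = per_repeat_rmse(dF_method_b, dF_exact)
    t_stat, p_value = ttest_rel(rmse_a, rmse_b)
    return rmse_a.mean(), rmse_b.mean(), p_value

# Per-path mean absolute error (MAE) over repetitions, e.g. for one method:
# mae_per_path = np.mean(np.abs(dF_est - dF_exact[None, :]), axis=0)
```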
669c1a8e01103d79c5b4fa60 | 12 | Both BayesMBAR and CBayesMBAR quantify uncertainties in free energy estimates using the standard deviation of the posterior distribution. Table presents the mean estimated uncertainties for free energy differences between the end states of individual paths. Consistently, CBayesMBAR yields smaller uncertainties compared to BayesMBAR, highlighting how coupling free energy estimates via cycle consistency conditions enhances estimation precision. The performance improvement of CBayesMBAR over BayesMBAR in estimating free energy differences along individual paths (Table ) correlates closely with the uncertainties associated with these estimates (Table ). For instance, paths with smaller uncertainties (e.g., A → B) demonstrate modest improvements in MAE, whereas paths with larger uncertainties (e.g., B → D) have more substantial improvements. This observation aligns with the theoretical expectation that cycle-based coupling primarily benefits estimates with lower precision by leveraging high-precision paths and the cycle consistency condition. Next, we apply CBayesMBAR to compute the relative binding free energies of multiple ligands to a given protein. This type of calculation typically employs an alchemical perturbation graph connecting the ligands. Free energy differences along the graph's edges are computed with alchemical methods. In our study, we apply CBayesMBAR to calculate the relative binding free energies of 6 ligands to the tyrosine kinase 2 (Tyk2) protein (Protein Data Bank ID: 4GIH), a system widely used as a benchmark in the field. The 6 ligands share a common scaffold structure, with the R group representing the chemical group that differs among them (Fig. ). We construct an alchemical perturbation graph with multiple cycles to connect the ligands (Fig. ).
669c1a8e01103d79c5b4fa60 | 13 | where q i and q j are the charges of the two particles. For an edge from ligand A to ligand B, we employ 13 intermediate alchemical states. These states progressively deactivate the nonbonded interactions involving ligand A's R group with the system, while simultaneously activating those involving ligand B's R group with the system. To prevent singularities, the electrostatic interactions between a ligand's R group and the system are present only when the corresponding Lennard-Jones interactions are fully turned on. The specific values of the alchemical variables for states along the path from ligand A to ligand B are detailed in Table , with states 1 and 15 corresponding to ligands A and B, respectively, and states 2 to 14 representing intermediates. For our molecular dynamics simulations, we use the Amber ff14SB force field for the protein, the general Amber force field for ligands, and the TIP3P model for water. Both water and protein phase simulations are conducted in a periodic water box. We calculate electrostatic interactions using the particle mesh Ewald method, while Lennard-Jones interactions are smoothly truncated at 10 Å using a switching function that starts at 9 Å.
669c1a8e01103d79c5b4fa60 | 14 | We sample configurations from each state by running molecular dynamics simulations with OpenMM. These simulations are performed at 298.15 K using the Langevin integrator with a 2 fs time step and a 1 ps -1 friction coefficient. We maintain a pressure of 1 atm using a Monte Carlo barostat with Monte Carlo moves attempted every 25 steps. Each state is simulated for 5000 ps, with configurations saved every 2 ps. |
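The snippet below is a hedged sketch of how the stated simulation settings map onto the OpenMM API; the system construction (force fields, alchemical states, solvation) is omitted, the `system`, `topology`, and `positions` arguments are placeholders, and the use of `LangevinMiddleIntegrator` is an assumption for the "Langevin integrator" mentioned above.

```python
from openmm import LangevinMiddleIntegrator, MonteCarloBarostat
from openmm.app import Simulation, DCDReporter
from openmm.unit import kelvin, picosecond, picoseconds, atmosphere

def sample_state(system, topology, positions, out_dcd="state.dcd"):
    """Run 5000 ps of sampling for one alchemical state with the stated settings."""
    temperature = 298.15 * kelvin

    # Langevin dynamics: 2 fs time step, 1 ps^-1 friction coefficient
    integrator = LangevinMiddleIntegrator(temperature, 1.0 / picosecond,
                                          0.002 * picoseconds)

    # Monte Carlo barostat at 1 atm, volume moves attempted every 25 steps
    system.addForce(MonteCarloBarostat(1.0 * atmosphere, temperature, 25))

    simulation = Simulation(topology, system, integrator)
    simulation.context.setPositions(positions)

    # Save a configuration every 2 ps (1000 steps) for a total of 5000 ps
    simulation.reporters.append(DCDReporter(out_dcd, 1000))
    simulation.step(2_500_000)
```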
669c1a8e01103d79c5b4fa60 | 15 | We compute the alchemical free energy differences among the ligands using BayesMBAR, CCC, and CBayesMBAR with configurations sampled over simulation times ranging from 20 ps to 5000 ps. To ensure statistically meaningful comparisons, we repeat the calculations, including the molecular dynamics simulations, 10 times. The performance of the three methods was evaluated using the same metrics as in the harmonic oscillator example. Since the exact alchemical free energy differences are unknown, the reference free energy differences are calculated using all configurations sampled from all 10 repeats of the molecular dynamics simulations. The mean RMSEs of the free energy differences between end states of all paths are shown in Table . For both the water phase and the protein phase, CBayesMBAR has significantly smaller RMSEs than BayesMBAR, with p-values less than 0.05 for all cases (Fig. and). The RMSE of CBayesMBAR is either significantly (p-value ≤ 0.05) smaller than or statistically indistinguishable from that of CCC (Fig. and). We also compare the three methods using the MAEs of their free energy estimates on individual paths. The results are shown in Table and S3 for the water phase and Table and S4 for the protein phase. Although CBayesMBAR and CCC have reduced RMSEs aggregated over all paths compared to BayesMBAR (Table ), their improvement over BayesMBAR on individual paths is not as consistent as in the harmonic oscillator example. CBayesMBAR and CCC have smaller MAEs than BayesMBAR for some paths, while for other paths, they have larger MAEs. This indicates that incorporating the cycle consistency condition does not uniformly enhance the accuracy of free energy estimates on all paths, although it does improve the overall accuracy.
669c1a8e01103d79c5b4fa60 | 16 | Mean estimated uncertainties of free energy estimates on individual paths are presented in Table and S6 for the water and protein phases, respectively. Similar to the harmonic oscillator example, CBayesMBAR demonstrates smaller uncertainties than BayesMBAR, underscoring improved precision in free energy estimates by leveraging the cycle consistency condition. Comparing the uncertainties and the MAEs of the free energy estimates on individual paths, we observe that whether the cycle consistency condition improves the accuracy of free energy estimates on a path depends on the initial precision of the estimate on the path relative to that of other paths in the cycle. For paths with low precision, CBayesMBAR and CCC significantly improve the accuracy of the estimates, while for paths with high precision, CBayesMBAR and CCC could perform worse than BayesMBAR. This phenomenon occurs because coupling allows noise from low-precision paths to pass into higher-precision paths, thereby slightly reducing the accuracy of the estimates on the latter. Notably, for paths where CBayesMBAR and CCC perform worse than BayesMBAR, CBayesMBAR tends to have smaller MAEs than CCC. This suggests that the cycle consistency condition is more effectively utilized in CBayesMBAR than in CCC to improve the accuracy of free energy estimates on individual paths. In this work, we introduce CBayesMBAR, a new method for computing free energy differences on perturbation graphs with cycles. As a Bayesian approach, CBayesMBAR integrates the cycle consistency condition to couple free energy estimates across all edges of a perturbation graph in a principled manner. It incorporates the cycle consistency condition into the prior distribution of free energies and combines this with sampled configurations to form the posterior distribution, which is then used to estimate the free energies and their uncertainties.
669c1a8e01103d79c5b4fa60 | 17 | This ensures that free energy estimates directly satisfy the cycle consistency condition. Uncertainty estimates, derived from the standard deviation of the posterior distribution, reflect both the sampled configurations and the cycle consistency condition. Through two example applications, we demonstrate that CBayesMBAR significantly improves the accuracy of free energy estimates on perturbation graphs with cycles, outperforming both BayesMBAR and CCC. The computational cost of CBayesMBAR is comparable to that of multiple independent BayesMBAR calculations and is negligible compared to the cost of molecular dynamics simulations. For example, when computing relative protein-ligand binding free energies, CBayesMBAR takes about one minute to compute free energy differences and their uncertainties for all edges in the alchemical perturbation graph, using 2500 configurations from each state and running on a single RTX A5000 graphics processing unit.
645def34fb40f6b3ee74071c | 0 | Bipolar-membrane electrodialysis (BPM-ED) carbon capture uses renewable electricity to drive the capture and release of carbon dioxide (CO2), making it a potentially important tool in the fight against climate change. This process employs BPMs to generate pH-swings for CO2 capture and has been demonstrated at the lab scale at current densities exceeding 100 mA cm -2 . However, the energy required to drive BPM-ED (>300 kJ mol -1 ) is prohibitive for industrial deployment. This study optimizes BPM-ED using a continuum modeling approach validated by experiment, which quantifies the energy intensity of BPM-ED and resolves the transport and dynamic equilibrium of reactive carbon species in BPMs. Applied-voltage-breakdown analysis identifies the dominant energy losses and elucidates BPM properties that improve the efficiency of BPM-ED. The model reveals that mitigation of generated CO2 bubbles and acceleration of water-dissociation catalysis could reduce the energy intensity to <80 kJ mol -1 at 100 mA cm -2 , demonstrating potential for achieving high rates with substantially reduced energy penalties. This study provides insights into the physics of BPMs immersed in reactive carbon solutions and contributes towards the development of BPMs for electrochemical CO2 reduction, cement production, and other emerging electrochemical decarbonization applications.
645def34fb40f6b3ee74071c | 1 | Carbon dioxide (CO2) emissions account for 76% of total greenhouse-gas emissions and are currently 50% higher than pre-industrial levels. As a result, temperatures are expected to rise at least 2°C unless CO2 is captured and removed from the atmosphere. Traditional direct-air-capture (DAC) technologies use alkaline aqueous sorbents (e.g., KOH(aq)) to capture ambient CO2 as mixtures of (bi)carbonates, which are thermally decomposed into a pure CO2 gas stream for permanent removal (Figure ). Unfortunately, this technology is energy intensive and expensive, largely because of the significant thermal energy penalty (>150 kJ mol -1 ) required to regenerate the sorbent after it has captured CO2 (Figure ). This thermal energy is often provided by burning natural gas, which results in CO2 emissions that reduce the net amount of CO2 removed from the atmosphere by DAC. Electrochemically-mediated carbon capture (EMCC) can address the challenges associated with thermal sorbent regeneration by using low-cost renewable electricity to trigger the capture and release of CO2 from the sorbent. The low-temperature nature of EMCC also circumvents the thermal efficiency limits associated with desorbing CO2 at high temperature in fired reboilers and calciners. Many chemistries have been explored for EMCC, such as nucleophilic sorbents that can absorb or desorb CO2 when oxidized or reduced at an electrode surface and amine-based sorbents that capture CO2 homogeneously and release that CO2 upon the reaction with cupric (Cu 2+ ) ions generated via electrochemical redox. However, these specific EMCC technologies have not been demonstrated at current densities beyond 5 mA cm -2 , necessitating large, costly reactors or long residence times. Bipolar-membrane electrodialysis (BPM-ED) is a promising EMCC technology that uses water dissociation to mediate CO2 capture and sorbent regeneration at relevant current densities (i.e., >100 mA cm -2 ; Figure ). At the heart of this technology is a BPM, which consists of anion- and cation-exchange layers (AEL and CEL) that are laminated together, often with a catalyst layer (CL) at the interface. Under a reverse bias, the strong electric field formed at the interface of the oppositely-charged AEL and CEL layers drives water dissociation into acid (H + ) and base (OH -) (Eq. ( )). The acid produced in the CEL is used to shift the bicarbonate and carbonate equilibrium (Eq. ( )-( )) towards CO2, and the base produced in the AEL is used to regenerate the alkalinity of the carbonate sorbent. A key advantage of BPM-ED is the use of water as the reactant, which enables higher current densities than other EMCC technologies because of the high concentration of water (55 M) in aqueous CO2 capture solutions. Notwithstanding, the energy intensity of BPM-ED typically exceeds 300 kJ mol -1 because of the water dissociation overpotential and ohmic resistances in the BPM. Further optimization is therefore required for BPM-ED to become more efficient than thermal CO2 sorbent regeneration. BPM-ED is also capable of removing CO2 from seawater at current densities exceeding 50 mA cm -2 (Figure ), i.e., direct ocean capture (DOC). By using seawater as the sorbent, instead of a concentrated bicarbonate solution, the natural process of CO2 absorption and conversion into bicarbonates is leveraged, obviating the need for a Capex-intensive air contactor.
In the DOC configuration, the OH -transported from the AEL can be reacted with dissolved bicarbonates in the seawater to form a mineralized carbonate stream that can be precipitated and sequestered, or fed back into the ocean to help reverse the effects of ocean acidification. CO2 that is formed by BPM-ED can be collected outside the device for conversion or storage elsewhere, or directly transported to a CO2 reduction catalyst within the same device for direct conversion. The challenge with BPM-ED is that the energy intensities reported in the literature (300 to 1000 kJ mol -1 CO2) are significantly higher than the minimum thermodynamic energy required to separate CO2 from air (20 kJ mol -1 ). Unfortunately, while the mechanism of in-situ CO2 capture and sorbent regeneration via BPM-ED has been well established experimentally, very few theoretical studies have simulated BPMs immersed in carbon-containing solutions to resolve the dominant energy losses. Sabatino et al. developed process-level simulations to show the critical role BPM performance has on the energy intensity and overall technoeconomic feasibility of BPM-ED EMCC. However, the simulations treated the BPM as a black-box, with no information provided regarding ionic transport, water dissociation kinetics, or the material properties of the BPM that would be required to achieve the necessary performance enhancements. Lees et al. and Kas et al. have helped to resolve the mechanism of in-situ CO2 generation in BPM-based electrolyzers that convert (bi)carbonate capture solutions to carbon monoxide, or methane. However, these models only considered the CEL of the BPM, ignoring the AEL and the water dissociation CL where protons and hydroxide are generated. This simplification renders the models incapable of predicting the primary factors influencing the energy intensity of BPM-ED: carbon crossover, water dissociation overpotential, and ohmic loss. There is also little precedent for modeling the transport of dilute carbon species in seawater feedstocks used for DOC. Models that relate the chemistry and material properties of BPMs to the energy intensity of BPM-ED are therefore warranted. In this work, we demonstrate a comprehensive model of BPM-ED based on our prior work modeling multi-component transport in BPMs, now with the homogeneous reaction kinetics of reactive carbon species (Eq. ( )-( )). The model is validated by comparison with experimental data for various carbon-containing solutions, and is used to elucidate the nature of in-situ CO2 generation and sorbent regeneration in BPMs employed for EMCC. Additionally, concentration profiles and fluxes of all carbon species, protons, and hydroxide anions are resolved such that performance tradeoffs between CO2 generation and carbon crossover can be explored. Lastly, sensitivity analysis provides improved understanding of ideal BPM and system properties to enhance CO2 capture efficiency, setting the stage for BPMs optimized for carbon capture processes. |
645def34fb40f6b3ee74071c | 2 | The BPM model employed here (described in detail in Section S1 of the SI) was designed to mimic the 4-probe experiments performed for direct BPM analysis (described in detail in Section S2.2), modeling the relevant domain as a 1-dimensional (1D) continuum consisting of an 80 µm CEL and an 80 µm AEL sandwiching a 3.5 nm water dissociation CL, with a 25 µm diffusion boundary layer (Section S1.6) on either side of the BPM assembly (Figure ). Modeling the domain as such enables the capture of all relevant concentration profiles and species fluxes.
645def34fb40f6b3ee74071c | 3 | Importantly, the model also explicitly considers the generation and consumption of species via the homogeneous buffer reactions shown below. The effects of forced convection are implicitly considered as dictating the diffusion boundary-layer thickness, but convective effects are not explicitly considered within the modeled domain, because the electrolyte should be quiescent within the diffusion boundary layer. |
645def34fb40f6b3ee74071c | 4 | The simulated membrane potential across the 1D domain is equivalent to the measured membrane potential across the two reference electrodes in the 4-probe experiment. The choice of a 4-probe measurement (and simulation) is critical because it enables the decoupling of the potential losses that occur at the anode and cathode from those that result from and drive transport and kinetics in the BPM. For the purposes of the simulations and energy intensity calculations, we defined the energy intensity of the BPM-ED processes as the membrane potential in the 4-probe experimental cell since the energetics of the terminal electrodes are not directly correlated with the rate of CO2 efflux or sorbent regeneration in the BPM. Moreover, the terminal electrodes are often used to perform reactions that produce value-added streams in EMCC processes. Terminal electrodes were considered in our BPM-ED stack analysis (Section S24) |
645def34fb40f6b3ee74071c | 5 | to compare the associated energy penalty to the overall cell voltage as the number of BPMs in the stack is increased. Within the BPM, the large electric field at the AEL|CEL junction dissociates water to form H + and OH -. The H + react with HCO3 -in the catholyte to form CO2. Once the in situ generated CO2 has exceeded its solubility limit in the catholyte, it exits the aqueous phase in the form of bubbles. OH -from water dissociation reacts with HCO3 -absorbed in the AEL to form CO3 2-that transports back into the anolyte. Spectator co-ions from the anolyte or catholyte can cross through the membrane to reduce the developed pH gradient.
645def34fb40f6b3ee74071c | 6 | where Φ is the electrostatic potential within the electrolyte and membrane phases and R, T, and F are the ideal gas constant, the temperature, and Faraday's constant, respectively. In the above expression (i.e., μ_i = μ_i^0 + RT ln a_i + z_i F Φ), the first term is the reference chemical potential of species i, the second term accounts for changes in the activity of i, and the third term accounts for the electrostatic potential and applies only to charged ionic species (i.e., all species except CO2). The activity (a_i) of a given species is defined by
645def34fb40f6b3ee74071c | 7 | where β, τ, and σ are lumped parameters discussed in more detail in Section S1.3. The dependence on E in equation ( ) is approximately exponential, as shown in previous work. The above dependence of the activity coefficient on the electric field ensures that macroscopic equilibrium is upheld (e.g., μ_H+ + μ_OH- = μ_H2O for water dissociation, or μ_H+ + μ_CO3^2- = μ_HCO3^- for bicarbonate dissociation) (see Section S1.1 for consistency of macroscopic equilibrium), and results in the equilibrium constants of the net-charge-generating reactions taking the form K_eq(E) = K_eq(E = 0) f(E) (10), where K_eq(E) is the equilibrium constant, which is a function of the local electric field, and K_eq(E = 0) is the value of the equilibrium constant with no applied electric field. For neutral species (i.e., CO2) and ions that do not participate in buffer reactions (i.e., K + ), the activity coefficient is unaffected by the electric field. We note that ion-specific interactions likely impact ionic transport and concentrations in the BPM due to the high concentrations (> 1 M) in the BPM. However, these are likely second-order effects, and mixed-interaction parameters are not available for the broad variety of ions in the polymer membranes studied herein.
645def34fb40f6b3ee74071c | 8 | To model the water dissociation catalyst, we treat the CL as a thin neutral region located between the AEL and CEL where the large field in between the two layers drives the dissociation of water by shifting the forward and reverse rate constants as per equations ( ) and (13). While more detailed models of water dissociation catalysis have been presented that explicitly consider catalyst surface effects, we choose a simpler model for water dissociation for computational efficiency. Furthermore, the interactions of carbon species with catalyst surfaces are still not well understood; because this work is more focused on modeling the impact of reactive carbon species on polarization behavior and mesoscale transport, the exact details of the catalyst surface are not crucial to capture. More details on the choice of model for the CL can be found in Section S1.4.
645def34fb40f6b3ee74071c | 9 | The diffusivities, permittivities, and transport properties are calculated as per previous work. The double-layer thickness is simulated by solution of the Wien-effect modified PNP and results from an assumed neutralization thickness of 3.5 nm at the AEL|CEL interface. Discussion of the double layer simulation at the AEL|CEL interface can be found in prior work. Details on how the diffusivities and transport properties in the BPM are calculated can be found in Section S1.5. |
645def34fb40f6b3ee74071c | 10 | Because the model immediately removes bubble-phase CO2 from the domain and does not track the bubble phase explicitly, the gas bubble flux out of the domain is determined by running the model with and without the flux term shown in equation (17); the flux of bubbles out of the domain is taken as the difference between these two cases.
645def34fb40f6b3ee74071c | 11 | In accordance with prior work examining the effect of gas bubbling on electrochemical systems, the CO2 bubbles, which are quite substantial (see Supporting Video S1), accumulate on the surface and reduce the electrochemically active surface area and consequently the current density that can be achieved. The bubble coverage (θ_CO2,bubble) is related to the bubble efflux and can be calculated by 55
645def34fb40f6b3ee74071c | 12 | where the values of 0.024 and 0.55 in the above expression are fitted parameters that are consistent with prior implementations of the above model for bubble coverage. The coverage term (θ_CO2,bubble) calculated using this semi-empirical model represents a site-blocking factor equal to the fraction of the areal cross section impeded by bubbles and thus inaccessible for ionic transport.
645def34fb40f6b3ee74071c | 13 | where the origin (x = 0) is defined at the center of the CL, L_CL is the CL thickness, L_AEL is the AEL thickness, and L_aBL is the anolyte boundary-layer thickness. The thicknesses of both the catholyte and anolyte boundary layers are assumed to be equal to 25 µm and are estimated by assuming approximately Fickian behavior of the CO2 at the onset of bubbling (Section S1.6).
645def34fb40f6b3ee74071c | 14 | The governing equations representing this model were solved using two coupled General Partial Differential Equation (g) Modules in COMSOL Multiphysics 5.6. The modeling domain was discretized with a nonuniform mesh with heavy refinement near all interfaces (membrane-membrane, membrane-electrolyte, and membrane-CL) as well as within the CEL and catholyte where in-situ CO2 generation occurs. The resulting mesh comprised 6000-18000 elements depending on the applied current density. A mesh independence study was performed, and the results were found to be mesh-independent. Critically, to achieve initial convergence, the Donnan equilibria were solved analytically to obtain species concentrations in each of the membrane layers at 0 applied membrane potential and fed to the simulation as initial conditions using hyperbolic tangent analytic functions. The simulations in the present study were solved using the Multifrontal Massively Parallel sparse direct Solver (MUMPS) using Newton's Method with a tolerance of 0.001 and a recovery damping factor of 0.75. For current densities where bubbling occurs, due to numerical instability, the tolerance was increased to 0.008 and the recovery damping factor reduced to 0.35.
645def34fb40f6b3ee74071c | 15 | Once the electrodialysis cell was assembled, a peristaltic pump (Ismatec ISM4408) was used to flow 1 M NaOH (10 mL min -1 ) through the outer chambers, 3 M NaCl (10 mL min -1 ) through the dilute chamber (chamber between CEM and AEM), and the relevant bicarbonate, carbonate, or simulated seawater solution (0.2 mL min -1 ) through the chambers on either side of the BPM. These flowrates remained constant through all measurements. Once all chambers were filled, leads from a SP-300 BioLogic potentiostat were connected to the cathode, anode, and reference electrodes in a four-point measurement configuration. Current density-voltage measurements were then obtained by applying a chosen current across the cathode and anode and measuring the voltage between the two Ag/AgCl reference electrodes. Oxygen evolution was performed at the anode and hydrogen evolution was performed at the cathode. These reactions were isolated from the BPM with the two monopolar membranes positioned on either side. Measurements were started at 0.1 mA cm -2 and increased stepwise through each current density to 100 mA cm -2 (EC-lab® software). Each current step was held constant for 20 minutes to obtain a steady-state voltage. |
645def34fb40f6b3ee74071c | 16 | The final voltage collected at each current step was reported in the current density-voltage plots, with the exception of some of the higher current density steps. During the 1 M KHCO3 and 0.5 M KHCO3 experiments, CO2 bubbles formed at the surface of the electrode at higher current densities (≥ 20 mA cm -2 ) causing a significant amount of noise in the data. For these measurements, the voltage reported was taken as the average over the current step. |
645def34fb40f6b3ee74071c | 17 | To understand the transport and reaction kinetics of reactive carbon species in BPMs, experimental polarization curves were taken in a 4-probe experimental cell for a BPM immersed in three electrolytes relevant to EMCC and DOC applications: 1 M KHCO3, 0.5 M KHCO3, and simulated seawater (0.00211 M NaHCO3 + 0.5 M NaCl). Simulations of the BPM under polarization in these varying electrolytes were run and the simulated polarization curves were compared to the experimentally measured polarization curves (Figure , markers). Strong agreement between theory and experiments is observed in all three carbon-containing electrolytes for a single set of fitting parameters (Table ). Remarkably, the simulation can even capture the incredibly non-intuitive polarization behavior occurring at current densities < 20 mA cm -2 (Figure ). For these current densities, there is an initial onset in current density at ~0.4 V of applied membrane potential for both the 0.5 M KHCO3 and 1 M KHCO3 BPMs. The current density of these BPMs increases approximately linearly until ~0.7 V when the current density takes off exponentially. Conversely, for the seawater BPM, the current density does not have an initial takeoff at 0.4 V, so between 0.4 and 0.8 V the seawater BPM drives less current density than the KHCO3 BPMs. However, past the second onset at ~0.7 V, the current generated in the seawater BPM exceeds that of the KHCO3 BPMs. Notably, the initial linear takeoff of the KHCO3 BPMs at 0.4 V of membrane potential, along with the intriguing intersection and crossover between the seawater and (bi)carbonate polarization curves at ~0.75 V, represent heretofore unexplained phenomena that the model is capable of replicating with high accuracy. To uncover the origin behind these phenomena and deconvolute the individual contributions to the polarization curves of BPMs in reactive carbon solutions, the rates of the individual contributions to current density were calculated within the CL of the BPM. Within the CL, the current density must either be due to the crossover of unreactive co-ions (K + , Na + , or Cl -), or to the presence of electric-field-enhanced, net-charge-generating dissociation reactions. These contributions to the overall polarization curve are determined and shown in Figure and for BPMs operating in 1 M KHCO3 and simulated seawater, respectively. As expected, the current density for the simulated seawater case is primarily driven by salt crossover at low potentials and water dissociation at high potentials, as shown in previous literature. The 1 M KHCO3 case, however, is more intriguing. While previous studies have suggested that the low applied potential current onset for BPMs in weak buffer electrolytes is entirely driven by titration currents resulting from dissociation of the weak acid buffer (i.e., the HCO3 -anion in this case) in the CL, the model suggests that current density in the initial linear feature is still primarily dominated by field-enhanced water dissociation. Field-enhanced dissociation of the buffering anion accounts for up to 50% of the observed current density. Therefore, the model suggests that the use of 1 M KHCO3 buffer electrolytes forces an early onset of the electric-field-enhanced water-dissociation reaction. |
645def34fb40f6b3ee74071c | 18 | The simulations demonstrate that the accelerated current onset for KHCO3-exchanged BPMs is largely due to a reduction in the rate of interfacial recombination caused by the reaction of water-dissociation-generated OH -with HCO3 -to form CO3 2-via what is essentially an indirect HCO3 -dissociation pathway (water dissociation followed by bicarbonate to carbonate interconversion), and that the eventual increase of the seawater current density beyond the KHCO3 current densities is due to the seawater BPM possessing a larger electric field at a given membrane potential (see Section S6 for more detail regarding the reasoning for the observed curvature of the BPM polarization curves in these carbon-containing electrolytes).
645def34fb40f6b3ee74071c | 19 | The validated model was employed to resolve local concentrations and microenvironments within BPMs exchanged with the aqueous reactive carbon solutions. The simulated concentration profiles of HCO3 -, CO3 2-, CO2, and H + within a BPM exchanged with 1 M KHCO3 at applied potentials of 0 V, 0.2 V, 0.5 V, and 1.0 V, are shown in Figure . Concentration profiles for OH -as well as local pH can be found in Figure -15. At equilibrium (0 V), both electrolyte boundary layers are fixed at their equilibrium concentrations for all species. Furthermore, within the BPM, there are no concentration gradients, and the concentrations are consistent with those determined by Donnan equilibrium with the interstitial pore volume. Because of Donnan equilibrium with the catholyte, the CEL is fully exchanged with K + cations (Figure -17), and the AEL is fully exchanged with (bi)carbonates. Therefore, the pH in the CEL at equilibrium is near neutral, and that in the AEL is alkaline (~pH 9) (Figure ). |
645def34fb40f6b3ee74071c | 20 | As the applied potential increases, the pH within the CEL decreases significantly due to the generation of H + via water dissociation and bicarbonate dissociation. Consistent with the equilibria presented in equations ( )-( ), as the pH in the CEL decreases, the concentrations of OH -, CO3 2-, and HCO3 -decrease, and the concentration of CO2 in the CEL increases. Additionally, diffusion gradients manifest in K + to maintain electroneutrality with the water-dissociation-generated H + . Importantly, the model enables spatial resolution of in-situ CO2 regeneration, which is shown to occur only at the CEL|cBL interface, because the concentrations of (bi)carbonates are too small within the CEL itself (due to Donnan exclusion) to facilitate reaction with water-dissociation-generated H + . The H + from water dissociation exits the CEL and reacts with HCO3 -in the electrolyte to form CO2 that reaches its solubility limit and bubbles out, as is consistent with the experimental visual observation (Supporting Video S1) and prior studies of BPMs operated in KHCO3 solutions. On the AEL side, pH increases do not occur as readily with increasing cell potential, because the AEL is fully (bi)carbonate exchanged at equilibrium and the presence of (bi)carbonates at these high (~1-2 M) concentrations buffers changes in pH and pOH. This is important to note, because the pH gradient in these systems is typically assumed to be directly proportional to the electrostatic potential profiles (Figure -23). However, these results demonstrate that the weak-acid buffer breaks the scaling relationship between the applied potential and the developed pH gradient by competitively consuming generated OH -anions.
645def34fb40f6b3ee74071c | 21 | Nonetheless, past potentials of 0.5 V, the pH does increase within the AEL. Past this potential, the concentration of dissolved CO2 decreases significantly due to equilibrium reactions with water-dissociation-generated OH -and is essentially entirely consumed within the AEL. Concurrently, HCO3 -is consumed to form CO3 2-, so the concentration of HCO3 -in the AEL decreases, and that of CO3 2-increases at high applied potentials. Once essentially all of the HCO3 -in the AEL has been consumed (V > 0.7 V), the pH in the AEL increases much more rapidly, and the generated OH - leaves the BPM and reacts with HCO3 -anions in the anolyte. While the concentration of HCO3 - in the AEL tends to zero at 1.0 V of applied potential, the HCO3 - concentration in the anolyte remains near its bulk value, demonstrating that there is an abundance of reactive (bi)carbonate to consume water-dissociation-generated OH -anions. It is important to note that while in-situ CO2 regeneration on the cathode side occurs at the CEL|cBL interface due to exclusion of (bi)carbonates from the CEL, the regeneration of the CO3 2-sorbent from the HCO3 - occurs within the entirety of the AEL, CL, and anolyte domains. Because HCO3 -can exchange into the AEL and CL, the direct field-enhanced dissociation of HCO3 -anions in the CL, as well as the reaction of HCO3 -with water-dissociation-generated OH -throughout the CL, AEL, and anolyte, regenerate the CO3 2-carbon capture sorbent.
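To make the pH-driven speciation described above concrete, the short sketch below evaluates the equilibrium fractions of CO2(aq), HCO3 -, and CO3 2- as a function of pH using textbook carbonic-acid pKa values at 25 °C; these constants are illustrative only and are not the (field-dependent) equilibrium constants used in the model.

```python
import numpy as np

# Textbook apparent pKa values of the CO2(aq)/HCO3-/CO3^2- system at 25 C
PKA1, PKA2 = 6.35, 10.33

def carbonate_fractions(pH):
    """Equilibrium fractions of CO2(aq), HCO3-, and CO3^2- at a given pH."""
    h = 10.0 ** (-np.asarray(pH, dtype=float))
    k1, k2 = 10.0 ** -PKA1, 10.0 ** -PKA2
    denom = h * h + k1 * h + k1 * k2
    return h * h / denom, k1 * h / denom, k1 * k2 / denom

for pH in (4, 7, 9, 11):
    f_co2, f_hco3, f_co3 = carbonate_fractions(pH)
    print(f"pH {pH:>2}: CO2 {f_co2:.2f}  HCO3- {f_hco3:.2f}  CO3^2- {f_co3:.2f}")
```

Acidifying the catholyte therefore pushes essentially all dissolved inorganic carbon toward CO2(aq), while alkalizing the AEL and anolyte pushes it toward CO3 2-, which is the pH swing exploited by BPM-ED.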
645def34fb40f6b3ee74071c | 22 | A similar analysis of concentration profiles was also performed for BPMs immersed in simulated seawater (Figure -21, S24-25). The model shows behavior very similar to that of the 1 M KHCO3 case with respect to the equilibria. However, due to the very low concentrations of reactive carbon in the seawater electrolyte, the BPM is primarily exchanged with Na + or Cl -for the CEL or AEL, respectively. On the CEL side, the catholyte rapidly becomes mass-transfer limited in HCO3 -, impeding the formation of CO2 via the in-situ regeneration mechanism. However, on the AEL side, because both the AEL and the anolyte contain reactive HCO3 - (a larger reactive volume), the mass-transport limitations are less severe for the regeneration of the CO3 2-, suggesting that the BPM-ED system for DOC is more effective for mineralization than for CO2 release.
645def34fb40f6b3ee74071c | 23 | Lastly, because the equilibrium reactions are key to both the CO2 and CO3 2-regeneration, the ratios of the local forward and backward rates of each of reactions (1)-( ), representing the deviation of these reactions from equilibrium, were plotted throughout the modeled domain for both the 1 M KHCO3 and simulated seawater cases (Figures ). Interestingly, owing to their fast kinetics, reactions (1), (3), and (5) reach equilibrium throughout the entire domain, except within the CL. However, reactions ( ) and ( ) significantly deviate from equilibrium within the catholyte where CO2 regeneration occurs. Indeed, they favor their reverse direction, demonstrating that the water-dissociation H + flux drives these reactions out of equilibrium towards regenerating CO2. This analysis demonstrates the importance of the BPM pH swing in driving buffer reactions within the ion-conducting polymer domains.
645def34fb40f6b3ee74071c | 24 | While an understanding of the local concentrations and environments within the BPM provides valuable information regarding the phenomena occurring within it, knowledge of the fluxes of the various ionic species in these membranes is just as crucial. The primary charge carrier within an ion-exchange membrane dictates its conductivity, and understanding charge transport through the BPM is key to mitigating detrimental phenomena such as salt crossover. Plotting the effective transference numbers, defined here as the fraction of current carried by a given species, of all ions as a function of applied potential throughout the BPM (Figure ; Figure -32) reveals the nature of charge transport within the BPM operated with 1 M KHCO3. At low applied potentials (0 V and 0.2 V), the current density is primarily due to co-ion crossover (Figure ) (K + crossing from the anolyte to catholyte or HCO3 -crossing from the catholyte to anolyte), with K + the dominant carrier of current, due to its higher diffusion coefficient in the BPM. As the applied potential is increased and electric-field-enhanced water dissociation occurs, H + rapidly becomes the primary charge carrier in the CEL (Figure ). For the AEL, on the other hand, CO3 2-becomes the primary charge carrier at moderate potentials (0.4 -0.7 V), and HCO3 -possesses a negative effective transference number (Figure ). This flux behavior is consistent with the interpretation that HCO3 -transports as a counter-ion through the AEL to the CL, where it is either directly dissociated by the large electric field or reacts with water-dissociation-generated OH -to form CO3 2-that then transports directly out of the CL and serves as the major carrier of charge in the AEL. At higher potentials (> 0.7 V), where HCO3 -is depleted, OH -becomes the primary charge carrier in the AEL (Figure ). Interestingly, H + and OH -do not immediately become the primary carriers of charge in the BPM when water dissociation begins at ~0.5 V, and there appears to be a penetration depth moving out from the CL through which the fronts of H + and OH -flux dominate. This penetration depth is dependent on the current density and is due to out-of-equilibrium buffer reactions that convert the generated OH -or H + to alternative charge-carrying ions within the BPM domain. At large enough current densities, the buffering species within the BPM are depleted, and H + and OH -become the dominant charge carriers throughout the entirety of the BPM. This flux analysis was also carried out for the simulated seawater case and similar trends were observed (Figures ). However, due to the small concentration of reactive carbon in the seawater, OH -and H + much more rapidly become the primary charge carriers in the BPM.
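For clarity, the effective transference number used above can be written (in shorthand introduced here, not necessarily the paper's exact notation) as

t_i,eff(x) = z_i F N_i(x) / i_total,

where z_i is the charge number of species i, N_i its molar flux at position x, F Faraday's constant, and i_total the total ionic current density; a species whose flux runs counter to the direction implied by its charge (such as HCO3 - migrating toward the CL) therefore carries a negative effective transference number.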
645def34fb40f6b3ee74071c | 25 | Taken together, this knowledge suggests that operation of BPM-ED systems at high current density could be beneficial for several reasons. At higher current densities, salt crossover is less extensive due to the dominance of H + and OH -as charge carriers. Additionally, at these higher current densities, OH -fully replaces the initially exchanged (bi)carbonates as the charge carrying species in the AEL, which reduces ohmic losses due to the higher mobility of OH -. 60 |
645def34fb40f6b3ee74071c | 26 | Key to any carbon-capture process is understanding the efficiency and energy requirements for operation at various rates. One key promise of EMCC processes is that they could potentially enable lower energy intensities than thermal processes. 2 Thus, the simulation was employed to determine the coulombic efficiency (Figure ) and energy intensity (Figure ) for CO2 regeneration as a function of applied current density. Coulombic efficiency is defined as the fraction of the applied current density that goes toward regenerating CO2 in the catholyte boundary layer and can be thought of as the product of the water dissociation efficiency and the efficiency of reacting water-dissociation-generated H + with HCO3 -to form CO2.
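As a rough, membrane-level guide (an approximate relation stated here for orientation, neglecting the terminal-electrode and stack contributions treated later), the energy intensity follows from the membrane potential and the coulombic efficiency:

E ≈ F · V_mem / η_CE,

so that, for example, V_mem = 1.0 V and η_CE = 0.9 give E ≈ 96485 × 1.0 / 0.9 ≈ 107 kJ mol CO2 -1 , consistent with the ~100 kJ mol CO2 -1 values discussed below.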
645def34fb40f6b3ee74071c | 27 | For all electrolytes tested (0.5 M KHCO3, 1 M KHCO3, and simulated seawater), the coulombic efficiency is initially 0% due to the dominance of salt crossover at low current densities. Nonetheless, the efficiency rapidly increases to ~80% for all electrolytes due to the onset of the water dissociation reaction. Crucially, water dissociation in all electrolytes rapidly reaches ~99% efficiency (Figure ), consistent with a prior study of the Fumasep BPM, meaning that coulombic efficiencies lower than 99% are due to inefficiencies in conversion of the generated H + . For the simulated seawater BPM, the coulombic efficiency monotonically decreases as current density increases. The rapid decrease of coulombic efficiency for simulated seawater is due to complete consumption of the meager 0.00211 M HCO3 -in the catholyte (i.e., the CO2 regeneration is HCO3 -limited). For the simulated seawater, once all the HCO3 -in the catholyte diffusion boundary layer is consumed, the H + flux from the BPM exits the boundary layer unreacted, acidifying the bulk electrolyte. The impact of unreacted H + can be seen in the effective transference number of H + in the catholyte, which reaches 1 at high current densities (Figure ). It is important to note that the coulombic efficiency defined herein is the coulombic efficiency of H + liberating CO2 within the diffusion boundary layer. Unreacted H + could theoretically react with further dissolved inorganic carbon to liberate greater amounts of CO2 by feeding more electrolyte downstream. However, in this work, we seek to consider just the fluxes in the direct vicinity of the BPM to explore the effects of mass transfer at the micrometer scale.
645def34fb40f6b3ee74071c | 28 | This effect may vary near the terminal electrodes in a BPM-ED stack depending on the consumption of protons and hydroxides by the redox reactions. For the 0.5 M and 1 M KHCO3 electrolytes, the coulombic efficiency similarly decreases after achieving a maximum of ~80%. However, instead of being HCO3 -limited, the reduction in coulombic efficiency at intermediate current densities is caused by the consumption of H + by CO3 2-to form HCO3 -(Figures S38-S39, Section S17). Once all of the CO3 2-in the catholyte has been consumed, there is an inflection point in the coulombic efficiency at which the CO2 regeneration increases again, achieving higher coulombic efficiencies approaching 90% for 1 M KHCO3 and 95% for 0.5 M KHCO3 at current densities approaching 100 mA cm -2 . The coulombic efficiencies reported in these electrolytes are also consistent with prior work by Eisaman et al. Between 0.5 and 1 M KHCO3, 0.5 M KHCO3 possesses a higher coulombic efficiency at high current density due to its lower bulk pH, which inhibits the loss of water-dissociation-generated H + to H + -OH -recombination and H + -CO3 2-recombination relative to the 1 M KHCO3 case. The above analysis demonstrates that the regeneration of CO2 in the diffusion boundary layer is dictated by complex interactions between H + , HCO3 -, and CO3 2-and their associated buffer kinetics.
645def34fb40f6b3ee74071c | 29 | Considering the energy intensity for these processes, the simulated seawater BPM requires a larger energy intensity at high current densities due to mass-transport limitations associated with the low concentrations of dissolved carbon species and the resulting low coulombic efficiency. However, at low current densities, where the flux is well matched to the concentration of (bi)carbonates, the energy intensity for CO2 generation by the BPM is < 100 kJ mol CO2 -1 . Further, the simulated energy intensity for CO2 desorption at 3.3 mA cm -2 is 108 kJ mol CO2 -1 , which agrees quite well with the 155 kJ mol CO2 -1 experimental value reported by Digdaya et al., considering that the energy requirement in that report included losses associated with a ferro/ferricyanide redox couple at the working and counter electrodes (~40 kJ mol CO2 -1 at an equivalent current density).
645def34fb40f6b3ee74071c | 30 | Conversely, the 1 and 0.5 M KHCO3 cases possess lower energy intensities due to higher concentrations of HCO3 -anions in the catholyte boundary layer for reaction, maintaining energy intensities near 100 kJ mol CO2 -1 even at current densities of 100 mA cm -2 . The 0.5 M KHCO3 case possesses a lower energy intensity due to its higher coulombic efficiency at high current densities. These energy intensities compare well with thermal processes, and the current densities achieved far exceed those demonstrated for other EMCC processes. The coulombic efficiency of sorbent regeneration, in terms of the transference number for CO3 2-at the anolyte boundary, is nearly 100% for 1 and 0.5 M KHCO3 (Figure ). As stated previously, this is likely due to an increased space time for HCO3 -conversion into CO3 2-. The ability of the AEL to exchange with both HCO3 -and water-dissociation-generated OH -enables a high activity for these ions to react and form CO3 2-in the anolyte boundary layer and within the AEL and CL. The buffer reactions in the simulated seawater BPM are still limited by HCO3 - concentration, but the conversion efficiency remains > 30% at all current densities (Figure ).
645def34fb40f6b3ee74071c | 31 | Lastly, we note that a BPM implemented in a carbon capture loop (as shown in Figure ) will likely operate with a gradient in pH and dissolved carbon species, as the electrolyte fed back to the anolyte side will be slightly acidified and have some carbon removed. Thus, the simulation was run with a gradient across the BPM in both pH and dissolved inorganic carbon, wherein the catholyte was fed with 1 M KHCO3 (pH = 8.3) as before, but the anolyte was fed with 0.9 M KHCO3 (pH = 7.5) to represent 10% stripping of carbon by BPM-ED and the subsequent CO2 stripping process. As shown in Figure -43, the choice to operate under this gradient has no effect on the energetics or efficiency for CO2/CO3 2-recovery, except at very low current densities (i < 10 mA cm -2 ) due to a large diffusive flux at 0 V associated with the imposed concentration gradient across the BPM. All told, the high coulombic efficiencies and low energy intensities (competitive with thermal desorption) demonstrated here indicate that BPM-ED possesses promise for application as a carbon-removal technique.
645def34fb40f6b3ee74071c | 32 | Management of bubbles is key in the development of electrochemical devices, and this is especially true for carbon-capture devices that must mediate the generation of CO2 gas from an aqueous (bi)carbonate electrolyte. Work by Diederichsen et al. identified bubble management as a key challenge in continuous EMCC systems due to power losses that occur from loss of electrochemically active surface area due to CO2 bubbling. The model presented here enables a simulation of the bubble coverage on the CEL, as well as an understanding of how the bubble coverage affects the energy requirements for BPM-ED EMCC. |
645def34fb40f6b3ee74071c | 33 | First, it is important to note that due to the mass-transport limitations in the seawater case discussed above, bubbling occurs only for the BPMs exchanged with KHCO3. Analysis of the bubble coverage in these simulations (Figure ) shows that the bubble coverage of the BPM exceeds 30% at high current densities due to supersaturation of the electrolyte. This high bubble coverage is consistent with visual observation of the CEL during operation (Supporting Video S1). Additionally, the analysis of the energy requirements with and without losses due to bubble coverage reveals that bubble effects account for nearly 10 kJ mol -1 of energy loss (Figure ) for both KHCO3 cases at 100 mA cm -2 . Therefore, managing bubbles is indeed crucial to improving energy efficiencies for BPM-ED EMCC. Previous knowledge from water electrolysis shows that bubble coverage losses can be ameliorated by controlling the flow rate, increasing the gas headspace pressure, or employing a surfactant to reduce surface tension and bubble size. The BPM morphology and surface chemistry can also be modulated to alter the wettability and coverage of bubbles on the surface. To highlight the importance of bubble management, further experiments were performed.
645def34fb40f6b3ee74071c | 34 | In these studies, the BPM-ED system was run with various electrolyte flowrates, and the extent of CO2 bubbling was quantified via the noise in the measured transmembrane potential, calculated as its standard deviation (details on these experiments can be found in Section S21). Figure compares the standard deviation of the potential as a function of applied current density for three different flow rates (0.2, 1, and 5 mL min -1 ). As shown in the figure, the extent of bubbling is much more severe for lower flowrates, and the onset of bubbling occurs at lower current densities, as predicted by the model. Indeed, in the presence of bicarbonate (Figure ), the average voltage is highest for the lowest flowrate, indicating that an increase in bubble coverage under reduced flows leads to decreased surface area and increased resistance. These experimental data further support the importance of flowrate control and bubble mitigation in maintaining low cell voltages when operating BPMs for carbon removal, due to the surface area losses driven by CO2 bubble formation on membrane surfaces.
645def34fb40f6b3ee74071c | 35 | Because the energetic requirements of BPM-ED EMCC are critical to the optimization and implementation of these systems, voltage-contribution analyses of the BPM operating in 1 M KHCO3 and simulated seawater were conducted to determine the major sources of power loss in the BPM-ED system. As expected, the thermodynamic potential required to drive the pH gradient across the CL via electric-field-enhanced water dissociation (Section S1.2) makes up the largest contribution to the applied potential. The next largest potential loss in the system is the kinetic overpotential for the water dissociation in the CL (Section S1.2). Unfortunately, while the thermodynamic requirements cannot be altered, the kinetic overpotential can be lowered by employing a better water-dissociation catalyst. For the 1 M KHCO3 case, bubble coverage losses also make up a significant portion of the applied potential, particularly at high current densities, and those could be managed by better controlling flow as discussed in section 3.5 or through other bubble-mitigation strategies. Lastly, ohmic losses through the CEL and AEL are quite low relative to other losses, but do increase with current density and will likely become critical if the BPM-ED system is to operate at currents approaching 1 A cm -2 . Although minor, these losses can be decreased by improving the ionic conductivity of the AEL and CEL. This applied-voltage breakdown highlights that the greatest areas for improvement in the BPM-ED performance are in improving the water dissociation catalyst and in implementing bubble-management strategies.
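Schematically (in shorthand introduced here rather than the paper's notation), the applied-voltage breakdown above can be summarized as

V_mem ≈ Δφ_thermo + η_WD + η_bubble + i (R_CEL + R_AEL),

where Δφ_thermo is the thermodynamic potential needed to sustain the pH gradient across the CL, η_WD the water-dissociation kinetic overpotential, η_bubble the loss from CO2 bubble coverage, and the last term the ohmic drops across the two membrane layers.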
645def34fb40f6b3ee74071c | 36 | Prior studies have demonstrated that increasing the ion-exchange capacity (IEC), or fixed charge, in the BPM can enhance the rate of water dissociation by increasing the electric field between the AEL and CEL. As shown in Figures , increasing the IEC does indeed improve the rate of water dissociation significantly, and consequently the rate of CO2 flux at a given applied potential. Improving the performance of the water-dissociation catalyst has also been proven to be key in reducing energy requirements for BPM operation. As such, the model was run with varying values of the catalyst parameter α, which represents the sensitivity of water dissociation to the electric field. The results show that the performance of the BPM in reactive carbon solutions is indeed highly sensitive to the catalyst activity (Figure ). Taken together, these results indicate that improving bubble removal from the BPM surface, increasing the IEC of the ionomers, and using a better water-dissociation catalyst would drastically enhance the performance of BPM-ED EMCC, lowering the energy intensity to well below 100 kJ mol CO2 -1 even at current densities at or exceeding 100 mA cm -2 for the 0.5 and 1 M KHCO3 electrolytes. Applied-voltage breakdowns (Figure ) demonstrate that these improvements are due to improved ionic conductivity from the enhanced IEC, significantly reduced overpotentials for water dissociation due to the high IEC and improved catalytic behavior, and the assumed mitigation of bubble losses in the system.
645def34fb40f6b3ee74071c | 37 | The above calculations correspond to the energy intensity of a single BPM unit in an overall stack, and the number of BPMs in the stack will be key to dictating efficiency, especially when considering the electrode potentials of the anode and cathode. As shown in Figure , considering the effect of the anode and cathode potentials does increase the energy intensity of the process quite significantly (from 88 kJ mol CO2 -1 to 176 kJ mol CO2 -1 at 100 mA cm -2 ). However, as the number of BPMs in the stack increases, the energy intensity rapidly approaches that of the single BPM unit simulated herein. For a BPM-ED stack containing ten of the Optimal BPMs, the theoretical energy intensity is still below 100 kJ mol CO2 -1 at 100 mA cm -2 (see Section S24 for the details of this calculation). Ultimately, this analysis reveals the exciting result that newly developed BPMs have achieved such substantial strides in water-dissociation catalysis that CO2 desorption via BPM-ED could be achieved at energy intensities lower than 100 kJ mol CO2 -1 even for current densities exceeding 100 mA cm -2 , making this technology quite promising for carbon removal when considered in a BPM-ED stack configuration.
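The stack scaling can be illustrated with the sketch below; the split of the single-cell value into a fixed terminal-electrode contribution and a per-BPM contribution (both taken as 88 kJ mol CO2 -1 to match the numbers quoted above) is an assumption made for illustration, not the detailed stack analysis of Section S24.

```python
# Illustrative-only stack scaling of the CO2 energy intensity at 100 mA cm^-2.
# Assumes the single-cell value splits into a per-BPM term and a fixed
# terminal-electrode term (values chosen to match the 88 and 176 kJ mol^-1
# figures quoted above).
E_BPM = 88.0         # kJ per mol CO2 contributed by each BPM unit
E_ELECTRODES = 88.0  # kJ per mol CO2 contributed by the terminal electrodes (n = 1)

def stack_energy_intensity(n_bpm):
    """Energy intensity per mole of CO2 for a stack with n_bpm BPM units."""
    return E_BPM + E_ELECTRODES / n_bpm

for n in (1, 2, 5, 10):
    print(f"{n:>2} BPMs: {stack_energy_intensity(n):6.1f} kJ mol^-1 CO2")
# 1 BPM -> 176.0; 10 BPMs -> 96.8, i.e. below 100 kJ mol^-1 as stated above
```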
645def34fb40f6b3ee74071c | 38 | Electrochemically mediated carbon capture strategies have the potential to displace thermal desorption techniques due to their ability to operate with lower energy requirements at ambient temperatures and pressure. Bipolar-membrane electrodialysis (BPM-ED) is a promising technique that uses the H + and OH -fluxes generated by electric-field-enhanced water dissociation in the BPM to simultaneously drive the release of CO2 and the recovery of CO3 2-from an electrolyte containing reactive carbon species. Herein, we developed simulations, which closely match experiments, and resolved the rates of the various kinetic and transport processes (field-enhanced water or bicarbonate dissociation, homogeneous buffer reactions, salt crossover, etc.) occurring within BPMs immersed in three reactive carbon solutions relevant to carbon capture: 1 M KHCO3, 0.5 M KHCO3, and simulated seawater. Simulations reveal that an early onset in observed current density for (bi)carbonate-exchanged BPMs is due to field-enhanced dissociation of the bicarbonate anions as well as a reduction in H + -OH -recombination due to competitive reaction of OH -with HCO3 -to indirectly form CO3 2-.