For example, in comparing capacitive deionization and reverse osmosis (RO) for brackish water desalination, pretreatment and brine disposal were excluded from one study because of their high sensitivities to variables (feedwater composition and geographical location, respectively) that were not relevant to the objectives of the study. Excluding these unit operations allowed the researchers to focus on core comparisons between capacitive deionization and RO without introducing additional unknowns that were expected to be independent of desalination technology choice (e.g., brine disposal method). Therefore, the definition of the conceptual system can be an iterative process to maintain focus on unit processes that are relevant to the sustainability indicators of interest or the targeted insight.
Depending on the objectives of the QSD study (e.g., the number of technologies to be evaluated), the scale of the system can vary from a single unit process to a network of multiple subsystems across different disciplines and industries. Similarly, resolution of the design, simulation, and analyses can be tailored to the different unit processes within the system. For example, a study optimizing the detailed design of RO technology focused on the selection and design of equipment that is directly related to the process (e.g., membrane units, pumps), while research that investigated management strategies for shale gas water and wastewater required researchers to account for multiple processes from water source to final disposal, including water acquisition, transport, storage, and wastewater treatment. Additional examples of QSD objectives and corresponding system boundaries can be found in Table (ESI).
Decision variables may include system configurations, detailed design decisions for individual unit operations, operating conditions, and end-of-life options. For established technologies, values and distributions of the decision variables are typically derived from literature and/or industry. For example, while RO systems can operate across a broad range of contaminant rejection levels, analyses of RO for micropollutant treatment (e.g., endocrine disrupting compounds, antibiotics, pharmaceuticals) will likely evaluate higher and narrower ranges of rejection than those evaluated in seawater treatment. However, for early-stage technologies with lower technology/manufacturing readiness levels (TRLs/MRLs), values of the decision variables may be highly uncertain as designers and/or operators characterize the landscape of technical feasibility, thereby necessitating the evaluation of wide ranges for decision variables and robust consideration of uncertainty in QSD execution (Section 5.2).
Technological parameters are parameters intrinsic to a technology's design and operations (e.g., material properties, reaction coefficients). Unlike decision variables, values of technological parameters are subject to the technology rather than the designer or the operator. For instance, the design and simulation of a biological treatment process may require an assumption of the maximum specific growth rate (a technological parameter), which is intrinsic to the microorganisms. Notably, if a parameter is not intrinsic and can instead be calculated (e.g., cell growth rates can be calculated using maximum specific growth rates, temperature, and relevant constituent concentrations), this parameter should be modeled through algorithms rather than being included as an independent input. Similar to decision variables, magnitudes of technological parameter uncertainty can be related to TRLs/MRLs, with early-stage technologies having larger uncertainty due to the lack of knowledge. However, one can leverage theoretical values or technological limitations to constrain parameter uncertainty during QSD execution, and the results of QSD can be used in turn to set targets for technological parameters in research and development (Section 5.3). For instance, 100% can be used as the theoretical upper limit for contaminant removal, or an effluent P around 0.3-1 mg•L⁻¹ may be assumed as the performance limit for mainstream enhanced biological phosphorus removal to account for the current state of technology. 74
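To illustrate this distinction, the short Python sketch below (not from the source; all parameter names and values are hypothetical) computes a cell growth rate from intrinsic technological parameters (maximum specific growth rate, half-saturation constant), the simulated substrate concentration, and a contextual parameter (temperature) using Monod kinetics with a simple temperature correction, one common way such a derived quantity can be modeled through algorithms rather than treated as an independent input.

```python
def specific_growth_rate(mu_max, K_s, S, T=20.0, theta=1.07):
    """Monod growth kinetics with an empirical temperature correction.

    mu_max : maximum specific growth rate at 20 degC (1/d), intrinsic technological parameter
    K_s    : half-saturation constant (mg/L), intrinsic technological parameter
    S      : substrate concentration (mg/L), part of the simulated system state
    T      : temperature (degC), a contextual parameter
    theta  : temperature-correction coefficient (hypothetical value)
    """
    return mu_max * S / (K_s + S) * theta ** (T - 20.0)

# Example call with hypothetical values, for illustration only
print(specific_growth_rate(mu_max=6.0, K_s=20.0, S=5.0, T=15.0))
```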
Contextual parameters represent non-technological values that influence the sustainability of technologies, especially at the deployment stage. These parameters are intended to capture the specific circumstances in which the technology would be deployed. They reflect the local and regional nature of primary stressors to humans and the environment, accounting for economic (e.g., tax rates, unit costs), environmental (e.g., ambient temperatures), social (e.g., household size), political (e.g., regulations), and other conditions in which the system exists. As one example, outcomes associated with energy-intensive technologies may be especially sensitive to the characteristics of the local electricity grid, which can vary widely across and within countries.
For instance, while industrial users in the United States paid a mean of $0.06 per kWh of electricity in March 2020, average state prices ranged from $0.04 (Oklahoma) to $0.26 (Hawaii). Similarly, variations in the local electricity fuel source mix can substantially influence greenhouse gas (GHG) emissions, with the emissions of electricity supplied by the Bonneville Power Administration (0.081 kg CO2 eq•kWh⁻¹) being an order of magnitude smaller than those of the Ohio Valley Electric Corporation (1.0 kg CO2 eq•kWh⁻¹). Additionally, although contextual parameters are independent of the technology being evaluated, they may nonetheless directly or indirectly affect the technical performance of the system. For example, solar irradiance could directly impact the efficacy of sunlight-mediated disinfection technologies, while local and national regulations could indirectly affect the performance of waste sludge treatment systems through legal limits (e.g., how much sludge can be accepted by a landfill facility). For contextual parameters, the resolution of the data is an important consideration. While higher-resolution data (i.e., data more specific to the deployment site) will yield results that more accurately reflect the deployment context, researchers may also be limited by data availability or the feasibility of new data collection. Generally, the objectives of the QSD study will determine the appropriate resolution of contextual parameters (e.g., locality-specific, regional, or nationally or internationally representative averages), and additional analyses can be performed to assess how uncertainty in these parameters impacts system sustainability. In particular, spatial analysis with integrated Geographic Information Systems (GIS) can provide location-specific contextual parameters for more tailored conclusions on the sustainability of deployed technologies (Section 5.3).
After defining the problem space, the next step in QSD is to automate the process of translating QSD inputs into a system inventory (i.e., all mass and energy flows entering and leaving the system), which can be accomplished by establishing mathematical representations of the system across its life cycle. This step in QSD leverages design algorithms and process algorithms, where design algorithms correspond to the construction and end-of-life stages and process algorithms correspond to the operation and maintenance stage. Both design and process algorithms are linked to QSD inputs such that the system performance -including mass and energy balances across life cycles -responds to changes in decision variables, technological parameters, and contextual parameters. For example, in designing a roadway drainage system, the choice of the stormwater conveyance element (grass swales, bioswales, or storm sewers) would be a discrete decision variable connecting with design algorithms to determine the mass of materials needed for construction (Table , Example 1). Similarly, technological parameters related to contaminant removal in grass swales and bioswales would connect with the contaminant mass balance using process algorithms. The following section illustrates the process of developing simulation algorithms to ensure there are mathematical connections between QSD inputs and indicators of interest.
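As a hedged illustration of how a design algorithm can connect a discrete decision variable to the construction-phase inventory, the Python sketch below maps a conveyance choice and length to material masses; the material intensities are hypothetical placeholders rather than values from the source.

```python
# Hypothetical material intensities per meter of conveyance (illustrative only)
MATERIAL_PER_M = {
    "grass_swale": {"topsoil_kg": 150.0},
    "bioswale":    {"topsoil_kg": 120.0, "engineered_media_kg": 200.0},
    "storm_sewer": {"concrete_kg": 350.0, "steel_kg": 12.0},
}

def construction_inventory(conveyance_type: str, length_m: float) -> dict:
    """Design algorithm: map a discrete decision variable (conveyance type)
    and a continuous one (length) to construction-phase material masses."""
    unit_masses = MATERIAL_PER_M[conveyance_type]
    return {material: rate * length_m for material, rate in unit_masses.items()}

print(construction_inventory("bioswale", length_m=500.0))
```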
Design algorithms are sets of equations for the construction (e.g., equipment sizing, material selection) or end-of-life (e.g., disposal, salvage) stages of unit processes within the system. These algorithms can vary in complexity. For example, a simple design algorithm can scale the dimensions or number of fermenters based on the required volume, but a more complex model may include the specification of fermenter height, wall thickness, and weight, which are calculated using factors such as aspect ratio, fractional weld efficiency, reactor pressure, and material properties. Compared to construction, the relative importance of the end-of-life stage can be minor and is often excluded from the analysis (e.g., for wastewater treatment facilities ).
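The following Python sketch contrasts the two levels of complexity described above: a simple algorithm that scales the number and dimensions of fermenters from the required volume, and a slightly more detailed wall-thickness estimate using a simplified thin-wall pressure-vessel relation. All numerical values (maximum unit volume, aspect ratio, allowable stress, weld efficiency, pressure) are hypothetical and for illustration only.

```python
import math

def size_fermenters(total_volume_m3, max_unit_volume_m3=500.0, aspect_ratio=2.0):
    """Simple design algorithm: number and dimensions of identical fermenters.

    aspect_ratio is the height-to-diameter ratio (an engineering heuristic).
    """
    n_units = math.ceil(total_volume_m3 / max_unit_volume_m3)
    unit_volume = total_volume_m3 / n_units
    # V = pi/4 * D^2 * H with H = aspect_ratio * D
    diameter = (4.0 * unit_volume / (math.pi * aspect_ratio)) ** (1.0 / 3.0)
    height = aspect_ratio * diameter
    return n_units, diameter, height

def wall_thickness_m(pressure_kPa, diameter_m, allowable_stress_kPa=115_000.0, weld_eff=0.85):
    """Thin-wall cylindrical shell thickness (simplified ASME-style relation)."""
    return pressure_kPa * diameter_m / (2.0 * allowable_stress_kPa * weld_eff - 1.2 * pressure_kPa)

n, d, h = size_fermenters(total_volume_m3=3000.0)
print(n, round(d, 2), round(h, 2), round(wall_thickness_m(200.0, d), 4))
```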
Process algorithms are used to calculate the mass and energy flows throughout the system during the operation and maintenance stage. These flows can be used to gauge the performance of the technology (e.g., calculating contaminant removal) and to determine the relevant inventory items required for sustainability characterization (e.g., raw chemicals consumed, generated products and wastes, electricity consumption, and fugitive emissions such as CH4, CO2, and N2O). Like design algorithms, the complexity of process algorithms can vary widely: from assumed performance based on theoretical values to calibrated and validated mechanistic models. Given the importance of algorithm selection in QSD, more nuanced guidance is provided in Section 3.2 (below).
Regardless of their type (i.e., design vs. process), the complexity of the algorithms should be tailored to the objectives of the QSD study. Additionally, algorithm complexity can also be constrained by data availability. For technologies with high TRLs/MRLs (7+), data are generally more abundant as these technologies have been applied across more diverse contexts.
In general, more complex algorithms can be more responsive to QSD inputs, but the complexity of an algorithm does not inherently correlate with usefulness or accuracy. As the algorithm becomes more mechanism-driven and complex, it may require additional or larger datasets for calibration, validation, and prediction, which may not be readily available. Its accuracy may also be affected by parameter uncertainty and missing mechanisms that introduce bias and skew results, and increased complexity will increase demand for computational resources. For example, one study used a complex model for risk evaluation of radioactive waste disposal composed of 286 sub-models and thousands of parameters, but its utility was undermined by the large uncertainty of a single key variable. In contrast, the Arrhenius equation, an empirical equation describing the dependence of a reaction rate constant on temperature, consists of only two reaction-specific parameters (the activation energy and the pre-exponential factor), yet it is arguably one of the most widely used models in chemical kinetics.
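For reference, a minimal Python sketch of the Arrhenius equation, k = A·exp(-Ea/(R·T)); the parameter values in the example call are hypothetical.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate_constant(A, Ea, T):
    """Arrhenius equation: k = A * exp(-Ea / (R * T)).

    A  : pre-exponential factor (same units as k)
    Ea : activation energy (J/mol)
    T  : absolute temperature (K)
    """
    return A * math.exp(-Ea / (R * T))

# Hypothetical parameter values for illustration only
print(arrhenius_rate_constant(A=1.0e7, Ea=50_000.0, T=298.15))
```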
For technologies with a TRL/MRL of only 1-2 (observation of basic principles and formulation of concept), theoretical values or "back-of-the-envelope" estimations can be used. For example, a 100% thermodynamic conversion efficiency could be used for mass flow simulation, and industry averaged capital or utility costs from a similar technology could be used as an early estimate for capital and operating costs. If "proof-of-concept" studies have been conducted
(TRL/MRL 3-4), existing designs or experimental data can be used for more realistic design and simulation of the system. When more designs and data become available with prototype implementation (TRL/MRL 5-7), engineering heuristics (e.g., the choice of separation technology, typical reactor aspect ratios 66 ) and empirical models can be used to establish more mathematical connections between QSD inputs and the system inventory. For technologies with higher TRL/MRL (7+), these connections may be mathematically represented based on first principles using mechanistic models in design and process algorithms. As examples, electricity consumption of a pump can be calculated using fluid flowrate, total dynamic head, and pump efficiency, and biomass growth can be modeled using metabolic reactions and fluxes. Notably, regardless of the level of complexity of the selected algorithms, uncertainty of the algorithm inputs and parameters should be considered; this uncertainty could be particularly important for early-stage technologies where the algorithms often rely on estimations, theoretical values, or very limited experimental data.
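As a concrete example of the first-principles pump calculation mentioned above, the Python sketch below estimates pump power and annual electricity consumption from fluid flowrate, total dynamic head, and pump efficiency; the input values are hypothetical.

```python
RHO_WATER = 998.0  # kg/m^3
G = 9.81           # m/s^2

def pump_power_kW(flow_m3_per_h, total_dynamic_head_m, pump_efficiency=0.7):
    """Hydraulic power requirement of a pump: P = rho * g * Q * H / eta."""
    Q = flow_m3_per_h / 3600.0  # convert to m^3/s
    return RHO_WATER * G * Q * total_dynamic_head_m / pump_efficiency / 1000.0

def pump_electricity_kWh(flow_m3_per_h, head_m, hours_per_year=8760.0, efficiency=0.7):
    """Annual electricity consumption for continuous operation."""
    return pump_power_kW(flow_m3_per_h, head_m, efficiency) * hours_per_year

# Hypothetical operating point for illustration only
print(round(pump_electricity_kWh(flow_m3_per_h=100.0, head_m=20.0)), "kWh/yr")
```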
Overall, the complexity of design and process algorithms will dictate the connections (or the lack thereof) among specific QSD inputs and the system inventory, the latter of which directly influences system sustainability and the types of insight that can be generated. Consequently, the objectives of a QSD study can also inform the selection of algorithms. For example, when the objective is to evaluate a technology's site-suitability (e.g., sanitation-based resource recovery systems; natural gas production; 93 bioenergy with carbon capture and storage, BECCS ), it will be essential to use a model which incorporates geo-spatial data and corresponding contextual parameters, but complex algorithms with many technological parameters may not be necessary. Alternatively, when the objective is to identify and elucidate the sustainability drivers for research prioritization (e.g., electrode specific capacitance and contact resistance in capacitive deionization technologies; 95 CO2 capture ratio in BECCS technologies ), mechanistic algorithms can provide more pertinent insight.

[Figure caption] Selection of simulation algorithms based on the objectives of the QSD study and data availability. The example system centers on the biological conversion of a substrate (S) to a product (P). For technologies with a higher TRL/MRL and more robust experimental data sets, algorithms that are more mechanism-driven can be developed to more explicitly represent underlying interactions within the system and elucidate hidden connections between QSD inputs and system sustainability. However, the availability of data should not dictate the choice of algorithms, as mechanism-driven models, which often have higher levels of complexity, can also require more computational resources and may introduce additional sources of uncertainty that are not relevant to the objectives of a QSD study. The illustrations of algorithm complexity, system interactions, and computational intensity (bottom of the figure) for each type of algorithm are solely for comparative purposes. In practical applications, they depend on multiple factors (e.g., the actual mechanisms) and could deviate from the typical ranges shown here.
After generating the system inventory, the next step in QSD links the mass and energy flows across the system's life cycle with quantitative sustainability indicators, the values of which can be used to track the progress toward or away from the stated project goals. This section discusses techniques to characterize system sustainability with respect to economic, environmental, human health, and social dimensions. These four categories are adapted from the tripartite conception of sustainability (economic, environmental, and social dimensions 97 ), explicitly considering human health as its own dimension to highlight its significance in the context of environmental technologies and its distinct characterization techniques. Generally, sustainability characterization follows the steps of indicator selection, indicator evaluation, and
(optionally) indicator prioritization. In the first step, sustainability indicators are selected based on the objectives of the QSD study and availability of the resources (e.g., data). Next, impact of the system inventory on each indicator is quantified. Finally, if desired, the indicators within each dimension can be ranked or aggregated (e.g., through weighted sum) to prioritize critical indicators and reduce the number of indicators from which to draw insight.
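A minimal Python sketch of the optional aggregation step, assuming indicator scores have already been normalized to a 0-1 scale and weights have been elicited from stakeholders or experts; the indicator names, scores, and weights are hypothetical.

```python
def weighted_sum(indicator_scores: dict, weights: dict) -> float:
    """Aggregate normalized indicator scores (0-1) into a single dimension score.

    Weights are assumed non-negative and are renormalized to sum to 1.
    """
    total_weight = sum(weights.values())
    return sum(indicator_scores[k] * weights[k] / total_weight for k in indicator_scores)

# Hypothetical normalized scores and stakeholder-derived weights
scores = {"levelized_cost": 0.8, "GWP": 0.6, "eutrophication": 0.4}
weights = {"levelized_cost": 0.5, "GWP": 0.3, "eutrophication": 0.2}
print(weighted_sum(scores, weights))
```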
Economic analysis is an essential component of QSD for systems where costs often govern decision-making. Traditional cost and profitability indicators, such as investment cost, operating and maintenance costs, payback period, return on investment (ROI), and net present value (NPV), are commonly used in the literature. To compare products that are of equivalent utility but that were generated from different technologies, 100 indicators such as levelized cost or minimum selling price (MSP) are often used. Both of these indicators quantify the average net present cost of a product over a system's entire lifetime, but they are typically used in different contexts: levelized cost is usually used in the context of energy (e.g., electricity) 101 and (more recently) water systems, whereas MSP is often used in the context of biorefineries. The selection and prioritization of economic indicators will depend on stakeholder preferences. For instance, technology developers may focus on levelized cost to compare against benchmark technologies (i.e., the conventional technologies against which they are competing). Two techniques, life cycle costing (LCC) and techno-economic analysis (TEA), are typically used for economic analysis (e.g., for LCC; 107-109 for TEA). Both techniques rely on the system inventory generated by system simulation. After linking those data to unit prices of cost inventory items, additional costs (e.g., labor, tax, insurance) are included with revenues from co-products (e.g., recovered nutrients) to obtain the total net cost of the system. This net cost can be further normalized to the unit cost of the system product/function (e.g., levelized cost or MSP).
During these calculations, costs and revenues in future years are discounted (using an interest rate) to account for the time value of money. In the case of LCC, costs and revenues over the lifetime of the system are converted to a common time (e.g., present value, 110 annual value 111 ) based on the preferences of decision-makers. 112-114 LCC generally centers itself on one or more actors (e.g., producer, consumer, waste management operator), depending on the objectives of the QSD study. LCC results can be reported for the project as a whole, or they can be normalized to the defined functional unit (a reference unit to quantify the performance of a system ). In contrast, TEA is typically used to determine the financial viability of the system through selling of the generated product(s), including the potential for acceptable risk and ROI. Specifically, a discounted cash flow rate of return analysis is often used to calculate a product's breakeven point, which is the point where the equivalent value of the sum of all cash flows at the base cost year equals zero (i.e., net present value = 0). At the breakeven point, the product cost or selling price is referred to as the levelized cost or the MSP, and the discount rate is referred to as the internal (or investor's) rate of return (IRR). Appropriate IRR targets are typically set according to industry-specific standards based on the type of technology, the stage of development, and the level of risk associated with investment. For example, a 10% IRR is commonly used for projects within the bioenergy industry with "nth-plant" assumptions (i.e., for a mature industry where n plants have been established), but higher IRRs (e.g., 10-20%) may be required by investors in higher risk ventures (e.g., projects with significant regulatory and operational details yet to be resolved)
and IRRs below 10% may be acceptable for lower risk ventures (e.g., infrastructure debt). In addition to LCC and TEA, cost-benefit analysis (CBA) is also commonly used to support decision-making. However, unlike LCC and TEA, which focus on the internal costs and benefits (i.e., revenues in the context of LCC and TEA) that are directly linked to the project, CBA includes external costs and benefits incurred by parties who are not directly involved in the project, 119 and it accounts for both market (e.g., labor and firm outputs) and non-market (e.g., changes in water quality) goods and services. 119,120 Consequently, CBA is more often used to assess whether a policy increases the total value of resources available to all members of society (e.g., the U.S.
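To make the discounted cash flow logic described above concrete, the Python sketch below solves for the selling price at which the net present value equals zero for a target IRR. It is a deliberately simplified sketch (constant annual flows; no taxes, depreciation, or working capital), and all input values are hypothetical.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows (year 0 first)."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def minimum_selling_price(capital_cost, annual_opex, annual_production, irr, lifetime_yr):
    """Selling price at which NPV = 0 for the target IRR (simplified: constant
    annual flows; no taxes, depreciation, or working capital)."""
    annuity_factor = sum(1.0 / (1.0 + irr) ** t for t in range(1, lifetime_yr + 1))
    return (capital_cost / annuity_factor + annual_opex) / annual_production

# Hypothetical inputs for illustration only
msp = minimum_selling_price(capital_cost=50e6, annual_opex=5e6,
                            annual_production=20e6, irr=0.10, lifetime_yr=20)
flows = [-50e6] + [msp * 20e6 - 5e6] * 20
print(round(msp, 3), round(npv(flows, 0.10), 2))  # NPV is ~0 at the MSP
```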
To align societal development with the maintenance of resilient and accommodating environmental systems, it is critical to limit the environmental footprints of human activities within the "safe operating space" of planetary boundaries. Among available techniques, LCA is the most comprehensive and widely used to quantify the environmental impacts of a product (including goods and services) system throughout its life cycle.
LCA comprises four phases: goal and scope definition, inventory analysis, impact assessment, and interpretation. Depending on how the inventory analysis is performed, LCA can be process-based, input-output-based, or a hybrid of the two. Similar to LCC, LCA leverages the system inventory and can be tailored to the life cycle stages of interest (e.g., cradle-to-grave, cradle-to-gate, cradle-to-cradle). The system inventory is compiled from the foreground system -the group of unit processes that can be controlled by designers or operators. However, LCA also requires the connection of the system inventory to the sources of environmental impact in the background system, which consists of processes over which designers and operators may not have direct influence (Table ). In the inventory analysis phase of LCA, the system inventory is converted to the life cycle inventory (LCI) by linking the system inventory to unit impacts incurred in the background system. For example, in operating a membrane reactor, the membrane modules may be replaced during maintenance, which contributes to the system inventory in the foreground. But in LCA, the emissions and raw materials required for the acquisition of these membrane modules (including raw material extraction, manufacturing, transportation, etc.), which are in the background, should also be accounted for to convert the system inventory into the system LCI. While the system inventory could be a specified number of membrane modules replaced at a specified frequency, the system LCI would include all mass and energy flows associated with these membrane modules across their life cycle. With regard to the life cycle impact assessment (LCIA) phase, many methods have been developed in recent years (e.g., ReCiPe, 139,140 TRACI 141-143 ). These LCIA methods provide characterization factors, which translate every individual emission or raw material requirement in the system LCI into normalized, quantitative environmental impacts. Depending on the LCIA method, these characterization factors can be developed at the midpoint or endpoint of the environmental impact cause-effect chain. Midpoints are located along the impact pathway for a particular impact indicator (e.g., kg CO2 equivalents for 100-year global warming potential). In the indicator prioritization step, midpoint characterization factors for different indicators can be aggregated into three main categories (i.e., ecosystem quality, resource scarcity, and human health). Additionally, varying groups of assumptions (e.g., time horizon) may be adopted to reflect the uncertainties and choices associated with these characterization factors. For example, based on the hierarchist perspective in ReCiPe, methane has a midpoint characterization factor of 34 kg CO2 eq•kg⁻¹ for global warming potential, which can be further translated to the human health endpoint using a factor of 9.28×10⁻⁷ disability-adjusted life years (DALYs)•kg CO2 eq⁻¹. 140 By normalizing LCIA impacts to the defined functional unit (e.g., 1 m³ of drinking water treated to potable standards, 1 m³ of wastewater treated to meet effluent permits ), these results can be used to compare environmental sustainability across systems. 117,118
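As a small numerical sketch of the characterization step, the Python code below applies the midpoint and endpoint factors quoted above (34 kg CO2 eq per kg CH4; 9.28×10⁻⁷ DALYs per kg CO2 eq) to a hypothetical life cycle inventory expressed per functional unit; the inventory values themselves are invented for illustration.

```python
# Characterization factors quoted in the text (ReCiPe, hierarchist perspective)
GWP100_CF = {"CO2": 1.0, "CH4": 34.0}   # kg CO2 eq per kg emitted
ENDPOINT_HH = 9.28e-7                   # DALYs per kg CO2 eq

def global_warming_midpoint(lci_kg: dict) -> float:
    """Midpoint impact: sum of emissions weighted by characterization factors."""
    return sum(mass * GWP100_CF[flow] for flow, mass in lci_kg.items())

lci = {"CO2": 1200.0, "CH4": 15.0}      # kg emitted per functional unit (hypothetical)
midpoint = global_warming_midpoint(lci) # kg CO2 eq per functional unit
endpoint = midpoint * ENDPOINT_HH       # DALYs per functional unit
print(midpoint, endpoint)
```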
While global human health-related impacts can be quantified with LCA (e.g., carcinogenics and non-carcinogenics in TRACI 141-143 ), LCIA methods rely on toxicity models (e.g., USEtox in TRACI ) with embedded assumptions about the environmental fate of contaminants, human exposure, and human health effects (including dose-response relationships). Consequently, the human health category within LCA includes modeled impacts spread over large spatial and temporal scales, and these "averaged" impacts do not explicitly consider localized effects from chemicals or pathogens that may be particularly relevant to project stakeholders. Therefore, when local health risks from exposure to chemical and microbial hazards are of concern, impact indicators (e.g., benchmark quotient of risk level 146 ) should be selected and evaluated with corresponding techniques (e.g., QRA). For example, if the objective of the QSD study is to guide early-stage technology research and development rather than local deployment, then it may be appropriate to focus on global human health impacts through existing impact categories embedded in LCA (e.g., as in ). In contrast, when QSD is used to inform site-specific decisions (e.g., planning sanitation system deployment in a specific community), QRA can be employed and local data should be collected for the results to be representative (e.g., measuring pathogen concentrations, quantifying local exposure ).
In this human health dimension, quantified indicators often include probability of infection, probability of illness, or DALYs, all of which can be calculated from one another with certain assumptions. Additionally, there have been methods developed to calculate monetized indicators based on the contribution of pollutants to mortality and the value of a statistical life (VSL; e.g., human health damage from freight transportation 150 ). However, this approach has not been widely adopted in the development of environmental technologies due to its shortcomings and controversies (e.g., how VSL is estimated). To quantify human health indicators, QRA can be performed following the steps of hazard identification, exposure assessment, dose-response assessment, risk characterization, and risk management. More specifically, to assess local human health risk due to pathogens, QSD inputs and process algorithms can be used to simulate pathogen concentrations, after which quantitative microbial risk assessment (QMRA) 155 can be performed to quantify the human health risk. QMRA can be performed with generic (e.g., using generic pathogen concentrations in feces for evaluating water and sanitation systems 156 ) or site-specific data (e.g., and Example 2 in Table ). Additionally, there are examples where QMRA is hybridized with LCA to characterize trade-offs between human health risk and environmental impacts. Similarly, when health risk related to chemicals (e.g., contaminants of emerging concern) is the focus, quantitative chemical risk assessment (QCRA) 165 can be applied. To facilitate the risk assessment, quantitative structure models (e.g., quantitative structure-activity relationship, QSAR; 166 quantitative structure-property relationship, QSPR 167 ) can be used to predictively model the properties (e.g., toxicity, albeit with significant uncertainty 168 ) of interest, and certain semi-quantitative techniques (e.g., CHEM21 guide 169 as applied in ) can be used as a screening strategy to exclude technologies that have significant safety concerns.
Social sustainability is generally intended to capture how stakeholders (e.g., technology users, regulators, governments) perceive and interact with technologies, which are often critical considerations to enable sustained adoption and long-term success of a project. Though proxies (e.g., transparency, income level 180 ) may be used to quantitatively represent qualitative social indicators, these approaches cannot fully capture the unique aspects of social sustainability (e.g., highly site-specific, unequal impacts on different societal groups).
To address these challenges, stakeholders can be engaged throughout QSD. When, how, and which stakeholders are engaged depends on the objectives of the QSD study and resource availability, and the engagement methods can range from low to high stakeholder input and influence (corresponding to the least to the most resource intensive). In the first step, indicators can be selected by domain experts (the lowest stakeholder input and influence option), selected or supplemented by stakeholders from pre-generated options or "master lists", 183-185 or generated by stakeholders through focus groups, photovoice, and interviews (the highest stakeholder input and influence). Likewise, during the indicator evaluation step, stakeholders can be engaged through data collection methods of varying stakeholder input and influence. For instance, some structured approaches rely on quantitative data from existing censuses or close-ended surveys (e.g., user cost, number of annual meetings, number of jobs ). These approaches are common in the technology development literature because they are often less resource intensive and enable data collection across multiple contexts with limited incremental cost (i.e., the cost of gathering data from another location or context is low). In contrast, approaches such as semi-structured interviews and case studies 199 are more open to stakeholder elaboration, and they are therefore more resource intensive. When time and resources permit, the sample size of the interviews can be set to achieve thematic saturation, the point at which the collection of new data sheds no further light on the issue under investigation. As a result, these approaches can collect more case-specific and qualitative data, which can inform quantitative data collection 201 and processing. For instance, these more engaging approaches can be used to adapt indicators to local contexts 203 or to support quantitative indicator scoring using methods such as the fuzzy set theory method. Finally, when needed, stakeholders can also be engaged in the indicator prioritization step. This can be performed in conjunction with MCDA to generate a single score or recommendation. When aggregating indicators, their weights can be determined by experts (the lowest stakeholder input and influence option), derived from archetypal schemes representing different societal groups, 206 or developed for the specific study by stakeholders that directly interact with the technology (the highest stakeholder input and influence option). Overall, as each stakeholder engagement method has limitations, the selection of engagement methods and stakeholders should be a careful process that balances resource requirements and social dynamics; for example, group-based methods such as focus groups, while requiring fewer resources than interviewing each of the individuals, may not reflect individual goals 209 and can suppress marginalized voices, 210 resulting in a lack of comprehensive goals, 186,211 influences, and decision criteria. Ultimately, substantively addressing social sustainability necessitates the engagement of experts from relevant domains (e.g., the social sciences and humanities) and, when explored in a specific deployment context, local stakeholders who will be directly or indirectly impacted by the project.
After defining the problem space, establishing simulation algorithms, and selecting the methodology to characterize the system sustainability, QSD can be executed with system simulation as the first step. Multiple types of tools can be leveraged in system simulation to generate the system inventory. On the simpler end of the spectrum, designs from existing literature can be used and system inventories can be scaled from the literature design (e.g., scaled based on key flowrates). In this case, spreadsheets or programming languages (e.g., R, Python) can be used for inventory scaling, and this process can be automated through spreadsheet built-in functions (e.g., Microsoft Excel Macros 216,217 ) or add-in applications (e.g., Crystal Ball®, 218 SimVoi® 219 ). Though straightforward, this strategy relies on existing designs with limited capacity to accurately reflect the intrinsic correlations between QSD inputs and system sustainability. Additionally, as mass and energy flows within the system would be scaled using generic algorithms (often linear correlations) rather than being solved analytically or numerically to convergence, the results could be inaccurate as values of the QSD inputs diverge from the existing design. Therefore, this strategy is generally only appropriate for very preliminary evaluations or for the evaluation of relatively mature technologies for which well-tested scaling algorithms are available.
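A hedged sketch of the simple scaling strategy described above: inventory items from a hypothetical literature reference design are scaled by a flowrate ratio, optionally with a sub-linear exponent where economies of scale apply. As noted above, accuracy degrades as the target diverges from the reference design.

```python
# Hypothetical reference design (values for illustration only, not from the source)
REFERENCE = {"flow_m3_d": 1000.0, "electricity_kWh_d": 450.0, "coagulant_kg_d": 25.0}

def scale_inventory(target_flow_m3_d: float, exponent: float = 1.0) -> dict:
    """Scale a literature-based inventory by flowrate.

    exponent = 1.0 gives simple linear scaling; a sub-linear exponent
    (e.g., 0.6, commonly used for equipment cost) reflects economies of scale.
    """
    ratio = target_flow_m3_d / REFERENCE["flow_m3_d"]
    return {k: v * ratio ** exponent for k, v in REFERENCE.items() if k != "flow_m3_d"}

print(scale_inventory(target_flow_m3_d=2500.0))
```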
On the opposite end of the spectrum, high-fidelity, commercial process simulators (e.g., GPS-X™, 220 Aspen Plus®, 221 SuperPro Designer, 222 AnyLogic 223 ) with rigorous design and process algorithms have been widely used for system design and simulation, especially for large-scale, multi-unit process systems (e.g., wastewater treatment trains, 224,225 bio- or chemical refineries ). Supported by commercial companies, these simulators are well-tested and capable of solving complex algorithms, allowing deeper investigation into the dynamics among the QSD inputs, system inventory, and sustainability. However, because of their proprietary nature, these simulators are often less accessible and transparent. They may also have limited flexibility to evaluate early-stage technologies due to the lack of corresponding unit operations and associated design and process algorithms. Moreover, many of these simulators have no or limited capacity for advanced statistical analyses (e.g., global sensitivity analysis beyond correlation and regression methods). Thus, they often cannot independently fulfil the objectives of QSD studies.
Notably, in recent years, there has been a push to develop open-source tools for the design and simulation of various systems in fields such as water and wastewater treatment. Their open-source nature allows these tools to be freely used and continuously developed by the general community, and some of these tools have integrated (or are built with extendable capacities for) advanced statistical analyses, sustainability characterization, and multi-objective optimization (e.g., ). These features are especially beneficial for early-stage technologies, which have not been included in commercial simulators and which carry higher levels of uncertainty due to the lack of data. However, despite their advantages, these open-source tools can be difficult to adopt (e.g., due to the lack of graphical user interfaces) and challenging to maintain without a central supporting organization. Therefore, laying the groundwork for community-led platforms is vital to the long-term success of these tools, which could be realized via collaborative platforms (e.g., GitHub, 239 GitLab 240 ) and the preparation of easy-to-access documentation.
Characterization of sustainability can be performed either within or outside the system simulation platform. For economic sustainability, capital and operating expenditures can often be extracted from the commercial simulators, and some of these simulators have functionalities for profitability and cash flow analyses. However, these built-in functionalities often have little or no support for users to adjust parameter values (e.g., equipment and chemical costs). Therefore, additional analyses are often conducted outside these simulators to explore an expanded QSD problem space (e.g., using spreadsheets ).
With regard to environmental sustainability, dedicated LCI databases (e.g., ecoinvent 242 ) are often used to translate the system inventory into the system LCI, after which different LCIA methods can be used to quantify the environmental sustainability of the system. To streamline this process, environmental sustainability is often characterized using specific tools (e.g., SimaPro, 243 GaBi, 244 openLCA, 245 Brightway2, 246 GREET 247 ). These tools generally have embedded LCI databases and/or allow the importing of external LCI databases, and some of them are equipped with built-in functions or mechanisms (e.g., via inter-process communication 245 ) that allow the user to account for uncertainty.
To characterize human health indicators, relevant system inventory data (e.g., pathogen concentrations), exposure assessment algorithms, and dose-response models can be compiled using generic computation tools (e.g., spreadsheets, programming languages). Particularly for QMRA, recommendations by the Center for Advancing Microbial Risk Assessment 248 can be followed to select pathogen-specific dose-response parameters based on the type of model (e.g., exponential, approximate beta-Poisson).
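For illustration, the Python sketch below implements the two dose-response model forms named above (exponential and approximate beta-Poisson) to compute a probability of infection; the dose and parameter values are hypothetical rather than pathogen-specific recommendations.

```python
import math

def p_infection_exponential(dose, r):
    """Exponential dose-response model: P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_infection_beta_poisson(dose, alpha, N50):
    """Approximate beta-Poisson model:
    P = 1 - (1 + dose * (2**(1/alpha) - 1) / N50) ** (-alpha)."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / N50) ** (-alpha)

# Hypothetical dose and parameter values for illustration only
print(p_infection_exponential(dose=10.0, r=0.005))
print(p_infection_beta_poisson(dose=10.0, alpha=0.3, N50=500.0))
```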
For the characterization of social sustainability, no universal tools have yet been developed. Though some LCA tools (e.g., SimaPro, openLCA, SEEbalance ) include databases such as the Social Hotspots Database 251 for SLCA, the primary goal of this approach is to evaluate social risks along a supply chain without including stakeholder input. These tools can facilitate indicator evaluation, but the engagement methods described in this work should be followed to more holistically integrate social sustainability into QSD by incorporating stakeholder input across the indicator selection, evaluation, and prioritization steps (Section 4.4).
Because a range of tools may be used for system simulation and sustainability characterization, data organization and transfer between these tools can be challenging due to the heterogeneity in data requirements (e.g., file formats), despite the fact that the same system inventory is used. For example, for a wastewater-based microalgal cultivation system that recirculates water internally, Aspen Plus® was used for system simulation, after which the system inventory was exported for TEA in a spreadsheet and LCA using SimaPro. The analysis of other systems has followed a similar workflow, transferring data among multiple tools used for different steps of QSD. Although programming languages can be used to facilitate data formatting and transfer, this nonetheless presents challenges in QSD execution when system simulation and sustainability characterization need to be repeated thousands of times or more to consider uncertainty (Section 5.2). Alternatively, there are ongoing efforts in tool development to integrate system simulation and sustainability characterization on a single, open-source platform (e.g., for sanitation and resource recovery systems, 230 biorefineries 215 ). Though still at an early stage, these tools offer the opportunity for streamlined QSD execution with much greater flexibility, better consistency, and higher computational efficiency, all of which are critical to QSD and its application.
At its core, QSD relies on an aggregated, computational system model to represent the behaviors of physical technologies. This system model is compiled by connecting the algorithms used in system simulation and sustainability characterization to predict the quantities of interest across a range of QSD inputs. Prediction uncertainty of this system model can arise from multiple sources:
(i) model structure, (ii) system non-deterministic behaviors, (iii) numerical error, and (iv) model inputs and parameters. Among these sources, the uncertainty of model structure can be considered by empirically comparing or aggregating the predictions of multiple viable models; the uncertainty due to a system's non-deterministic behaviors can be characterized by incorporating stochastic elements into a deterministic model; and the numerical error in calculating model results is usually much smaller than other sources of uncertainty. Most essential to QSD is the uncertainty from model inputs and parameters (i.e., QSD inputs), which can be aleatory, epistemic, or both. Aleatory uncertainty (also called variability or irreducible uncertainty) arises from randomness or variations due to "hidden" factors that are not included in the model. Epistemic uncertainty (i.e., true uncertainty, reducible uncertainty), on the other hand, derives from a lack of knowledge of the "true" values. Uncertainty associated with decision variables is inherently aleatory as their values are subject to the choice of the designer or the operator, and probability distributions can be used to reflect the designer's or operator's preference within the ranges of feasible decisions. In contrast, uncertainties associated with technological parameters and contextual parameters can be caused by randomness (aleatory), lack of knowledge (epistemic), or both. For example, a location's average temperature on a certain date in the future (a contextual parameter) is random to some extent (i.e., aleatory), and at the same time, practitioners may have imperfect knowledge of its "true" range of variability based on historic data (i.e., epistemic). For LCA, uncertainty is also distinguished based on whether its sources are in the foreground (e.g., the quantity of stainless steel that comprises an impeller) or background (e.g., the LCI of the stainless steel) systems.
Both the epistemic uncertainty in technological parameters and the desirable ranges of decision variables correlate with the maturity of a technology. As a result, technologies in their early stage tend to have greater levels of uncertainty. Therefore, an essential task in QSD is to properly synthesize uncertainties from different sources and propagate them through the system model, thereby assessing the overall uncertainty in the prediction of system sustainability.
To quantitatively characterize the overall uncertainty introduced in QSD results, Monte Carlo methods are usually applied by using stochasticity (i.e., randomness) to solve problems that are deterministic in nature. In the context of QSD, the first step is to select a subset of QSD inputs to be included as input variables in the uncertainty analysis, after which their probability distributions are defined (e.g., through probability density functions) to quantitatively represent their uncertainties. While including more QSD inputs in the Monte Carlo simulation will likely provide more accurate characterization of the overall uncertainty of QSD outputs (i.e., sustainability indicators), it will also increase the required sample size and thus computational time. To address this, sensitivity analysis can be used to identify input variables that are key drivers of system uncertainty (Section 5.3). Notably, as the selection of uncertain input variables can affect results of the sensitivity analysis, uncertainty and sensitivity analyses may be performed iteratively to narrow the input variable pool. Similarly, selection of the probability distributions should be based on the available information on the QSD inputs (e.g., the abundance of available data).
Next, a set of samples (i.e., the sample matrix) is generated from the joint probability distribution of all selected input variables to represent the entire problem space. When all input variables are independent of each other, generating samples from their joint probability distribution is equivalent to sampling separately from the probability distribution of each individual model input. Otherwise, correlations between input variables need to be characterized to properly define and generate samples from their joint probability distribution. Different techniques can be used for sample generation, and the selection of technique is often determined by the types of uncertainty and sensitivity analyses of interest (Section 5.3). In practice, Monte Carlo sampling (i.e., random sampling) generates samples by repeatedly drawing random values from a defined distribution using random number generators. Alternatively, low-discrepancy, quasi-random sequences (e.g., Latin hypercube sampling) can be used to cover the problem space more evenly with fewer samples. Ideally, aleatory uncertainty and epistemic uncertainty should be propagated through the model separately. However, such separation is costly and not always possible. Nonetheless, this distinction should be reflected in the interpretation of the uncertainty and sensitivity analysis results. For example, when the prediction uncertainty of system performance is largely attributed to a lack of knowledge of the true value of a technological parameter, research should focus on reducing the epistemic uncertainty of this parameter (e.g., through more accurate experimental measurement to narrow the possible range of its true value). On the other hand, if decision variables are found to be the driver of the variation in the predicted system performance, development of the technology should focus on optimizing the design and operation decisions to limit the aleatory uncertainty.
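A minimal sketch of sample generation and propagation, assuming two independent uncertain inputs and using SciPy's quasi-Monte Carlo module for Latin hypercube sampling; the toy cost model, its coefficients, and the input ranges are hypothetical.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical model: levelized cost as a function of two uncertain inputs
def levelized_cost(electricity_price, removal_efficiency):
    return 0.2 + 1.5 * electricity_price / removal_efficiency  # $/m3 (illustrative)

# Latin hypercube samples over the (assumed independent) joint input space
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=1000)
lower, upper = [0.04, 0.60], [0.26, 0.95]      # $/kWh, fraction (hypothetical ranges)
X = qmc.scale(unit_samples, lower, upper)

Y = levelized_cost(X[:, 0], X[:, 1])
print(np.percentile(Y, [5, 50, 95]))           # summarize output uncertainty
```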
With uncertainty analysis, one can characterize sustainability indicators, identify the drivers of system sustainability, and set targets for future research and development. To identify the drivers of system sustainability, values of key sustainability indicators (e.g., MSP) can be attributed across different sources (e.g., equipment, material, labor) based on their contributions (Figure ). Given that sources with the largest contributions exert the most influence on indicator values, they may be prioritized in RD&D to advance system sustainability. For example, because feedstock costs can account for roughly half (or more) of the MSP of algal biofuels, strategies to reduce the production cost of algae (e.g., development of high-productivity strains) should be prioritized. Finally, uncertainty analysis can also be conducted to set targets for parameter values, which are particularly relevant to early-stage technologies (Figure ). This can be achieved by working backward from a stated sustainability target (e.g., a maximum acceptable MSP) to identify the parameter values required to meet that target.
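A small Python sketch of the contribution (breakdown) analysis described above, using hypothetical cost items to rank the sources that drive a product's MSP.

```python
# Hypothetical cost breakdown of a product's MSP (values for illustration only)
cost_items = {"feedstock": 2.10, "utilities": 0.55, "labor": 0.40,
              "capital_recovery": 0.80, "other": 0.15}   # $ per unit product

msp = sum(cost_items.values())
contributions = {item: value / msp for item, value in cost_items.items()}

# Rank sources by contribution to identify RD&D priorities
for item, share in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{item:>16s}: {share:5.1%}")
```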
To understand what is driving the observed uncertainty in sustainability indicators for the system, sensitivity analysis is often used to apportion the uncertainty in the model outputs to different sources of uncertainties in the model inputs (i.e., the "effects" of the inputs ). The results of sensitivity analyses are represented as input sensitivity indices for each output of interest. In other words, for each model output, all uncertain model inputs will have one or more sensitivity index values to represent their relative importance to that output's uncertainty (Figure ). Although sensitivity analysis can be either global or local, global methods are recommended in QSD due to their more robust and informative nature, as they evaluate the effects of input variables across the entire problem space while local methods focus only on the vicinity of a base point. In QSD, global sensitivity analysis can be applied to facilitate research prioritization (e.g., to achieve sustainability targets), model improvement (e.g., to simplify models and reduce computational intensity), and data collection (e.g., to narrow probability distributions). Correlation- and regression-based methods (e.g., Spearman's rank correlation coefficient) are the least computationally intensive and are sufficient for monotonic models, where "monotonic" indicates that the model output increases or decreases monotonically with individual model inputs over their ranges of uncertainty (e.g., water treatment plant annual material cost always increases with the increasing unit price of an input chemical). These methods are often performed in tandem with uncertainty analysis (i.e., using the results generated from the uncertainty analysis) as they do not require specific sampling techniques, and they have been widely used in sustainability analyses to highlight the significance of technological advancement or data collection (e.g., ).
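As a minimal example of this class of methods, the sketch below computes Spearman's rank correlation coefficients between Monte Carlo inputs and a monotonic toy model output using SciPy; the model and input ranges are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo inputs and a simple monotonic model output
electricity_price = rng.uniform(0.04, 0.26, size=1000)   # $/kWh
removal_efficiency = rng.uniform(0.60, 0.95, size=1000)  # fraction
cost = 0.2 + 1.5 * electricity_price / removal_efficiency

# Spearman's rank correlation as a simple global sensitivity index
for name, x in [("electricity_price", electricity_price),
                ("removal_efficiency", removal_efficiency)]:
    rho, p = spearmanr(x, cost)
    print(f"{name}: rho = {rho:+.2f} (p = {p:.1e})")
```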
Screening methods (e.g., Morris One-at-A-Time 293 ) are used to identify non-influential inputs that can be fixed at given values within their uncertainty ranges without significantly reducing the output variance. These methods are typically used in a qualitative manner (i.e., values of the sensitivity indices are only for comparative purposes), and they typically require smaller sample sizes; their results can be used to reduce model complexity by fixing non-influential inputs. In one study, for example, the Morris method was used to determine the most important individual and groups of input variables in estimating the yield of urban water supply systems. Unlike screening methods, variance-based methods (e.g., the Sobol method) decompose the variance in model output into the sum of effects associated with the model inputs. Due to their quantitative nature, variance-based methods are more computationally intensive than screening methods, especially when the quantification of interaction effects is desired. Thus, variance-based methods are often conducted for models with relatively small numbers of inputs or after fixing non-influential inputs. As an example, a variance-based sensitivity analysis was performed in one study to determine that 97% of the variance of a location's suitability for a hazardous waste landfill facility was jointly induced by only three variables, thus allowing the original model to be greatly simplified without compromising its accuracy. 285
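The sketch below shows a variance-based (Sobol) analysis using the open-source SALib package (an assumed tool choice, not one named in the text; newer SALib releases also provide SALib.sample.sobol in place of the Saltelli sampler). The two-input toy model and its bounds are hypothetical.

```python
from SALib.sample import saltelli
from SALib.analyze import sobol

# Problem definition for two hypothetical uncertain inputs
problem = {
    "num_vars": 2,
    "names": ["electricity_price", "removal_efficiency"],
    "bounds": [[0.04, 0.26], [0.60, 0.95]],
}

X = saltelli.sample(problem, 1024)            # Sobol/Saltelli sample matrix
Y = 0.2 + 1.5 * X[:, 0] / X[:, 1]             # simple illustrative model

Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
print(dict(zip(problem["names"], Si["ST"])))  # total-order indices
```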
For complex systems with a large number of inputs and expansive problem space, scenario analysis can be used to draw conclusions for certain sets of inputs with specific values (i.e., scenarios, Figure ). Each scenario should be based on a coherent and internally consistent set of assumptions about key driving forces (e.g., demographics and socio-economic development) and the relationships among these key driving forces. Scenario analysis is often placed in a future setting, and scenarios based on different sets of assumptions are often compared to inform decision-making. For example, to mitigate GHG emissions and curb global warming within 1.5°C above the pre-industrial level, different scenarios, each consisting of varying technologies, have been evaluated for decarbonization. Similar applications of scenario analysis can be found in other topics (e.g., water and wastewater treatment, renewable energy, waste management 232 ), or for a certain technique (e.g., using different scenarios of electricity mix in LCA to assess the environmental sustainability of electric vehicles ). In essence, scenario analysis divides the entire problem space into discrete, representative sub-spaces. It reduces the dimensionality of the system (fewer or less uncertain inputs) and simulation needs, while providing a detailed understanding of distinct alternatives that may be of particular interest for policymaking or implementation in specific contexts.
When the primary objective of QSD is to inform technology deployment, spatial analysis is often used to account for site-specific parameters that can influence the sustainability of systems (Figure ). At its core, spatial analysis can be considered as a special category of scenario analysis where each deployment site is a scenario, and the implications of deployment site are reflected through the values of contextual parameters that are site-specific. To facilitate spatial analysis, geospatial data that link locality-specific contextual parameters with location information are collected, and they can be further combined with temporal information to reveal the evolution of attributes over time. These geospatial data may include physical information (e.g., geological properties, 285,303,304 distances, 305,306 existing infrastructure ), policies, 304 cultural preferences, 291 and any other contextual parameters that could serve as QSD inputs (e.g., energy and water unit impacts, 307,308 costs and impact characterization factors 309 ). As contextual parameters can be particularly important for the human health and social dimensions of sustainability that are highly site-specific, measures should be taken to ensure the representativeness of these values (e.g., use high stakeholder input and influence methods discussed in Section 4.4). To address spatial considerations, GIS is often used to capture, organize, calculate, and integrate multiple layers of spatial information into QSD (e.g., assessing spatial co-location of recoverable nutrients and agricultural demands for deployment of nutrient recovery sanitation systems ). Further, as uncertainties in these data (e.g., from different resolutions 312 ) can directly affect the conclusions of sustainability analyses, uncertainty analysis is often incorporated in spatial analysis and tailored to the data (e.g., based on the data structure 313 ).
As society endeavors to pursue more sustainable water, sanitation, and resource recovery systems, a range of new technologies are required to replace existing infrastructure that was built to accomplish narrowly defined functions without considering externalities. To navigate the vast opportunity space ahead of us, sustainability analyses must be integrated throughout the RD&D of technologies to ensure a trajectory toward sustainability. In support of this integration, we reviewed existing literature on the sustainability analyses of water/wastewater and broader environmental technologies and synthesized our findings into a structured methodology -QSD. We introduced QSD as a methodology that addresses four critical challenges in sustainability analyses that had been largely overlooked in the existing literature. Finally, the results of QSD can be used to inform decision-making, including in structured methodologies such as MCDA (e.g., quantified sustainability indicators can be used to evaluate, compare, and recommend technologies based on stakeholder priorities as in ). Tools and methods developed in the discipline of decision analysis can in turn be used in QSD for indicator selection (e.g., the MCDA Index Tool, the Delphi method 319 ) and prioritization (e.g., the analytical hierarchy process [AHP], 320 the preference ranking organization method for enrichment evaluations [PROMETHEE] approach 321 ). Moreover, with objective functions established in MCDA, tools developed for sustainability analyses can potentially be expanded to automate the optimization of system design in early-stage research and development, or to support stakeholder engagement in transparent planning and design processes. Overall, QSD offers the potential to reveal the sustainability implications of technology innovations under a given context, thereby enabling stakeholders at all levels to understand and make informed decisions about the multidimensional effects of technologies, ultimately contributing to society's transition toward sustainability.
When Louis Pasteur introduced the concept of molecular chirality in his groundbreaking work in 1848, he also questioned what forces might have driven the synthesis of the first asymmetric natural products. A variety of strategies have been introduced over the last eighty years for the controlled synthesis and eventual separation of pure enantiomers, providing promising clues to the arising of life at the prebiotic stage. Still, no consensus has been reached on the most likely scenario by which mirror symmetry was initially broken. Inorganic solids such as minerals, metals, and semiconductors have attracted wide research interest because they can acquire chirality from surface modification or external fields and thus endow bioorganic molecules with homochirality. Remarkably high activity and selectivity in enantiospecific separation and asymmetric reactions have been reported on various solid catalysts such as Au, Pd, and Fe3O4. This can be explained by either electronic structure chirality or crystal structure chirality. The former assumes that spin-polarized electrons can be viewed as longitudinally chiral particles that interact with adsorbed molecules according to the chiral-induced spin selectivity (CISS) mechanism. The latter refers to the chirality of crystal surfaces whose Miller indices (h k l) obey h ≠ k ≠ l ≠ h and h × k × l ≠ 0, for example, the (643) surface of cubic Pt. However, little is known about the chemical effects of pure inorganic compounds with chiral bulk crystal structures.
634cac1d86473a8255156d2a
1
Intermetallic compounds with the space group P213 such as PdGa, RhSi, and CoSi are promising candidates for probing chirality due to their exotic chemical and physical properties. Taking PdGa as an example, at least three categories of chirality can be found in this family of compounds as a result of their noncentrosymmetric bulk structure. a) Bulk chirality. Pd and Ga atoms are helically stacked along the [111] polar direction, implying the existence of two enantiomers with reversed handedness of the helices. b) Surface chirality. For the threefold-symmetric (111) surface of the B20 compounds, although the topmost layer of Pd atoms is achiral, the Pd and Ga trimers in the 2nd and 3rd outermost surface layers can induce surface chirality and thereby influence the surface properties of the outermost Pd layer. 17 c) Electronic structure chirality. Chiral fermions and chiral Fermi-arc surface states with reversed velocities have been experimentally confirmed, which is closely related to the chirality of the bulk structure. Thus, it is not surprising that B20 compounds such as PdGa have long been explored as heterogeneous catalysts. The chiral Fermi-arc surface states serve as catalytically active sites and provide stable electron baths for redox reactions. More importantly, enantioselective adsorption and elimination reactions of prochiral molecules have been achieved on the (111) surface owing to the different surface terminations and the surface chirality originating from the second outermost surface layer. With these experimental facts in mind, it is interesting to ask whether the bulk crystal chirality can initiate enantioselective reactions.
634cac1d86473a8255156d2a
2
In this work, we propose that the intrinsic orbital angular momentum (OAM) polarization induced by bulk chirality can drive enantioselective reactions. We obtained two PdGa bulk crystals with opposite chiralities and studied their response toward the oxidation of DOPA enantiomers. We find that the electro-oxidation of L-DOPA and D-DOPA depends strongly on the bulk chirality of the PdGa crystals. The difference in oxidation behavior can be explained by classical thermodynamic adsorption theory, with the adsorption energies of L- and D-DOPA differing at the same crystal surface of PdGa. To understand the origin of these adsorption differences, we performed a theoretical investigation of the PdGa crystals and found strong OAM polarizations whose signs are chirality-dependent. Considering the mirror-symmetric orbital polarization of the O 2p orbital in DOPA, we believe that the enantioselective adsorption of DOPA is driven by the polarization mismatch between the DOPA adsorbates and the PdGa substrate. Our work suggests that enantioselective reactions can occur on pure crystals without any surface modification, providing a potential strategy for asymmetric synthesis on pure inorganic crystals or minerals.
634cac1d86473a8255156d2a
3
Single crystallinity of the as-grown PdGa crystals was first analyzed using white-beam backscattering Laue X-ray diffraction at room temperature. Both PdGa crystal batches show sharp and well-defined Laue spots that can be indexed with a single pattern (Fig. and ), illustrating the excellent quality of the grown crystals. Rigorous single-crystal X-ray diffraction was then performed to analyze the structural chirality, Flack parameter, etc., as explained in previous references . The identified enantiomers, form-A and form-B of the PdGa crystals, are as defined in Ref. . Two crystals obtained from the same batch were selected and served as working electrodes for DOPA enantiomeric recognition. Scanning electron microscope (SEM) images recorded with secondary and backscattered electrons indicate the high homogeneity of the two crystals (inset Fig. , Fig. ).
634cac1d86473a8255156d2a
4
The molar ratio of Pd to Ga is determined to be 1 according to elemental analysis (Fig. ). The complete refined crystallographic information, along with the final results, is compiled in standard .cif format and can be retrieved from (), using the deposition numbers CSD-1999938 to CSD-1999942. The detailed crystal structures are displayed in Fig. . Both PdGa-A and PdGa-B crystals crystallize in the FeSi-type structure, which lacks mirror and inversion symmetries. As in other B20 chiral compounds, the intrinsic bulk chirality of PdGa is determined by the arrangement of the Pd and Ga chiral motifs along the [111] direction. For the A-type chirality PdGa crystal, Pd atoms are stacked anti-clockwise along the polar [111] direction, while Ga atoms are arranged clockwise. For the B-type chirality, the Pd and Ga atoms are stacked in the inverse sequence. We must point out that this definition of chirality for PdGa differs from the previously reported method, where the surface chirality is derived from the different (111) and (1̅1̅1̅) terminations cut from the same crystal (note: that chirality comes from the surface termination rather than from the bulk symmetry).
634cac1d86473a8255156d2a
5
As a proof-of-principle demonstration, DOPA molecules of different chirality were selected for the electrochemical recognition measurements. The oxidation behavior of DOPA molecules has been well documented at the surfaces of various solid electrodes, which makes the interpretation of our data easier and more robust. First, cyclic voltammetry (CV) curves were recorded to confirm the occurrence of DOPA oxidation with PdGa-A as the working electrode. In the absence of L-DOPA (Fig. ), the broad oxidation peak between 0.6 and 1 V vs. Ag/AgCl can be attributed to the formation of Pd-O or Pd-OH bonds. The strong reduction peak between 0 and 0.3 V can be indexed to the strong H adsorption process at the surface of Pd-based compounds. When 4 mM of L-DOPA was introduced into the solution, a strong oxidation peak appeared at around 0.78 V, indicating the oxidation of L-DOPA at the crystal surface. The other investigated samples, including PdGa-B and Pt films, exhibit the same behavior towards DOPA oxidation (Fig. ). As a comparison, 2H-MoS2 single crystals cannot drive the oxidation reaction because the (001) basal planes are catalytically inert to electrochemical reactions; as a result, no oxidation peaks are found under the same measurement conditions in the presence of DOPA (Fig. ).
634cac1d86473a8255156d2a
6
We first recorded the DPV curves with the PdGa-A crystal in the electrolyte with and without L-DOPA (Fig. ). One can clearly see the strong and well-defined DOPA oxidation peak at around 0.72 V. On the other hand, there is no significant change in the peak corresponding to the formation of Pd-O and Pd-OH bonding at the lower potential of about 0.51 V. These features are fully consistent with the CV scanning results. The same results are found when the PdGa-B crystal is used (Fig. ), which unambiguously confirms the oxidation of DOPA at the surface of the PdGa crystals. Next, we measured the chirality-dependent oxidation of DOPA with the PdGa-A and PdGa-B crystals, respectively. As summarized in Fig. , the oxidation currents for L-DOPA and D-DOPA are inverted at the surfaces of the PdGa-A and PdGa-B crystals. In other words, the oxidation of L-DOPA at the surface of the PdGa-A crystal is more favorable than that of D-DOPA, while the PdGa-B crystal prefers the oxidation of D-DOPA. These results suggest that the oxidation of the DOPA enantiomers is strongly dependent on the chirality of the PdGa crystals, because all other experimental details were kept the same. As an additional control, the DOPA oxidation behavior at the surface of an achiral Pt foil was recorded (Fig. ). The identical oxidation currents show no sign of enantiomer selectivity, which is consistent with previous studies. (Note: the potentials for DOPA oxidation depend strongly on the electrode properties and measurement conditions. The theoretical standard electrode potential of DOPA oxidation is 0.75 V vs. RHE. The experimental values under acidic conditions are 0.76 V for Pt, 0.82 V for glassy carbon, 0.85 V for a Au electrode, 0.89 V for a homocysteine-modified Au electrode, and 1.01 V for a glassy carbon electrode coated with 2,2′-bis[2-(5,2′-bithienyl)]-3,3′-bithianaphthene oligomers.) Although the two DOPA enantiomers can be discriminated with the bulk PdGa single crystals, the differences are not very pronounced. This may have two causes. First, side reactions such as PdGa oxidation and radical adsorption occur in the potential window of DOPA oxidation (0.6-1 V vs. RHE), which limits the resolution. Second, besides the main (111) surface, other crystal surfaces and edges are inevitably exposed on the investigated PdGa bulk crystals, which makes the determination of the DOPA oxidation currents more challenging. We note that for the cubic B20 binary crystals with space group 198, the threefold rotation symmetry lies along the [111] direction, endowing the (111) surface with exotic electronic properties such as the circular photogalvanic effect. Thus, the polar (111) surface is expected to have the maximum response toward chiral-molecule adsorption and reaction. With this in mind, one may expect enhanced enantioselectivity by exposing only the (111) chiral surface. High-quality single-crystalline RhSi (111) thin films with a thickness of 40 nm were synthesized (Fig. ). CV investigation confirmed the oxidation of both L- and D-DOPA (Fig. ). As expected, the DPV curves exhibited well-defined DOPA oxidation peaks at around 0.6 V, suggesting a high enantioselective recognition ability at the polar (111) surface (Fig. ).
634cac1d86473a8255156d2a
7
Enantiomeric oxidation at a solid crystal surface is a typical heterogeneous reaction process; the difference in oxidation selectivity should therefore be reflected in the adsorption energies. To test this, we investigated the adsorption behavior of the DOPA enantiomers at the (111) crystal surface, taking the PdGa-A crystal as a model. The three-dimensional structures of L-DOPA and D-DOPA are displayed in Fig. . Under acidic conditions, both enantiomers have a hydrophobic catechol group, a positively charged amine group (NH3+), and a negatively charged carboxyl group (COO-). In the presence of a solid surface, the preferred adsorption sites are the highly charged cationic metal atoms, owing to cation-π interactions, according to previous studies. This is further confirmed by the adsorption geometries of the PdGa-A (111)/DOPA pairs optimized using density functional theory (DFT) in this work. Both L-DOPA and D-DOPA adsorb firmly onto the Pd sites via the O atoms of the carboxyl group (Fig. , Fig. ). Such an adsorption geometry is consistent with the surface electronic structure of PdGa (111), where the Pd site is positively charged with a higher valence state than in Pd metal. The Pd-O bond lengths are 2.24 and 2.29 Å for L-DOPA and D-DOPA, respectively, which fall in the range of hydrogen bonding but are much longer than the bonds between metals and oxygen intermediates in other catalytic reactions such as oxygen evolution. To give another perspective on the Pd-O bonding strength, the binding energies between PdGa and DOPA were calculated (Fig. ). As expected, the binding energy of the PdGa-A/L-DOPA pair (0.112 eV) is higher than that of the PdGa-A/D-DOPA pair (0.063 eV). The adsorption energy difference between the two DOPA enantiomers is 48 meV, well above the thermal energy at room temperature (~25 meV). Furthermore, we found that more electrons are transferred from PdGa to L-DOPA (0.137 e-) than to D-DOPA (0.006 e-) (Fig. ). These quantitative calculation results confirm that the adsorption of the L-DOPA enantiomer is more favorable than that of D-DOPA at the surface of the PdGa-A crystal, leading to the observed higher oxidation current of L-DOPA. We now turn to the question of why bond formation is more favorable for L-DOPA at the PdGa-A surface. One possible answer is that the spin polarization of the electrons from PdGa may differ from that of the adsorbates, which would induce differences in electron transfer efficiencies and bonding energies. With this in mind, we calculated the band dispersion and extracted the spin polarization (Sz) of the spin angular momentum (SAM) from the phase-dependent atomic-orbital projection of the Bloch wavefunction for both PdGa-A and PdGa-B (Fig. ). We found six spin-split non-degenerate bands along the symmetry line Γ-R around the Fermi level, merging into three two-fold degenerate bands at the time-reversal invariant point Γ. Although the spin-resolved data confirm an opposite sign of Sz at +ky and -ky for the spin-split sub-bands, all components of the spin polarization vanish when summed, so the net spin angular momentum polarization along the [111] direction vanishes.
Therefore, the role of spin polarization and spin-selected electron transfers can be neglected in our case.
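To put the 48 meV adsorption-energy difference in perspective, a simple Boltzmann estimate (an illustrative back-of-the-envelope check, not part of the original analysis) of the relative equilibrium surface occupancy of the two enantiomers at room temperature gives

$$\frac{\theta_{\mathrm{L}}}{\theta_{\mathrm{D}}} \approx \exp\!\left(\frac{\Delta E_{\mathrm{ads}}}{k_{\mathrm{B}}T}\right) = \exp\!\left(\frac{48\ \mathrm{meV}}{25\ \mathrm{meV}}\right) \approx 7,$$

i.e., under ideal equilibrium adsorption L-DOPA would occupy the PdGa-A surface roughly seven times more than D-DOPA, consistent with the measurably higher oxidation current for L-DOPA.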
634cac1d86473a8255156d2a
8
In contrast to the SAM, the spin-split branches exhibit an OAM polarization parallel to the momentum, with Lz always carrying opposite signs at the +ky and -ky points. Although the OAM energy is generally much weaker than that of the SAM, it is sufficient to interact with right- or left-circularly polarized light to give measurable circular dichroism (CD) signals. Moreover, our calculations show that L-DOPA and D-DOPA carry opposite OAM along the same momentum direction when projected onto the lowest unoccupied molecular orbital (LUMO, O 2p), indicating that electrons moving along the [111] direction follow helical wavefronts of opposite handedness (Fig. ). This shifts the LUMO when a bond forms with PdGa. Most importantly, the energy shift of the LUMO is closely related to the sign of the OAM, i.e., to the chirality of the PdGa. This changes the energy gap between the highest occupied molecular orbital (HOMO) and the LUMO, resulting in chirality-dependent binding energies of DOPA when antibonding states form with the O 2p orbitals. This finally results in enantioselective adsorption and oxidation, depending on the mutual compatibility of the OAM of adsorbate and substrate.
634cac1d86473a8255156d2a
9
To conclude, we demonstrate that enantiomeric recognition can be initiated by the intrinsic OAM polarization in the B20 family of chiral compounds. This was experimentally confirmed by the different oxidation kinetics of DOPA enantiomers at the surface of PdGa (111) single crystals. Adsorption and bond formation between the DOPA enantiomers and the chiral crystals depend strongly on the mutual compatibility of their OAM. Our study highlights OAM polarization as a promising driving force for enantiomeric recognition. We hope this work will stimulate further studies of other solid-state crystals (minerals), in which more pronounced enantiomeric recognition, enantiomer separation, and asymmetric catalytic reactions may occur.
60c74b29469df471e3f43e7b
0
Aluminium nitride (AlN) is a widely used semiconductor material in several electronic devices due to its direct wide bandgap of 6.2 eV 2 . The conventional method for depositing epitaxial films of AlN is chemical vapor deposition (CVD) using trimethylaluminium (TMA), Al2(CH3)6, and ammonia, NH3, at temperatures typically above 1000 °C. This limits the applications of AlN to substrates and underlying film materials that can withstand such temperatures. An alternative low-temperature deposition route is atomic layer deposition (ALD), a time-resolved form of CVD in which the Al and N precursors are pulsed into the deposition chamber sequentially, separated by inert gas pulses. This gas pulsing makes the process depend solely on surface chemical reactions and avoids gas-phase chemical reactions, which typically require high temperatures. ALD of AlN has previously been reported using TMA with NH3 via thermal , NH3-plasma7,8,9,10 and N2-plasma routes. Plasma processes can lead to crystalline and conformal AlN films at temperatures <300 °C. To obtain a crystalline AlN film via the thermal route, a temperature >375 °C is needed. The TMA molecule is reported to decompose above 330 °C12 by breaking one of the Al-C bonds, forming dimethylaluminium (DMA) and a methyl group at temperatures up to 500 °C. Films from thermal ALD of AlN typically contain 5-10 at.% C depending on the deposition temperature, making thermal ALD routes not ideal for depositing electronic-grade AlN. Plasma ALD is also associated with high levels of carbon impurities, as atomic hydrogen produced in the plasma discharge can induce a chemistry that traps carbon impurities in the film by abstracting H2 from surface methyl groups. Deposition of crystalline AlN with low carbon levels is thus dependent on an efficient carbon-cleaning surface chemistry.
60c74b29469df471e3f43e7b
1
Herein, we report a low-temperature, ALD-like CVD approach for depositing AlN using TMA and NH3 delivered in separate pulses at 480 °C. We show that adding an extra gas pulse, with the gas flow perpendicular to the substrate surface, between the TMA and NH3 pulses leads to a drastic decrease in C content and increases the crystalline quality of the films. Quantum-chemical density functional theory and kinetic modelling suggest that the lower carbon content in the films is attributable to the prevention of re-adsorption of desorbed methyl groups onto the AlN surface.
60c74b29469df471e3f43e7b
2
The AlN films were deposited in a Picosun R-200 ALD system with a base pressure of 400 Pa and a continuous N2 (99.999 %, further purified with a getter filter to remove moisture) flow through the deposition chamber. The chamber walls and the substrate holder were heated using separate heating systems. Si (100) wafers, used without further cleaning, were cut into 15 × 15 mm2 pieces and used as substrates. The substrates were loaded into the deposition chamber without a load-lock. Commercially available TMA (Pegasus Ltd, Alpha grade) in a stainless-steel bubbler was used at room temperature with 100 sccm N2 as carrier gas. NH3 (AGA/Linde, 99.999 %) was used as the nitrogen source in the process. The TMA pulse time was set to 0.1 s with a 6 s purge, and the NH3 pulse time was set to 12 s with a 6 s purge, unless otherwise stated. H2 (99.999 %, further purified with a getter filter to remove moisture), N2, or Ar (99.999 %, further purified with a getter filter to remove moisture) was used as a cleaning pulse between the TMA and NH3 pulses. It should be noted that the flow of these cleaning pulses (150 sccm) was perpendicular to the substrate surface, whereas the purge flow was horizontal. Besides the flow direction, the cleaning pulse and the purge also differed in flow rate: the purge gas (N2) had a total continuous flow of 500 sccm into the reaction chamber, distributed over the six precursor gas channels, with five gas lines at 60 sccm each and one line, acting as the main N2 line into the chamber, at 200 sccm. The cleaning pulse had a flow of 150 sccm regardless of which gas was used and was delivered through the plasma gas channel located above the chamber.
60c74b29469df471e3f43e7b
3
The crystallinity of the deposited films was studied using a PANalytical EMPYREAN MRD XRD with a Cu-anode X-ray tube and a 5-axis (x-y-z-v-u) sample stage in grazing incidence (GIXRD) configuration with a 0.5° incidence angle. A PANalytical X'Pert PRO with a Cu-anode tube and Bragg-Brentano HD optics was used in X-ray reflectivity (XRR) mode to measure the thickness of the films. The XRR data were fitted with the PANalytical X'Pert Reflectivity software using a two-layer model (AlN/substrate). A LEO 1550 scanning electron microscope (SEM) with an acceleration voltage of 10-20 kV was used to study the morphology of the films. A Kratos AXIS Ultra DLD X-ray photoelectron spectrometer (XPS) equipped with Ar sputtering was used to analyse the composition (5% error margin of the atomic percentage) and chemical environments in the films. The composition of the films was obtained after sputter-cleaning the surface with an Ar beam energy of 0.5 keV over a sputtering area of 3 mm2 for 600 s. The CasaXPS software was used to evaluate the data. Gaussian-Lorentzian functions and a Shirley background were used to fit the experimental XPS data.
60c74b29469df471e3f43e7b
4
Quantum-chemical density functional theory (DFT) computations were applied to study the mechanism of desorption of methyl groups from the AlN surface. The calculations were performed using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) functional together with the Grimme D3 empirical dispersion correction 17 using the Vienna Ab initio Simulation Package (VASP) . For the representation of pseudopotentials for Al, N, C and H the projector augmented wave (PAW) method provided by that package was used, where in the surface studies a fractional charge of 0.75 was used for the H atoms bonded to the N atoms at the bottom (opposite side) terminated surface layer to facilitate the saturation of dangling bonds. The hexagonal unit cell of the bulk AlN structure was first derived by energy minimization with respect to the lattice parameters and coordinates using a 7×7×7 grid of gamma-centred k-points, resulting in a = 3.114 Å and c = 4.995 Å. Previous experimental determination of the lattice parameters gives a = 3.11131 Å and c = 4.98079 Å, which agrees well with the DFT calculations. A model of a 2-dimensional surface was then constructed by cutting out a slab of 5 (Al, N) layers orthogonal to the c-direction (corresponding to a vertical height of about 10.6 Å from a "top" Al to a "bottom" N), with a hexagonal surface (2-dimensional) unit cell with asurf = 3.114 Å (and with the direction of the c-axis reversed compared to the crystal cell c-axis), see Fig. . The length of the c-axis was increased to 40.0 Å to create empty space between the slabs. A cell with the surface cell axes (asurf, bsurf, where bsurf = asurf) doubled was used in the calculations, which gives 4 Al atoms (or "surface sites") with a dangling bond that can be saturated by CH3 at the topmost surface. The atomic coordinates of the slabs (with or without adsorbates) were geometry optimized using a 3×3×1 k-point grid. Free energies were calculated from the computed vibrational mode frequencies of the optimized structures (Table ).
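The paper builds the slab directly for VASP; purely as an illustration of the geometry described above (wurtzite cell with the optimized lattice parameters, a slab cut orthogonal to c, vacuum between slabs, and a doubled in-plane cell giving 4 Al surface sites), a minimal sketch using the ASE library might look as follows. The layer handling and vacuum split are assumptions of this sketch, not the published setup.

```python
from ase.build import bulk, surface

# Wurtzite AlN with the DFT-optimized lattice parameters quoted in the text
aln = bulk("AlN", "wurtzite", a=3.114, c=4.995)

# Slab cut orthogonal to the c-direction with vacuum added between periodic images
# (how "layers" maps onto the 5 (Al, N) layers of the paper is an assumption here)
slab = surface(aln, (0, 0, 1), layers=5, vacuum=15.0)

# Double the in-plane surface cell -> 4 Al surface sites that can each host a CH3 group
slab = slab.repeat((2, 2, 1))
print(len(slab), "atoms; cell lengths:", slab.cell.lengths())
```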
60c74b29469df471e3f43e7b
5
The kinetics of the surface adsorption and desorption processes were simulated based on the quantum-chemical results. The adsorption rate was approximated from the impingement rate, as no tight transition state is present along the internal reaction coordinate. The reverse reaction rates were derived from the forward reaction rates to ensure that the thermodynamic equilibria can be reached given enough time. The DFT-based surface reaction rates, together with the gas-phase reaction mechanism and rate for C2H6 formation from CH3 taken from the literature, were employed in the kinetic model. The simulation of the kinetics was performed using the MATLAB SimBiology module . The simulations of the amount of surface species as a function of time start from a surface fully covered by methyl groups (4CH3(s)) at a pressure of 100 Pa and a temperature of 773 K. The pressure was held constant by the presence of non-interacting gas molecules (e.g. N2) plus molecules formed after desorption. The number of gas molecules was assumed to be much larger than the number of surface sites, i.e. the initial molar ratio of gas molecules/CH3(s) was set to 50. Two kinetic models are considered here. In the first model, the methyl radicals produced from the reactions are assumed to be completely purged away from the system at all times (i.e. by setting their adsorption rate to zero), similar to when H2, N2 or Ar is introduced between the TMA and NH3 pulses in the ALD cycle, while in the second model no extra pulse between TMA and NH3 is assumed.
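The paper performs these simulations in MATLAB SimBiology with DFT-derived rates; the following is only a minimal sketch of the same type of model in Python, with placeholder rate constants (not the published values) and the convention that purge=True corresponds to the first model above (re-adsorption rate set to zero).

```python
from scipy.integrate import solve_ivp

# Illustrative rate constants; the real values are derived from the DFT free energies.
K_DES = 1.0e3   # desorption  CH3(s) -> CH3(g)
K_ADS = 1.0e2   # re-adsorption of CH3(g) onto an empty surface site
K_REC = 1.0e1   # gas-phase recombination  2 CH3(g) -> C2H6


def rhs(t, y, purge):
    theta, ch3_gas, c2h6 = y                 # CH3 surface coverage, gas-phase CH3, C2H6
    ads = 0.0 if purge else K_ADS * ch3_gas * (1.0 - theta)
    des = K_DES * theta
    rec = K_REC * ch3_gas ** 2
    return [ads - des, des - ads - 2.0 * rec, rec]


for purge in (True, False):                  # model 1 (extra pulse) vs. model 2 (no pulse)
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0], args=(purge,), max_step=1e-3)
    label = "with extra pulse" if purge else "no extra pulse"
    print(f"{label}: final CH3 coverage = {sol.y[0, -1]:.3f}")
```

With purging switched on, the coverage decays toward zero as desorbed methyls are removed; without it, re-adsorption keeps the surface largely CH3-terminated, mirroring the qualitative behavior discussed in the results below.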
60c74b29469df471e3f43e7b
6
In initial experiments, H2 pulses of different lengths were added between the TMA and NH3 pulses. Without the addition of an H2 pulse, the C content in an AlN film deposited at 480 °C was measured to be 3.1 at.% by XPS analysis. Addition of an H2 pulse decreased the C content, Figure . A short H2 pulse of 3.3 s decreases the C content to around 1 at.%, and the carbon content remains at this low level for longer H2 pulse durations. The same trend is also observed for a deposition temperature of 400 °C (Fig. ). For depositions made at the lower temperature, 400 °C, the carbon content without the H2 pulse is slightly lower (2.6 at.%) than at 480 °C, and the amount of carbon decreases to about 1.5 at.% with the addition of an H2 pulse. The growth per cycle of the AlN films is stable upon increasing the length of the H2 pulse, Figure ; only a slight decrease in the growth per cycle with increasing H2 pulse length could be observed. The composition from XPS of the film deposited at 480 °C with a 19.5 s H2 pulse was 49.1 at.% Al, 46.0 at.% N, 4.3 at.% O and 0.6 at.% C, which gives an Al/N ratio of 1.07. High-resolution XPS spectra for Al 2p were fitted with two sub-peaks at 74.7 eV and 75.5 eV attributed to Al-N and Al-O bonds, respectively (Fig. ). For N 1s, the two sub-peaks were positioned at 397.9 eV and 399.
60c74b29469df471e3f43e7b
7
GIXRD measurements show that the films consist of polycrystalline, hexagonal AlN and that the crystallinity of the films increases with the duration of the H2 pulse: the intensity of the (100) reflection more than triples when H2 is added (black line) compared to without H2 (red line) (Fig. ). The full width at half maximum (FWHM) of the (100) peak also decreases from 0.657° to 0.503° upon addition of the H2 pulse, indicating a higher degree of crystallinity. This shows that adding an H2 pulse not only decreases the carbon content in the film but also increases the crystallinity. Using the Debye-Scherrer equation, an approximate crystallite size of 17 nm was calculated from the FWHM of the AlN (100) peak for the 19.5 s H2 exposure. To test whether the reduction in carbon content was due to a surface chemical reaction between surface methyl groups and the hydrogen gas, N2 and Ar were also tested as the gas pulse between the TMA and NH3 pulses. The pulse time for H2, N2 and Ar was kept constant at 19.5 s. XPS measurements show that the carbon content in the films is 1±0.4 at.% regardless of which gas is used.
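As a quick check of the quoted crystallite size, the Scherrer estimate can be reproduced as below; the shape factor K = 0.94 and the (100) peak position of roughly 2θ ≈ 33.2° for hexagonal AlN are assumptions of this sketch, not values stated in the text.

```python
import math

wavelength_nm = 0.15406     # Cu K-alpha
fwhm_deg = 0.503            # FWHM of the AlN (100) reflection with the 19.5 s H2 pulse
two_theta_deg = 33.2        # approximate (100) peak position of hexagonal AlN (assumed)
K = 0.94                    # Scherrer shape factor (assumed)

beta_rad = math.radians(fwhm_deg)
theta_rad = math.radians(two_theta_deg / 2.0)
crystallite_nm = K * wavelength_nm / (beta_rad * math.cos(theta_rad))
print(f"crystallite size ~ {crystallite_nm:.0f} nm")   # ~17 nm, matching the value in the text
```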
60c74b29469df471e3f43e7b
8
This suggests that the added pulse between TMA and NH3 does not act as a chemical cleaning pulse, but rather prevents re-adsorption of carbon-containing species desorbed from the surface. Interestingly, the different gases used influence the crystallinity of the films. GIXRD measurements of films deposited with H2, N2 and Ar show that the intensity of the (100) reflection decreases when going from H2 to N2 to Ar (Fig ). Theta-2theta scans showed the same trend
60c74b29469df471e3f43e7b
9
(Fig. ). The composition of the AlN film also changed with the gas used: higher O impurity levels were detected for N2 and Ar than for H2. When N2 and Ar were used, 5.2 and 5.5 at.% O were detected, respectively, compared to 4.3 at.% for H2. The Al/N ratio was 1.07 with H2, while it was 1.13 and 1.16 with N2 and Ar, respectively. We speculate that the hydrogen pulse acts to scavenge oxygen from the film surface and that this, together with the higher thermal conductivity of H2 (0.182 W m-1 K-1)
60c74b29469df471e3f43e7b
10
compared to N2 (0.0268 W m-1 K-1) and Ar (0.0185 W m-1 K-1), could explain the better crystallinity observed when H2 was used compared to N2 and Ar. The higher crystallinity observed with N2 compared to Ar is then ascribed to the higher thermal conductivity of N2. Kinetic simulations were used to obtain an atomistic-level understanding of the surface chemistry (Fig. ). Starting from a fully saturated surface (i.e., 4 CH3 groups at the 4
60c74b29469df471e3f43e7b
11
surface sites in the model), the first methyl group is seen to quickly desorb from the fully occupied surface. This could be expected since the desorption of a CH3 from a fully occupied surface is actually exergonic (and with a negligible reaction barrier), see Table , which is likely due to the repulsion between the closely-packed surface-adsorbed methyl groups. The
60c74b29469df471e3f43e7b
12
desorption rates become significantly slower with an increasing number of empty sites on the surface. This is to be expected since the repulsion between the adsorbed groups decreases as the number of empty sites increases from the most densely packed surface. The desorption rates were also found to increase with increasing temperature, which is also observed experimentally (Fig. ). The same trend has previously been shown on Al2O3, where higher temperatures decrease the amount of surface methyl groups. Figure shows the kinetics when a pulse of inert gas is added between TMA and NH3. Adding the extra pulse between the TMA and NH3
60c74b29469df471e3f43e7b
13
pulse will remove the methyl radicals from above the surface, which facilitates a faster decrease of the carbon-group surface coverage by suppressing the re-adsorption process. With no added pulse of inert gas between TMA and NH3 (Fig. ), the methyl groups previously released from the surface can either re-adsorb on the empty sites or be converted to C2H6 molecules, which is manifested by the persistence of the dominantly CH3-terminated surface in Figure . The C incorporation in the film versus time, as estimated from the computed surface coverage, is shown in Figure 5. The re-adsorption process leads to a significant increase in the carbon incorporation in the film (Fig. ). On the other hand, when a pulse is introduced between TMA and NH3, the re-adsorption process is suppressed and desorption dominates, resulting in a reduction of the carbon incorporation in the film (Fig. ). This correlates with the experimental results in Figure . We suggest that the extra pulse between TMA and NH3 assists in the diffusion of desorbed methyl groups away from the AlN surface and prevents their re-adsorption. In the kinetic simulations (Fig. ), the methyl groups on the surface desorb quickly and shortly thereafter react to form ethane (C2H6). This facilitates a faster elimination of the carbon groups by suppressing the re-adsorption pathway according to our kinetic model.
60c74b29469df471e3f43e7b
14
Without the extra gas pulse, the desorbed surface methyl groups can re-adsorb before the NH3 is introduced into the chamber, which could lead to them being trapped in the film. Carbon impurities in semiconductor-grade materials are a problem, typically, for films deposited at high temperature, and we believe that this ABC-type pulsed process (where A is the TMA pulse, B the extra pulse and C the NH3 pulse) could be a route to high-temperature ALD films with low C impurities.
60c74b29469df471e3f43e7b
15
We show that adding an extra purge gas pulse perpendicular to the substrate surface decreases the carbon impurities in the film drastically, to approximately 1 at.%, giving an Al/N ratio of 1.07. The surface chemistry and the role of the extra pulse between TMA and NH3 were investigated with quantum-chemical and kinetic modelling. The modelling shows that the most loosely bound surface methyl groups can desorb quickly with a negligible reaction barrier. The added pulse between TMA and NH3 is therefore believed to clean the region just above the surface of desorbed methyl groups, suppressing their re-adsorption. Both the experiments and the calculations show that the desorption of surface methyl groups increases with increasing temperature. The growth of the AlN film depended somewhat on the extra pulse time, where longer pulses gave a slightly smaller growth rate, attributed to fewer C impurities in the film. The type of gas used for the pulse between TMA and NH3 had a stronger impact on the crystallinity, which increased when going from Ar to N2 to H2; we ascribe this to the thermal conductivities of the gases.
64689c18a32ceeff2df3d012
0
ABSTRACT: Supramolecular polymer chemistry has recently garnered considerable attention in multidisciplinary science; however, the mutual choice of supramolecular hosts and monomer units is still made on a trial-and-error basis. Curved-π buckybowls appear to be good "seeds" for supramolecular polymers, but the very fast bowl-to-bowl inversion in solution hampers their wide usage. In this study, we found that a pristine buckybowl, sumanene, can form solution-state supramolecular polymers via bowl-to-bowl inversion. We also demonstrated that sumanene supramolecular polymers can be dynamically controlled by external stimuli, in which solvation plays a significant role. This study not only provides new guidelines for the rational design of supramolecular polymers, particularly for the use of buckybowls, but also reveals interesting dynamic behaviors of supramolecular polymerization.
64689c18a32ceeff2df3d012
1
Supramolecular polymers have advanced significantly owing to their multidisciplinary fusion. The explosive growth in their construction stems from the nearly infinite combinations of supramolecular hosts, such as cyclodextrins, 1-4 cucurbit[n]urils, 5-10 calix[n]arenes, and pillar[n]arenes, and monomer units, such as benzene-1,3,5-tricarboxamides, perylene bisimides, and porphyrins, since Lehn's discovery in 1990. The first supramolecular polymer was created via complementary hydrogen bonds between uracil and 2,6-diacetylaminopyridine. This milestone was followed by successes in the development of self-assembled polyrotaxanes and functional supramolecular gels. In 2006, Meijer et al. reported solvent-assisted nucleation upon supramolecular polymerization, which opened up new formation mechanisms for the rational design of monomeric cores. Thereafter, two significant approaches, living supramolecular polymerization and supramolecular chain polymerization, were found to precisely control polymer lengths. This body of work established that supramolecular polymerization can be dynamically controlled by external stimuli such as solvents and photoirradiation.
64689c18a32ceeff2df3d012
2
Curved- aromatics such as buckybowls, -bent porphyrin derivatives, norcorroles, and pyrroles are particularly attractive compounds that exhibit different characteristics on concave and convex surfaces. For example, in the case of buckybowl corannulene, Aida et al. demonstrated that chaingrowth supramolecular polymerization proceeds in methylcyclohexane (MCH) through intermolecular hydrogen-bonding networks at the side chains of the central corannulene core. In contrast, Würthner et al. reported that self-assemblies of naphthalimide-annulated corannulene occur in MCH, toluene, CCl4, and tetrachloroethane. These results imply that corannulene can effectively induce supramolecular polymerization by introducing appropriate side chains on the central core. Only a pristine buckybowl appears hard to form a supramolecular polymer, most likely due to the shallow bowl depth of corannulene (0.87 Å) that allows for fast bowl-to-bowl inversion in the solution (see Figure , top); hence, discovering a buckybowl "seed" still faces a significant challenge.
64689c18a32ceeff2df3d012
3
In this study, we unexpectedly found that sumanene (Figure , bottom), a pristine buckybowl, enables the formation of a buckybowl-stacked supramolecular polymer. Sumanene and its derivatives are known to form columnar structures, a type of supramolecular polymer, in the crystalline state. More importantly, the deeper bowl depth of sumanene (1.11 Å), which leads to sufficiently slower bowl-to-bowl inversion at room temperature in solution, is believed to be a critical factor in the spontaneous supramolecular polymerization. Herein, we report the unprecedented supramolecular polymerization of the pristine buckybowl sumanene. This finding prompted us to examine how external stimuli, such as temperature, pressure, and solvents, affect the supramolecular polymerization behavior. A saturated MCH solution of sumanene (1.95 mM) was dropped onto a highly oriented pyrolytic graphite (HOPG) substrate and observed using atomic force microscopy (AFM). Surprisingly, a supramolecular polymer with bundle morphology was observed (Figure ), with a width of 74.0 ± 28.6 nm and a height of 2.7 ± 1.6 nm (all data are given in Figure in the SI). The diameter of a single sumanene molecule was calculated to be 8.2 Å based on the X-ray crystal structure, indicating that multiple columns in a single bundle were deposited on the HOPG substrate. The change in morphology upon changing the concentration was then investigated. When a diluted 95.3 μM MCH solution was dropped onto the HOPG substrate, the bundled supramolecular polymers observed in the saturated solution were no longer present, and only globular aggregates were observed (Figure ). Therefore, the morphology can be changed by adjusting the solution concentration. To further investigate the change in morphology upon changing the solvent, a 2.08 mM CH2Cl2 solution was dropped onto the HOPG substrate; although the concentration was almost the same as that of the MCH solution, only globular aggregates were observed (Figure ). In contrast, a saturated CH2Cl2 solution (34.3 mM) again yielded bundle-like supramolecular polymers (Figure ). These findings indicate that the morphology (polymer vs. aggregate) of sumanene is dynamically controlled by the concentration and solvent choice. Solvent effects are critical factors governing the states of sumanene polymers (vide infra); hence, for the first time, buckybowl-based supramolecular polymers were observed using AFM. Next, we investigated the solution-state behavior of sumanene supramolecular polymers. As shown in Figure , concentration-dependent UV spectra of MCH solutions of sumanene (4.1 μM to 2.09 mM) were measured. They show that the extinction coefficients of the 0-0 band at 356 nm decrease with increasing concentration. The degree of aggregation (αagg; see S9-10 in the SI) vs. Ct plot, i.e., the sigmoidal curve, indicates the formation of the sumanene supramolecular polymer in MCH according to the isodesmic model; its equilibrium constant (K) was calculated to be 18600 M-1 (Figure ). In the case of the cooperative model, by contrast, the conformations, electronic states and dipole moments in each monomer steadily change during the growth stage of polymerization.
However, as shown in Figure , density functional theory (DFT) calculations revealed almost no change in the dipole moments of each monomer unit (from monomer to heptamer), clearly indicating that the isodesmic model is the most appropriate description of sumanene polymerization (see the comparison of the isodesmic vs. cooperative models in Figure and Table ). The number-average degree of polymerization (DP; see S9-10 in the SI) can be calculated from the equilibrium constant; the DP at 2.09 mM was 6.7.
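As a numerical check of the quoted values, the standard isodesmic (equal-K) relations can be evaluated directly; the closed-form expressions below are the textbook ones (assumed here to be those given in S9-10 of the SI), with K and Ct taken from the text.

```python
import math

K = 18600.0      # isodesmic equilibrium constant in MCH (M^-1), from the fit above
Ct = 2.09e-3     # total sumanene concentration (M)

# Isodesmic (equal-K) model: Ct = [M] / (1 - K[M])^2, solved for the free monomer [M]
s = math.sqrt(4.0 * K * Ct + 1.0)
monomer = (2.0 * K * Ct + 1.0 - s) / (2.0 * K ** 2 * Ct)

alpha_agg = 1.0 - monomer / Ct       # degree of aggregation
dp_number = 0.5 * (1.0 + s)          # number-average degree of polymerization

print(f"alpha_agg = {alpha_agg:.2f}, DP = {dp_number:.1f}")   # DP ~ 6.7, as quoted in the text
```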
64689c18a32ceeff2df3d012
4
The shape and size of the supramolecular polymers formed by sumanene in solution were then investigated by small-angle X-ray scattering (SAXS). As shown in Figure , broad peaks, i.e., multiple components, were observed at q = 4-7 nm-1 in an MCH solution of sumanene (2.07 mM, DP = 6.7). A cylindrical model was assumed for the sumanene supramolecular polymer (see S12 in the SI), and the data were reasonably fitted by a sum of tetrameric and pentameric polymer functions, as shown in Figure . Therefore, these SAXS experiments and spectroscopic measurements suggest that sumanene forms tetrameric to heptameric supramolecular polymers in MCH-saturated solutions. The construction of solution-state sumanene supramolecular polymers prompted us to examine their dynamic control by external stimuli. Temperature control was tested in MCH. As shown in Figure , no change in the extinction coefficient was observed in the temperature range from 0 to 60 °C, suggesting that the effect of temperature on the formation of sumanene supramolecular polymers is negligible in the measured temperature range. This may indicate the thermodynamic stability of the MCH-assisted supramolecular polymers.
64689c18a32ceeff2df3d012
5
Next, we attempted to control the formation of supramolecular polymers using hydrostatic pressure. To date, we have revealed hydrostatic-pressure-induced spectral responses of functional molecules, supramolecules, polymers, and biomacromolecules. This hydrostatic pressure-control concept can be accounted for using Eq. 1, in which the volume change term (ΔV°), related to solvation/desolvation and conformational changes, plays a decisive role in modulating dynamic equilibrium systems. Thus, this approach was applied to the present sumanene polymerization. The concentration-dependent UV spectra in 7:3 (v/v) MCH-CH2Cl2 were measured at four pressures ranging from 100 to 250 MPa. As shown in Figures , no new absorption bands appeared upon hydrostatic pressurization; however, decreases in the extinction coefficients were observed with increasing concentration, as was the case at atmospheric pressure (0.1 MPa). The ΔV° value was then calculated from the K values (Tables ) obtained for supramolecular polymerization at the four measured pressures, according to Eq. 2.
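Eqs. 1 and 2 themselves are not reproduced in this excerpt; the standard thermodynamic relations they most plausibly correspond to (an assumption of this note) link the pressure dependence of the equilibrium constant to the reaction volume and give the linearized form used to extract ΔV° from a plot of ln K versus P:

$$\left(\frac{\partial \ln K}{\partial P}\right)_{T} = -\frac{\Delta V^{\circ}}{RT}, \qquad \ln K(P) \approx \ln K(P_{0}) - \frac{\Delta V^{\circ}}{RT}\,(P - P_{0}).$$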
64689c18a32ceeff2df3d012
6
The slightly positive V° as 2.7 ± 0.7 cm 3 (Figure and Table ) suggests that the sumanene supramolecular polymerization responds little to hydrostatic pressure. This may indicate that the sumanene monomeric seeds stack densely on top of each other in the supramolecular polymer, even under atmospheric pressure. Finally, here, concentration-dependent UV spectra were measured by varying the mixing ratios of MCH in MCH+CH2Cl2 (see Figures and Tables ). As described above, K in MCH was 18600 M -1 , whereas in CH2Cl2 it was 630 M -1 , indicating a 30-fold change in K depending on the solvent character. Supramolecular polymers formed via the isodesmic model at all solvent mixing ratios. Moreover, a natural logarithm plot of K upon supramolecular polymerization as a function of the volume fraction of MCH in MCH+CH2Cl2, shows the existence of a threshold at VMCH/Vtotal as 0.25 (Figure and Table ). This threshold is attributable to the different solvations of MCH and CH2Cl2, both of which shape the solvent core around the polymer. As seen in Figure , in the CH2Cl2rich region, the sumanene monomer is much more stabilized/solvated by the better solvator, CH2Cl2, leading to lower formation of a supramolecular polymer owing to monomeric action. In contrast, in the MCH-rich region, the formation of a supramolecular polymer is much more energetically stable than monomer solvation by MCH. Therefore, it can be emphasized that K of the supramolecular polymers can be drastically switched around the threshold. In conclusion, for the first time, solution-state supramolecular polymerization of pristine buckybowl sumanene has been demonstrated. We also found that the solvent, rather than temperature and pressure, can dynamically control sumanenebased supramolecular polymerization at the threshold of the MCH-rich ratio in MCH+CH2Cl2. Finally, it should be emphasized that our findings will provide not only new design guidelines for constructing supramolecular polymers but also dynamic control approaches using relevant external stimuli.
655417d1dbd7c8b54b477786
0
The text message that originates from the front-end is then transmitted to the ChatGPT back-end, which was built on the foundation of CallingGPT. This allows a Python function documented with a Google-style docstring to be converted into a JSON format recognizable by ChatGPT, which can then be invoked whenever ChatGPT finds it necessary. Furthermore, it closes a feedback loop between ChatGPT and the local Python function, as the suggested function will be immediately executed locally and its return value sent back to ChatGPT. One obstacle we faced was that some tasks required the execution of perhaps three functions consecutively. However, ChatGPT did not always complete all the steps; it might return an incomplete or even failed message after executing one or two functions. To enhance the robustness of function chain-calling, we implemented a simple trick in CRESt, which is essentially adding more "prompt guidance" to the function return message. This approach turned out to be quite effective. Specifically, ChatGPT receives one of three templates after it calls a function:
655417d1dbd7c8b54b477786
1
1. Call again: "function successfully called with return value: {ret_val}, please call this function again." 2. Proceed: "function successfully called with return value: {ret_val}, please go to the next step." 3. Function guide: "function successfully called with return value: {ret_val}, please call {function_name} next." To improve the robustness of the workflow, it is better to preset multiple branches in the local Python functions, each ending with one of the templates listed above (a sketch is given below). For example, if the argument that ChatGPT provides fails the assertion check, the function enters a branch with a "call again" template. With the failure reason included in the return value and "please call this function again" stated explicitly, ChatGPT is more likely to reattempt the same function rather than immediately replying to the user with a failure message. In the future, it would be highly beneficial if OpenAI provided more knobs for the function-calling API (e.g. the probability distribution among functions and the threshold for calling), thereby making the behavior more controllable.
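As an illustration of how a Google-style docstring, the branch structure, and the return-message templates fit together, a hypothetical end-effector is sketched below; the function name, the hardware call, the follow-up function flag_dead_cell, the 0.1 V threshold, and the schema layout are all invented for this sketch and are not taken from the CRESt or CallingGPT codebases.

```python
def read_potentiostat(cell_id: str) -> float:
    """Placeholder for the real instrument driver (hypothetical)."""
    return 0.74


def measure_ocv(cell_id: str) -> str:
    """Measure the open-circuit voltage of a test cell.

    Args:
        cell_id (str): Identifier of the cell on the test stand.

    Returns:
        str: A templated message that steers ChatGPT's next action.
    """
    try:
        voltage = read_potentiostat(cell_id)
    except ConnectionError as err:
        # "Call again" branch: put the failure reason into the return value
        return (f"function successfully called with return value: connection error ({err}), "
                f"please call this function again.")
    if voltage < 0.1:
        # "Function guide" branch: explicitly name the next function to call
        return (f"function successfully called with return value: {voltage:.3f} V, "
                f"please call flag_dead_cell next.")
    # "Proceed" branch
    return (f"function successfully called with return value: {voltage:.3f} V, "
            f"please go to the next step.")


# Approximate shape of the JSON schema a CallingGPT-style parser would derive from the
# docstring above (layout assumed, not copied from CallingGPT):
MEASURE_OCV_SCHEMA = {
    "name": "measure_ocv",
    "description": "Measure the open-circuit voltage of a test cell.",
    "parameters": {
        "type": "object",
        "properties": {
            "cell_id": {"type": "string",
                        "description": "Identifier of the cell on the test stand."}
        },
        "required": ["cell_id"],
    },
}
```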
655417d1dbd7c8b54b477786
2
Active learning is considered a good starting point for autonomous experimental science since it works well with small datasets. Data acquisition is the most significant challenge in learning projects that involve real-world experiments. Unlike in the virtual world, each datapoint in the physical world can be fairly expensive and time-consuming to obtain - often a dataset of 1000 points is considered substantial. Given these conditions, the strategy for sampling the design space becomes of paramount importance. The primary function of active learning is to interactively suggest the parameter combinations to test in the next batch, backed by rigorous mathematical principles. There are various well-built frameworks on GitHub; the one we implemented in CRESt is the Ax platform, developed by a team at Meta and built upon BoTorch. Ax offers a well-implemented SQL storage option, allowing one to resume a previous active learning campaign even if the GPT backend is reset, by retrieving the history stored in the database.
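A minimal sketch of the kind of loop this enables, using the Ax service API, is given below; the campaign name, parameter names, bounds, and objective are invented for illustration, and the exact create_experiment signature varies between Ax versions.

```python
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties


def run_experiment(params):
    """Placeholder for the real CRESt measurement workflow (hypothetical)."""
    return 1.0  # measured objective value


ax_client = AxClient()
ax_client.create_experiment(
    name="catalyst_recipe",                           # hypothetical campaign name
    parameters=[
        {"name": "pd_fraction", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "anneal_temp_c", "type": "range", "bounds": [300.0, 700.0]},
    ],
    objectives={"power_density": ObjectiveProperties(minimize=False)},
)

for _ in range(10):                                   # one small active-learning batch
    params, trial_index = ax_client.get_next_trial()  # suggested parameter combination
    value = run_experiment(params)
    ax_client.complete_trial(trial_index=trial_index,
                             raw_data={"power_density": value})

# Persist the campaign so it can be resumed after a backend reset
# (Ax also provides SQL-backed storage for the same purpose).
ax_client.save_to_json_file("campaign.json")
```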
655417d1dbd7c8b54b477786
3
End-effectors are a set of subroutines ready to be invoked via HTTP requests. Some of these involve information retrieval tasks (local or public database queries, such as the Materials Project ), while others have tangible real-world impact, as we have shown in the demo (liquid handling robot, laser cutter, pump, gas valve, robotic arm), primarily the components used in data collection. The automation of these devices is mostly handled by PyAutoGUI, a Python library that simulates human mouse and keyboard actions. However, we expect this redundant step will eventually become obsolete, as most laboratory equipment should ideally provide a dedicated interface for AI access. Boiko et al. previously reported exceptional work using GPT-4 to script an Opentrons OT-2 liquid handling robot for organic chemical synthesis. It is truly fascinating that the AI can independently design experiments; however, we noticed that providing too much freedom to LLMs can lead to hidden troubles. Due to the intrinsic stochasticity of LLMs, the real-time generated OT-2 script may not perfectly align with the intended design. The greatest risk stems from our inability to discern whether the script deviates from the intended design without a meticulous line-by-line inspection. This is mostly because (i) LLMs are highly skilled at generating seemingly correct code, and (ii) many OT-2 scripts are repetitive and somewhat abstract, as people cannot easily rehearse the mixing process in their minds by looking at 'A1', 'A2'. Moreover, a deviation from the intended design may not be flagged as an error since the syntax can still be correct, increasing the risk in long-term learning projects as tracing potentially erroneous data points becomes a great challenge. As a mitigation strategy, we adopted a balanced approach: allow LLMs the freedom to devise the recipes (in CRESt, this is primarily driven by active learning), while keeping the OT-2 scripts' framework fixed and preset by human experts.
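To illustrate the "fixed framework, variable recipe" compromise, a skeleton OT-2 protocol might look like the following; the labware choices, deck slots, target well, and volumes are placeholders chosen for this sketch, with only the RECIPE dictionary intended to be filled in by the active-learning loop.

```python
from opentrons import protocol_api

metadata = {"apiLevel": "2.13"}

# Only this recipe (source well -> volume in uL) changes between iterations;
# the values below are placeholders.
RECIPE = {"A1": 120, "A2": 60, "A3": 20}


def run(protocol: protocol_api.ProtocolContext):
    # Fixed, human-vetted framework: labware, deck slots, and pipetting pattern never change
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", "1")
    stocks = protocol.load_labware("nest_12_reservoir_15ml", "2")
    plate = protocol.load_labware("nest_96_wellplate_200ul_flat", "3")
    p300 = protocol.load_instrument("p300_single_gen2", "right", tip_racks=[tips])

    # Dispense each stock into the same target well; only the volumes vary between runs
    for source_well, volume_ul in RECIPE.items():
        p300.transfer(volume_ul, stocks[source_well], plate["A1"], new_tip="always")
```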
655417d1dbd7c8b54b477786
4
What Large Language Models (LLMs) can bring to the realm of science and engineering is a question we have been pondering since the advent of ChatGPT. There is no question that LLMs have already shown superb potential as literature reviewers; all one needs is to feed them more literature with full content. Then what else? Beyond the role of experimentalist's assistant, which we have just developed in the form of CRESt, we envision they will also play a transformative role in at least the following three dimensions: Instrument coach. Presently, researchers must comprehend the theoretical basis of any technology they wish to utilize, along with the specific operations (sometimes empirical rules or "tricks") of an individual instrument, which can vary significantly from one manufacturer to another. This latter requirement involves non-trivial effort, e.g. a series of training sessions for a shared facility or reading a 200-page manual for a group-owned instrument, but is it truly indispensable? We foresee that, in the imminent future, researchers will merely need to articulate their needs in plain language, and LLMs will translate these into the optimal parameter settings (which is what an instrument specialist does now). Upon request, the exact section of the manual can be cited for users who wish to investigate further. Technically, this can be made possible by the vendor appropriately fine-tuning an LLM base model, which can be done today.
655417d1dbd7c8b54b477786
5
Pipeline diagnostician. LLMs could help identify the root causes of irreproducible results when paired with multi-sensor-equipped robots or drones. In the future, an ideal experimentation paradigm would log the lifetime recording and all the metadata of every sample. When an inexplicable phenomenon occurs, all related logs can be fed into multi-modal LLMs for analysis. Leveraging their superior hypothesis-generation ability, the models can propose a list of potential causes, allowing human experts to further investigate the top few they deem likely. This approach could also be applied in industrial processing pipelines: if a significant drop in production yield is noticed, LLMs can be employed to identify the "culprit", with human engineers stepping in when complex real-world adjustments are required. This role becomes viable once LLMs can process vast amounts of images (videos), and will be further enhanced when multimodal information (vendor-provided metadata for samples, moisture sensing, sound sensing, etc.) is well aligned with the visual information.
655417d1dbd7c8b54b477786
6
Mechanism narrator. We anticipate LLMs will excel at applying established scientific principles to novel experimental phenomena. A significant portion of the work in the mechanism-exploration stage consists of pattern-matching tasks (e.g. extracting subtle features from a spectrum and comparing them with a standard database), which fall within the competence of LLMs. A typical procedure could be as straightforward as asking the LLM: a sample with this composition and processing history exhibited superior performance; here are all the characterization results (scanning electron microscopy, X-ray diffraction, etc.); please explain why this result is so good. Human researchers may examine the most reasonable explanations from the range of narratives the LLM generates and start the scientific discussion from there. However, this process will pose the greatest challenge, with requirements including (i) image input and alignment with scientific terms, (ii) information retrieval capabilities from online physical-science databases, (iii) LLMs being pretrained on the main text and supplements of scientific journals, and (iv) cutting-edge subfield ML models invokable as plug-ins.
655417d1dbd7c8b54b477786
7
CRESt is just a starting point of how LLMs can assist experimental scientists, and we believe the true potential of LLMs lies in their hypothesis-generating capability. Humans possess a relatively limited knowledge base but exceptional causal inference capabilities, allowing us to produce pinpoint hypotheses, albeit not in large quantities. In contrast, relying on the extensive knowledge base and the emerging ability to discover the key pattern in a large datasheet via the Excel plugin, AI can generate numerous hypotheses in little time, but is usually less discerning today. Therefore, this is not a story of AI competing with humans, but rather one of AI complementing humans. In the paradigm of "AI suggests, humans select", we hope the strength of both parties can be leveraged.
655417d1dbd7c8b54b477786
8
Pipeline Diagnostician with Meta Glasses (updated on Oct 25, 2023) With the launch of ChatGPT-4V and Meta smart glasses , we managed to add visual lab observation to CRESt earlier than anticipated, with a demonstration video available on YouTube. At the time the demo was concluded, Meta smart glasses were closed-source, and OpenAI had yet to unveil official support for image uploads via its API. Consequently, we resorted to incorporating numerous clunky automation code snippets to ensure the workflow's functionality, much of which we anticipate will become redundant soon. Figure is a breakdown of the primary workflow: the user wears Meta Wayfarer smart glasses paired with a physical Android phone (the Windows Subsystem for Android does not support Bluetooth connectivity and therefore cannot be employed). Upon the verbal command "Hey Meta, take a picture", the glasses capture an image and store it locally. An additional step is required to transfer the image from the glasses to the Android phone, achieved through a WIFI hotspot connection originating from the glasses. This extra step is automated by mirroring the Android screen on a PC (via scrcpy ) and simulating gestures (via PyAutoGUI 20 ). Following this, the ADB tool is used to retrieve the image from the Android device to the PC. An automated web script, powered by gpt4-image-api , subsequently uploads the image to the ChatGPT website and auto-fetches a text response from ChatGPT-4V, which is then channeled to the CRESt platform for the final answer.
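A minimal sketch of the image-transfer leg of this workflow is shown below; the on-phone directory, the mirrored-window click coordinates, and the file handling are assumptions made for illustration, not the actual CRESt automation scripts.

```python
import os
import subprocess
import time

import pyautogui

PHONE_DCIM = "/sdcard/DCIM/Camera"   # hypothetical directory on the paired Android phone
LOCAL_DIR = "./captures"


def tap_import_button(x=540, y=1650):
    """Simulate the in-app gesture on the scrcpy-mirrored window; coordinates are placeholders."""
    pyautogui.click(x, y)
    time.sleep(2)


def pull_latest_image():
    """Pull the newest photo from the phone to the PC via ADB (assumes adb is on PATH)."""
    os.makedirs(LOCAL_DIR, exist_ok=True)
    listing = subprocess.run(["adb", "shell", "ls", "-t", PHONE_DCIM],
                             capture_output=True, text=True, check=True)
    newest = listing.stdout.split()[0]
    subprocess.run(["adb", "pull", f"{PHONE_DCIM}/{newest}", LOCAL_DIR], check=True)
    return os.path.join(LOCAL_DIR, newest)
```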
655417d1dbd7c8b54b477786
9
Figure Schematic of the architecture for the pipeline diagnostic system. Automation scripts using PyAutoGUI and ADB were utilized to facilitate the image transfer process, from the internal memory of Meta Smart Glasses to a PC. A visual agent, utilizing the ChatGPT-4-Vision model, is integrated alongside CRESt to assist in image analysis, and the extracted data is then relayed back to CRESt for comprehensive processing.
655417d1dbd7c8b54b477786
10
Autonomous Scanning Electron Microscope (updated on Nov 10, 2023) With the GPT-4V API released on Nov 6, 2023, we were able to build an example prototype of the instrument coach discussed in the outlook: the autonomous Scanning Electron Microscope (SEM). SEM is one of the most commonly used characterization instruments in natural-science research. A standard SEM may require a dedicated room and comes with hundreds of pages of detailed operation manuals. This entails a high learning cost, from understanding its working principles to mastering operational skills. However, for most scientific researchers, these learning costs may become less necessary in the upcoming AI era. In the near future, researchers will only need to describe the kind of SEM image they need in natural language, similar to what is demonstrated in this demo video. Once the researcher states the final objective, no further intervention is required: the AI will independently iterate and eventually deliver a satisfactory SEM image.
The entire workflow is realized through the collaboration of three GPT agents, as shown in Figure . The CRESt interaction layer uses the OpenAI GPT-4 model with function-call capability; it handles direct voice interaction with users and, at appropriate times, calls upon the SEM agent and relays the user's natural-language instructions. The SEM agent, also based on the GPT-4 model, forms the core of the architecture: upon receiving user instructions, it begins operating the Phenom Pharos, a Python-controllable SEM. After automatic focusing and contrast/brightness adjustment, the SEM agent transmits the captured SEM images to a vision-capable agent (expected to be merged into the SEM agent in the future, once the OpenAI API permits). The vision agent extracts information from the images, processed with the help of Set-of-Mark (SoM) prompting, and returns next-step operational suggestions to the SEM agent. This cycle iterates until the vision agent determines that the current SEM image fulfills the user's objective, at which point the SEM agent presents the final SEM image and a summary report to CRESt.

This autonomous SEM still has significant room for improvement in both functionality and performance, but our preliminary experiments have already yielded some fascinating insights. First, we were very impressed to see that the default GPT-4V can indeed identify a martensite region without any additional information beyond the mention of the term. We anticipate that, with more specialized data provided during fine-tuning, this SEM agent could be deployed in practical research production. Second, the integration of the SoM module significantly enhances the results from the visual agent. Before the SoM layer was added, LLMs could only refer to areas of SEM images imprecisely, such as "upper right" or "bottom left," making it difficult for the SEM agent to navigate accurately to the desired region of interest (ROI). With the SoM layer introduced, as shown in Figure , the visual agent can explicitly refer to the region under discussion. However, caution is needed when using SoM. We observed that relying solely on SoM-processed images hinders GPT-4V's ability to identify the martensite region. On closer analysis, we realized that GPT-4V identifies martensite from the description "needle-like," which worked well on the original SEM image; on the SoM-treated image, however, it looks for a region with a needle-like boundary (e.g., Figure region p) rather than needle-like morphology inside the region (e.g., Figure regions a/d/g). This issue was resolved when both the original and SoM-processed images were provided. Lastly, we explored the agent's quantification ability. One of our expectations for the visual agent is that it estimate the coordinates of the ROI so that the SEM agent can center the ROI and zoom in. Initially, we followed a human's approach of estimating distances from the scale bar in the bottom-left corner, but the results were never satisfactory. Surprisingly, we found that expressing distances as fractions of the field width (to the visual agent, the width of the whole image), e.g., "one fourth of the FW," effectively addressed this challenge. A simplified sketch of this agent loop is shown below.
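To make the division of labor concrete, here is a simplified sketch of the SEM-agent/vision-agent loop. It is illustrative rather than production code: the SEM-side helpers (autofocus, acquisition, stage moves) and the SoM overlay are hypothetical stand-ins for the Phenom Pharos Python interface and the SoM toolkit, and the model name is simply the vision-capable model available at the time of writing.

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


# --- Hypothetical stand-ins for the Phenom Pharos Python interface and SoM toolkit ---
def sem_autofocus_and_acquire() -> str:
    """Autofocus, auto-adjust contrast/brightness, save an image, return its file path."""
    ...


def sem_apply_action(action: str) -> None:
    """Translate a suggestion such as 'move right by 1/4 FW and zoom 2x' into stage/optics commands."""
    ...


def apply_som_overlay(image_path: str) -> str:
    """Stand-in for Set-of-Mark preprocessing; returns the path of the labeled image."""
    ...


def _encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()


def vision_agent_step(objective: str, raw: str, marked: str) -> str:
    """Send BOTH the raw and SoM-marked frames; ask for 'DONE' or a single next action.
    Distances are requested as fractions of the field width, which we found far more
    reliable than asking the model to read the scale bar."""
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # vision-capable model available at the time
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    f"Objective: {objective}\n"
                    "The first image is the raw SEM frame; the second carries Set-of-Mark labels. "
                    "If the frame satisfies the objective, reply 'DONE'. Otherwise reply with one "
                    "action, expressing any distances as fractions of the field width (FW)."
                )},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{_encode(raw)}"}},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{_encode(marked)}"}},
            ],
        }],
    )
    return response.choices[0].message.content or ""


def autonomous_sem(objective: str, max_iters: int = 10) -> str:
    """Iterate acquire -> assess -> act until the vision agent is satisfied."""
    image_path = sem_autofocus_and_acquire()
    for _ in range(max_iters):
        suggestion = vision_agent_step(objective, image_path, apply_som_overlay(image_path))
        if suggestion.strip().upper().startswith("DONE"):
            break  # final image and a summary report are handed back to CRESt
        sem_apply_action(suggestion)
        image_path = sem_autofocus_and_acquire()
    return image_path
```

Keeping the vision call separate from the SEM agent mirrors the API limitation noted above; once a single model can both call functions and accept images, the two roles can collapse into one agent.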
One of us (AG) has been fortunate to have known the late F. Ann Walker for pretty much all of his professional life. Her many talks on advanced NMR and EPR studies of heme proteins at national American Chemical Society meetings and at Gordon Research Conferences made a profound impact on him and on the entire heme protein and model compound community. Her papers, review articles, and book chapters likewise survive as exemplars of scholarship. Add to that an exemplary record of service (perhaps most notably as long-time Associate Editor of JACS) and of mentoring, and you have a rare role model who inspired multiple generations of bioinorganic chemists. Below are a few first-person reminiscences.
Ann and I crossed paths when we both started studying corroles at the start of this century. She drew on her expertise in heme NMR spectroscopy to conclude that FeCl corroles were best described as Fe(III)Cl corroles as opposed to Fe(IV) corrole(3-) complexes. We had reached the same conclusion in our laboratory based on DFT calculations. The proposal met some resistance, but Ann remained steadfast in her views, masterfully summarizing the evidence in a special issue of this Journal that I edited. Ann was also an avid traveler. Every year, I eagerly awaited her Christmas letter to read about her adventures, often in Latin America, but also in Europe and Africa. I have fond memories of her visit to my laboratory in Arctic Norway. We spent a long day driving around Kvaløya (Whale Island), an island neighboring the city of Tromsø, taking in views of fjords, mountains, and waterfalls and occasionally stopping to whip up Scandinavian shrimp sandwiches.
Compared with metalloporphyrins and metallocorroles, cobalamin and F430 models remain less explored. An important recent development has been the availability of the monoanionic B,C-tetradehydrocorrin (TDC) ligand and its coordination to cobalt and nickel (Figure ). Both metals have yielded complexes formally in the M(I) state. In the nickel case, the neutral complex has been fairly conclusively assigned to a Ni(II)-TDC(•2-) state, but the nature of the neutral Co complex is more subtle. A neutral cobalt corrin has traditionally been thought of as a d8 Co(I) complex; however, advanced ab initio calculations have suggested a more multiconfigurational description. Herein we have examined the low-energy states of C2-symmetrized M[TDC] (M = Co, Ni) complexes in the 0 and ±1 charge states with high-quality density functional theory (DFT) calculations. Symmetrization allowed us to calculate different electron occupancies for the two irreducible representations in question and thereby to determine the relative energetics of metal- versus ligand-centered redox in both the neutral and ionized states of the molecules. Such exercises have a long and successful track record vis-à-vis porphyrin-type molecules and have shed a good deal of light on electronic-structural aspects of manganese porphyrins, low-spin ferrihemes, cobalt dipyrrin-bisphenolates, nickel porphyrins and hydroporphyrins, and metallocorroles (including metal-metal-bonded metallocorrole dimers), as well as on the photoelectron spectra of porphyrins. The present exercise allows for greater certainty in the interpretation of the redox behavior of Co and Ni corrinoids, as well as new insights into metal-ligand interactions in these systems. Our M[TDC] model is a slightly simplified and symmetrized analogue of the system experimentally studied by Lindsey, Nocera, and their associates (Figure ). The simplification consists of merely replacing a meso-p-tolyl group with a phenyl group so as to generate a C2-symmetric model. Two well-tested exchange-correlation functionals were generally used: OLYP and B3LYP*, the latter containing 15% Hartree-Fock exchange relative to B3LYP, which contains 20%. Both functionals were augmented with Grimme's D3 dispersion corrections. A spin-unrestricted formalism and all-electron ZORA STO-TZ2P basis sets were used throughout. Appropriately fine meshes for numerical integration of matrix elements were employed, as were suitably tight criteria for geometry optimizations. As previously observed, the pure functional OLYP exhibits a certain preference for spin-paired states, whereas the hybrid functional B3LYP*-D3 favors a greater degree of spin decoupling, i.e., classic behavior for the two classes of functionals. These results point to a multiconfigurational ground state for Co[TDC], and indeed for cobalamin and "Co(I)" corrinoids in general, with several low-energy excited states. As for why the TDC ligand fails to decisively stabilize a true Co(I) state relative to a Co(II)-ligand-radical description, the ligand's own electron affinity offers a clue.
Thus, the adiabatic electron affinity (EAa) of K[TDC] (with a redox-inactive K+ ion at its core) turned out to be 1.9 eV, far higher than that of a typical electronically innocent metalloporphyrin, for which values hover around 1.2-1.3 eV. The electronic descriptions of the ionized states of Co[TDC] appear much more straightforward than that of the neutral state: the lowest-energy cation (denoted C1) is a straightforward low-spin Co(II) species with a singly occupied dz2 orbital. For nickel, the neutral complex is best described as Ni(II)-TDC(•2-), with the Ni(I) state about 1 eV higher in energy. In this respect, Ni[TDC] does not mimic cofactor F430 as well as ligands such as isobacteriochlorin, oxaporphyrin, and thiaporphyrin, which do stabilize the Ni(I) state. Remarkably, the B,C-tetradehydrocorrin ligand also appears to differ from Ni corrin and other Ni dehydrocorrins, which are thought to stabilize an Ni(I) state. Our calculations also indicate unambiguous ligand-centered oxidation and reduction for nickel: the {Ni[TDC]}+ cation is a straightforward, low-spin Ni(II) species, whereas the anion is clearly describable as Ni(II)-L(3-). For both ionized states, the triplet states lie significantly higher in energy than the singlet ground states.
Figure : Bottom right, canonical MO best described as the SOMO.
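For reference, the adiabatic electron affinity quoted above is understood here in the usual working sense (a standard definition, with the sign convention that a larger positive value indicates a more readily reduced species):

EAa = E(neutral species at its optimized geometry) - E(anion at its optimized geometry)

On this convention, the 1.9 eV value computed for K[TDC] directly quantifies how strongly the tetradehydrocorrin ligand itself accepts an electron, compared with the 1.2-1.3 eV typical of innocent metalloporphyrins.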
In summary, high-quality DFT calculations and judicious use of group theory have led to significant insights into the question of metal- versus ligand-centered redox in Co and Ni tetradehydrocorrin complexes. For the +1 states, both metals occur in their low-spin M(II) forms. In contrast, the charge-neutral states differ for the two metals: while the Co(I) and Co(II)-TDC(•2-) states are comparable in energy for cobalt, a Ni(II)-TDC(•2-) state is clearly preferred for nickel. The latter behavior may be contrasted with that of other corrinoids, which reportedly do stabilize a Ni(I) center.
Asthma is a chronic respiratory condition characterized by inflammation and narrowing of the airways, resulting in breathing difficulties. It manifests as wheezing, shortness of breath, chest tightness, and coughing, particularly at night or during physical activity (Global Initiative for Asthma). Asthma affects individuals of all ages, with varying degrees of severity, and is often triggered by environmental factors such as allergens, air pollution, and respiratory infections.
Studying asthma epidemiology is crucial for several reasons. First, it helps identify the burden of asthma on public health systems, informing resource allocation and management strategies. Additionally, understanding the patterns and risk factors associated with asthma can guide preventive measures and the development of targeted interventions, ultimately improving patient outcomes.
Asthma prevalence in Africa varies widely across regions, influenced by environmental, genetic, and socioeconomic factors. Recent studies indicate that asthma affects approximately 10-20% of the population in some urban areas, with significant differences between rural and urban communities. Factors such as exposure to allergens, air pollution, and limited access to healthcare contribute to the high burden of asthma on the continent. Understanding these regional variations is essential for developing effective public health strategies to combat asthma in Africa.
Asthma is a chronic inflammatory condition of the airways characterized by reversible airflow obstruction, bronchial hyperreactivity, and inflammation. Affecting millions of people globally, asthma causes episodic symptoms such as wheezing, breathlessness, chest tightness, and coughing. These episodes can vary in frequency and severity and can be triggered by various environmental, genetic, and lifestyle factors.
A key feature of asthma is airway remodeling, which involves structural changes in the bronchial walls due to prolonged inflammation. These changes include subepithelial fibrosis, increased smooth muscle mass, and mucus gland hyperplasia, contributing to persistent airflow limitation in severe cases. The hyperreactivity and remodeling result in increased sensitivity of the airways to stimuli that would not affect a non-asthmatic individual.
Risk factors for developing asthma include a family history of asthma or other atopic diseases, such as eczema or allergic rhinitis. Occupational exposures to irritants like chemicals and dusts are also linked to the development of asthma in adults. In children, exposure to tobacco smoke, especially in utero or early childhood, increases the risk of asthma development.