id | idx | paragraph |
---|---|---|
6704530012ff75c3a1c36ad8 | 12 | The parameters of the DC functional optimized with each of the four methods and with the original method are given in Table ; the entries for method 4 are shown for m = 0.96 and are labeled DC24, which will be justified below. The parameters for methods other than method 4 with m = 0.96 are shown (at the request of a reviewer) only to illustrate the effect of modifying the definition of D(r) on the functional parameters; we do not recommend using these parameters for applications. (Table footnote b: new method 4 with m = 0.96.) |
6704530012ff75c3a1c36ad8 | 13 | The accuracy of the four new methods is compared to that of the original method in Table . The signed errors for the individual data points can be found in Table in the Supporting Information. As shown in Table , all four methods have consistently slightly better accuracy for barrier heights than for bond energies. In the discussion below, we focus on the overall MUE. In all four methods, we ensure that the total density ρ(r) is the sum of the effective spin densities ρ̃_α(r) and ρ̃_β(r). This is the only improvement in method 1, which gives a slight improvement in the overall MUE. Method 2, introduced as a workaround to overcome negative values of the effective spin density, does not improve the accuracy of the DC functional. In methods 3 and 4, we not only conserve the total density ρ(r), but we also have more physically well-defined ρ̃_α(r) and ρ̃_β(r). All four methods have errors very similar to those of the original method, with only minor differences, but as a side benefit of giving the effective spin densities ρ̃_α(r) and ρ̃_β(r) a clearer physical meaning, new methods 1, 3, and 4 show a slight improvement in the overall MUE, especially methods 3 and 4, for which a 9% decrease in overall MUE is observed. |
6704530012ff75c3a1c36ad8 | 14 | Figure (a) compares the overall MUE of method 4 with various m values. The functional parameters are optimized for each m value. As an overall trend, the MUE decreases as m increases, with m = 1 (i.e., method 3) having the lowest MUE. For practical purposes, it is desirable to have an m value that gives a low MUE while having a numerically stable gradient. By analyzing Figures and (a), we empirically selected m = 0.96 as the best-compromise value. Figure shows that the MUE does not improve very much in going from m = 0.96 to m = 1, while Figure shows that f(x) is reasonably smooth at x = 1 when m = 0.96. We therefore chose new method 4 with m = 0.96 as our new functional, and we name it DC24. |
6704530012ff75c3a1c36ad8 | 15 | For comparison purposes, at the request of a reviewer, Figure (b) shows the overall MUE for variants of method 4 in which m is varied but the parameters are fixed at their values for method 3. The curve in panel (b) is smoother than that in panel (a), but the MUE is much higher at smaller m values because the functional parameters are not consistently optimized. |
6704530012ff75c3a1c36ad8 | 16 | In this study, we evaluated four new functional forms for constructing DC functionals. As a result of this evaluation, we presented a new functional called DC24 that supersedes our two previous DC functionals. Compared to our previous functionals, DC24 has improved accuracy and improved physical interpretation. It has an MUE of 1.73 kcal/mol over the database used for parameterization, which is a 9% improvement over the earlier version of the DC functional. It also has a much clearer physical meaning than the earlier version and has the advantage of a continuous functional derivative. |
6704530012ff75c3a1c36ad8 | 17 | The development of the new functional also leads to a new expression for the unpaired electron density. The unpaired electron density is an interpretative tool of quantum chemistry, and the literature already contains multiple definitions with various pros and cons. The new functional form presented here, which is the combination of eqs 5 and 13, has the feature that the unpaired electron density at a point in space is always less than or equal to the total density at that point, which is a clearly desirable constraint. This definition of the unpaired electron density may be useful in other contexts as well as for the purpose for which it is used here. |
6704530012ff75c3a1c36ad8 | 18 | We recommend using DC24 for future DC functional applications, and it can also serve as a starting point for the development of even better DC functionals, for example by adding other ingredients like kinetic energy density, which has been very successful in Kohn-Sham theory. While the current work provides a proof of concept of a new functional form for DC functionals, the training set is not diverse enough for broadly accurate parameterization, and more work needs to be done to develop accurate DC functionals for general applications, including designing more physical and flexible functional forms and performing functional training over a more diverse database. |
6704530012ff75c3a1c36ad8 | 19 | We can also prove that f(x) ≤ x for all 0 ≤ x ≤ 2 and 0 < m ≤ 1. The first derivative of f(x) at x = 0 is always less than or equal to 1, with equality if and only if m = 1, as shown by the inequality of arithmetic and geometric means. |
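For reference, a short statement of the inequality of arithmetic and geometric means invoked in this argument, written for two nonnegative quantities; which quantities it is applied to depends on the explicit form of f(x) (eqs 5 and 13), which is not reproduced in this excerpt.

```latex
% Arithmetic-geometric mean inequality for nonnegative a and b:
\sqrt{ab} \;\le\; \frac{a + b}{2}, \qquad a, b \ge 0,
\quad \text{with equality if and only if } a = b .
```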
67a2399481d2151a02276602 | 0 | Cyclic anhydrides are useful molecules, used either as intermediates to platform chemicals or as monomers for the production of polyesters. The current industrial production of cyclic anhydrides is based on the oxidation of fossil feedstocks, as exemplified by succinic anhydride (SA), which is mainly derived from n-butane. Alternatively, cyclic anhydrides can be obtained by the catalytic carbonylation of oxygenated substrates: epoxides, β-lactones, and α,β-,12 β,γ-, or γ,δ-unsaturated acids. In particular, SA can be obtained from β-propiolactone (PL) or acrylic acid (AA), which have the potential to be bio-sourced through intermediates like glycerol, lactic acid, 3-hydroxypropanoic acid (3-HPA), or ethanol (Scheme 1). Provided this carbonylation takes place with carbon monoxide produced by reduction of carbon dioxide or gasification of biomass, it would mark a step forward towards sourcing the carbon skeleton of SA from renewable feedstocks (i.e., biomass or chemical waste). After a first attempt by Tsuji in 1969 that afforded a moderate yield of anhydride under harsh conditions (100 bar CO, 150 °C, Scheme 2a), efforts have focused on incorporating a Lewis acid in order to activate the lactone and enable its carbonylation. The group of Coates has designed the state-of-the-art catalyst systems for this reactivity: an ion pair composed of [Co(CO)4]- and a metal-based Lewis acidic cation (Scheme 2b). As a fine-tuned catalyst was required to afford a selective carbonylation process, the aluminium- or chromium-based cations were designed by coordination to substituted tetradentate salen or porphyrin ligands, which require multiple steps of synthesis and purification. The high activity of the synthesized species comes at the price of its sophistication. |
67a2399481d2151a02276602 | 1 | In turn, we envisioned that the in-situ combination of [Co2(CO)8] with a Lewis base (LB) could lead to its disproportionation, and thus to the generation of an active species for the carbonylation of PL, with the formula: Lewis acidic (di)cation + [Co(CO)4]- (Scheme 2c). The effect of Lewis basic additives on cobalt carbonyl-catalyzed carbonylations has been studied since the 1950s, with varied results according to their coordination strength, their lability, their Brønsted basicity, or their concentration. Although their exact role in the catalysis was not always clear, the authors suggested that these Lewis bases could promote the formation of the active species from the precursor [Co2(CO)8], i.e., [HCo(CO)4] or [Co(CO)4]-, and/or stabilize some reaction intermediates. The speciation of the complexes resulting from the coordination of the LB to [Co2(CO)8] has been established for some LB:19 for instance, phosphines may yield [Co(CO)3(LB)2]+ + [Co(CO)4]- (along with the substitution complex [Co(CO)3(LB)]2),32 whereas pyridine, N,N-dimethylformamide (DMF) or acetonitrile lead to [Co(LB)6]2+ + 2[Co(CO)4]-. It is noteworthy that this does not necessarily inform on the nature of the exact active species during the carbonylation, because of the many possible equilibria between cobalt complexes. In the past few years, efforts have been made to identify them thanks to in-situ IR spectroscopy. In 2021, the group of Dong showed that the active species for the carbonylation of thietane arises from the disproportionation of [Co2(CO)8] by the substrate itself, and comprises an unidentified Co2+ dication alongside [Co(CO)4]-. In 2024, the groups of Gusevskaya and Beller described the catalytic performance of [Co2(CO)8]/OPCy3 for the hydroformylation of epoxides: their IR studies confirmed the promoting effect of OPCy3 on the formation of the active species [HCo(CO)4], but they could only hypothesize on the structure of the disproportionation product, [Co(LB)6][Co(CO)4]2. Another example was provided by the group of Alexanian, who reported a photochemical version of the hydroaminocarbonylation of alkenes: they proposed that [Co2(CO)8] would be disproportionated by the amine substrate, yielding the ion pair [cation]+[Co(CO)4]-, followed by its light-promoted decarbonylation to the active species [Co(CO)3]-. These recent works also show a renewed interest in generating active species from the disproportionation of [Co2(CO)8]. |
67a2399481d2151a02276602 | 2 | In 1962, Iwanaga and coworkers published two studies in which they were able to titrate [Co(CO)4]- from mixtures of [Co2(CO)8] in methanol, ethanol, ethyl acetate, acetone, THF, dioxane, or acetonitrile. This suggests that all of these solvents are able to disproportionate [Co2(CO)8] to some extent. We reasoned that a sufficiently Lewis basic solvent would be able to generate [Co(CO)4]- as well as a cationic cobalt species exhibiting Lewis acidity. We screened multiple solvents, including some classified as 'recommended' with regard to their environmental impact,38 in order to tune the Lewis acidity of the in-situ formed cobalt (di)cation, and applied our catalytic system to the carbonylation of β-lactones (Scheme 2). A screening of ten solvents in the presence of catalytic amounts of the commercial precursor [Co2(CO)8] was thus run for the carbonylation of β-propiolactone (PL) to succinic anhydride (SA). Our study also includes acrylic acid (AA), a substrate more challenging than PL despite the two being isomers, whose catalytic cyclizing carbonylation was recently achieved by our group. The scope was also extended to substituted derivatives of PL and AA: β-butyrolactone (BL), crotonic acid (CA), and methacrylic acid (MA). |
67a2399481d2151a02276602 | 3 | The methodology of this study relies on the screening of three CO pressures (5, 15 and 50 bar) and ten common organic solvents: toluene, anisole, acetonitrile (MeCN), dimethyl carbonate (DMC), 1,4-dioxane, dimethoxyethane (DME), ethyl acetate (EtOAc), acetone, tetrahydrofuran (THF), and N,N-dimethylformamide (DMF). There is an interplay between the CO pressure and the nature of the solvent for the generation of a potential active species (Scheme 3). While this step depends of course on the coordination properties of the LB, a balance must also be found in terms of CO pressure: CO may be released during the disproportionation, which is therefore favoured at lower pressure, but CO pressure is simultaneously required for the carbonylation itself. |
67a2399481d2151a02276602 | 4 | The screening is reported in Table . Gratifyingly, not only is some catalytic activity observed in virtually all solvents tested, but a quantitative yield of SA is obtained in MeCN under 15 bar of CO (Table , entry 8). Some general trends can be extracted from this screening. In toluene, anisole and DMC, the yield of SA and the selectivity increase with decreasing CO pressure: under 5 bar, 38 to 51% of SA is obtained, with a selectivity of up to 95% (Table , entries 1-6, 10-12). Notably, the activity is completely suppressed at a CO pressure of 50 bar. In acetone, dioxane and EtOAc, the activity is enhanced at low pressure, but the selectivity increases with pressure (Table , entries 13-15, 19-24). In DME, yields average around 70-75%, but higher selectivity is achieved at high CO pressure (Table , entries 16-18, 28-30). |
67a2399481d2151a02276602 | 5 | Considering its performance, DMF is not suitable for this reaction (less than 5% yield and very low selectivity; Table , entries 28-30). In MeCN and THF, the optimal reactivity is obtained under moderate pressure (15 bar, Table , entries 7-9, 25-27). a Conditions: PL (0.5 mol·L-1) and mesitylene (internal standard, 10 mol%) with [Co2(CO)8] (5 mol%) in the solvent (2 mL), heated for 6 h under CO pressure. PL conversions and SA yields were measured by GC-MS analysis; the selectivity was calculated as the ratio between yield and conversion. Due to the uncertainty of the analysis method, conversions lower than the yields can be obtained; in that case, the selectivity was reported as >99%. Selectivity was not reported and was noted "/" when the conversion was <1%. |
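As a small illustration of the bookkeeping described in this footnote, the following hypothetical helper computes conversion, yield, and selectivity (yield/conversion) and applies the ">99%" and "/" reporting conventions; the amounts in the example are invented, not values from the tables.

```python
def selectivity_report(pl_initial_mmol, pl_final_mmol, sa_mmol):
    """Conversion, yield and selectivity as defined in the table footnote:
    selectivity = yield / conversion, reported as '>99%' when analytical
    uncertainty makes the measured yield exceed the conversion, and '/'
    when the conversion is below 1%."""
    conversion = (pl_initial_mmol - pl_final_mmol) / pl_initial_mmol * 100.0
    yield_pct = sa_mmol / pl_initial_mmol * 100.0
    if conversion < 1.0:
        return conversion, yield_pct, "/"
    if yield_pct > conversion:
        return conversion, yield_pct, ">99%"
    return conversion, yield_pct, f"{100.0 * yield_pct / conversion:.0f}%"

# Hypothetical example: 1.00 mmol PL charged, 0.05 mmol left, 0.90 mmol SA formed.
print(selectivity_report(1.00, 0.05, 0.90))  # ~95% conversion, 90% yield, ~95% selectivity
```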
67a2399481d2151a02276602 | 6 | An additional experiment supports the claim that MeCN is a suitable solvent owing to more than just its solvation properties: when 5 vol% of MeCN is added to toluene, 57% of SA with 92% selectivity is obtained, a yield significantly higher than the one obtained in toluene alone under similar conditions (4%, Table ). To conclude, the trends observed are consistent with the mechanistic model proposed in Scheme 3, where the Lewis base may either be the solvent or the substrate, and where the disproportionation of [Co2(CO)8] is a key step for the generation of an active species: no carbonylation activity is observed under the conditions where the disproportionation of the precursor [Co2(CO)8] is less favoured, i.e., in a noncoordinating solvent and at high CO pressure (see Table , entries 3 and 6). |
67a2399481d2151a02276602 | 7 | Further assessment of the performance of the best catalytic system obtained so far, i.e., MeCN/15 bar, was undertaken, starting with the effect of temperature (Table ). A higher temperature increased the activity of the catalytic system: 83% of SA was obtained after only 1 h at 110 °C (Table , entry 4), but after 3 h, the PL was entirely converted while the yield stagnated at 87% (Table , entry 3). The decrease in selectivity was even more pronounced at 130 °C, to the point where the yield of SA reached barely 44% after 1 h (Table , entry 6) and 54% after 3 h (Table , entry 7). |
67a2399481d2151a02276602 | 8 | The selectivity of the carbonylation of PL to SA is thus quite sensitive to the temperature. Interestingly, increasing the CO pressure prevented this selectivity drop: 95% of SA was afforded at 130 °C after 3 h under 50 bar of CO (Table , entry 8). The latter result indicates the interdependency between pressure and temperature. Finally, some catalytic activity was observed at temperatures as low as 70 °C and 50 °C (82% yield of SA after 65 h and 51% yield of SA after 65 h, respectively; see S.I. for details). |
67a2399481d2151a02276602 | 9 | A lower catalyst loading slows down the reaction but maintains the selectivity towards succinic anhydride (Figure ). 92% of SA was obtained after 16 h with 2.5 mol% of [Co2(CO)8], and 60% after 72 h with 1 mol%, with high selectivity in each experiment (>87%). Scale-up experiments were successfully conducted in large-volume autoclaves, proving the applicability of this method up to the gram scale. An experiment with 5 mmol (360 mg) of PL afforded 90% of the desired product. When 10 mmol (720 mg) of PL were engaged in catalysis, 83% of SA was obtained after 8 h. A similarly good yield was obtained within a few days when only 1 mol% of [Co2(CO)8] was introduced. High selectivity was achieved in each case (see S.I. for the detailed procedure). |
67a2399481d2151a02276602 | 10 | To extend the scope of this reaction, the carbonylation of β-butyrolactone (BL) to methylsuccinic anhydride (MeSA) was conducted (Table ). Our simple [Co2(CO)8]/MeCN system was unfortunately unable to match the performance of the well-defined catalysts developed by Coates and co-workers for the carbonylation of substituted lactones to cyclic anhydrides. Our previously optimised conditions for PL proved inefficient for BL (3% MeSA, Table , entry 1). Increasing the temperature to 110 °C improved the yield, but longer reaction times were needed to achieve complete conversion (Table , entries 2-4). The selectivity was however limited, and a maximum yield of 62% of MeSA was afforded after 72 h. Heating to 130 °C decreased the selectivity further, similar to what was already observed for PL (Table , entry 5, and see Table ). Increasing the CO pressure improved the selectivity only in the short term (Table , entries 6-7), going from >99% to 63% after 16 and 72 h, respectively. Combining high pressure and high temperature did not improve the performance of the system. In particular, contrary to what was observed with PL, a higher pressure did not compensate for the selectivity loss caused by a high reaction temperature (Table , entries 8-9). This may be explained, at least in part, by a side reaction of decarboxylation of BL to form propylene and CO2, which were observed in the gas phase by GC when the reaction was run at temperatures higher than 130 °C (see S.I. for details). BL conversions and MeSA yields were measured by GC-MS analysis; the selectivity was calculated as the ratio between yield and conversion. Due to the uncertainty of the analysis method, conversions lower than the yields can be obtained, and in that case, the selectivity was reported as >99%. Selectivity was not reported and was noted "/" when the conversion and/or the yield were <1%. b Traces of glutaric anhydride (GA) detected (1-2%). |
67a2399481d2151a02276602 | 11 | We then turned our attention towards acrylic acid (AA), an isomer of PL and a platform chemical with a global production volume of millions of tons. Carbonylation of AA was attempted by Tsuji et al. in their 1969 study on PL carbonylation. Even though their conditions enabled isomerisation of PL to AA, they showed that AA could not be carbonylated to SA. Similarly, Falbe and co-workers noted during their research on acrylamide carbonylation to succinimide that AA polymerized in the presence of [Co2(CO)8] under a CO atmosphere (300 bar). In a recent study, our group was the first to achieve quantitative cyclizing carbonylation of AA to SA. 12 Capitalizing on our knowledge of this reaction and on the simple system developed herein, the same systematic screening of solvents and pressures already performed for PL was applied to AA and is reported in Table . Overall, the conversions, yields, and selectivities were lower with AA than with PL. The yields and selectivities generally improved at higher pressure, and the best result was obtained in acetone under 50 bar of CO: 65% of SA, with 78% selectivity (Table , entry 24). SA was not detected at all when DMF was used. |
67a2399481d2151a02276602 | 12 | To understand the lower activity and selectivity for the carbonylation of AA compared to PL, further analyses were run to check for side products. After carbonylation runs, propionic acid could be detected in the liquid phase, but in amounts too small (<5%) to make up for the low mass balance recovered when taking only AA and SA into account (see S.I., section 4.1). After carbonylations in toluene, the formation of a purple slurry was also observed, which, after washing and drying under vacuum, afforded a lilac solid, [Co-AA] (see S.I., section 2.2). The IR bands of [Co-AA] suggest that it is a cobalt(II) species, with a structure of the type [CoII(acrylate)2]. The formation of [Co-AA] could thus consume at least 2 equivalents of AA per Co atom, which would explain the lower selectivity towards the carbonylation product (with 5 mol% of [Co2(CO)8], around 20% of AA may be lost). In addition, a lilac solid with similar IR bands was synthesized in less than 30 min by reacting an excess of AA with [Co2(CO)8] under argon (see S.I., section 2.3): the formation of [Co-AA] is thus quite fast compared to the duration of the carbonylation runs, and would be particularly favoured at low CO pressure. a This is consistent with the poor performance at 5 bar of the catalytic systems reported in Table . It is noteworthy that [Co-AA] is not an active complex for the carbonylation of AA. In other words, the substrate itself is prone to deactivating the precursor [Co2(CO)8] (or cobalt carbonyl species formed in-situ) by coordinating to the metal centre as an acrylate. This hints at the difficulty of the carbonylation of AA to SA. |
67a2399481d2151a02276602 | 13 | a [Co-AA] was isolated from a crude mixture in toluene, while the reaction between AA and [Co2(CO)8] under argon was conducted in hexane. Both are non-coordinating solvents. It may be expected that the formation of [Co-AA] would be slowed down (or even prevented) in the presence of a coordinating solvent (or a stabilizing ligand, such as a diphosphine in our previous study, cf. Nicolas & Cantat, ChemCatChem 2023, 15, e202300720). a Conditions: AA (0.5 mol·L-1) and mesitylene (internal standard, 10 mol%) with [Co2(CO)8] (5 mol%) in the solvent (2 mL), heated for 6 h under CO pressure. AA conversions and SA yields were measured by GC-MS analysis; selectivity calculated as the ratio between yield and conversion. n.d.: not determined. Because of overlap with other peaks in the GC trace, the conversion could not always be measured. |
67a2399481d2151a02276602 | 14 | Different levers were tested to achieve higher yields and selectivity for this carbonylation. Applying a higher temperature was counterproductive, and reduced the catalytic activity of the carbonylation (see S.I., section 2.1). In our previous study, 12 the addition of H2 in the gas phase favoured the carbonylation of AA to SA, so catalytic experiments were also run with syngas in the best performing solvents (Table ). Gratifyingly, under a pressure of 50 bar of CO/H2 (90:10), after 6 h of heating at 90 Β°C, an excellent yield of SA was obtained in MeCN (91%, Table , entry 1), and even a quantitative yield was measured in acetone (Table , entry 2). In acetone, lowering the total pressure from 50 to 15 bar reduced the selectivity to 77% (Table , entry 3). |
67a2399481d2151a02276602 | 15 | Lowering the CO partial pressure from 45 to 20 bar, while maintaining the H2 partial pressure constant at 5 bar, was also detrimental to the selectivity toward SA (Table , entry 4). A minimal CO pressure is thus needed, probably either to favour the carbonylation step or to stabilize the active species. The positive influence of H2 may be explained by the formation and stabilisation of cobalt hydride intermediates, which can be proposed as active species. 46 a Conditions: AA (0.5 mol·L-1) and mesitylene (internal standard, 10 mol%) with [Co2(CO)8] (5 mol%) in the solvent (2 mL), heated for 6 h under the desired gas mixture pressure. AA conversions and SA yields were measured by GC-MS analysis; selectivity calculated as the ratio between yield and conversion. |
67a2399481d2151a02276602 | 16 | The catalytic system was applied to two related, yet more sterically hindered, compounds: crotonic acid (CA) and methacrylic acid (MA) (Table ). From the carbonylation of CA, two products may be expected: methylsuccinic anhydride (MeSA) and the 6-membered cyclic anhydride, glutaric anhydride (GA). Only MeSA is obtained when performing the reaction in MeCN under pure CO, and only in poor yields (Table , entries 1-3). In acetone, both anhydrides are formed (Table , entries 4-8). Moderate yield and conversion were obtained in acetone at 90 °C under CO/H2 pressure: 26% of cyclic anhydrides after 6 h, and 47% after 16 h (Table , entries 5 and 6). Increasing the temperature to 110 °C enhanced the activity, with a yield of cyclic anhydrides of 72% after 6 h, while the selectivity was maintained (80%, Table , entry 7, average of two runs). GA is the major product when acetone is used as the solvent, independently of the presence of H2 in the gas phase (Table , entry 4 vs. entries 5-8). This indicates that in acetone the isomerization of cobalt carbonyl intermediates (cobalt-alkyl complexes) is possible, consistent with previously reported results. Increasing the proportion of H2 in the gas phase from 10 vol% to 20 vol% was detrimental to the selectivity of the carbonylation, as it probably enhanced hydrogenation of the substrate (Table , entry 8). The carbonylation of MA to MeSA gave poorer results: only 7% of MeSA was detected after 6 h at 90 °C, whether under pure CO or CO/H2 (Table , entries 10-11). With the best set of conditions obtained so far (110 °C, 50 bar CO/H2), we could only raise the amount of MeSA to 24% (Table , entry 12). Under all conditions presented here, a substantial amount of MA was hydrogenated to isobutyric acid. The competition between hydrogenation and carbonylation partly explains the poor activity of this substrate. Interestingly, when submitting CA to the best set of conditions (110 °C, 6 h, 50 bar CO/H2) in acetonitrile, a similar conversion of CA is obtained but with reversed selectivity: MeSA is obtained in 53% yield, and GA in 25% yield (>99% selectivity towards anhydrides, 68:32 MeSA:GA ratio) (Table , entry 9). As previously reported, 15 the regioselectivity of the carbonylative cyclization of unsaturated acids is quite sensitive to slight modifications of the catalytic system. In our case, the modulation of the selectivity may be achieved simply by changing the reaction solvent. |
67a2399481d2151a02276602 | 17 | Herein we report the carbonylation of β-propiolactone and acrylic acid using [Co2(CO)8] as the catalyst. For PL, we achieved quantitative yields and excellent selectivity in acetonitrile under 15 bar of CO, at 90 °C and with a 6 h reaction time. Scaling up the reaction to 1 g batches does not undermine reactivity. Less toxic, recommended solvents are suitable for this reaction. Our catalytic system also proves efficient for acrylic acid when H2 is added to the gas phase, in which case quantitative yields are obtained. Methylated variants of acrylic acid proved active under the depicted conditions, although the carbonylation proceeded with poorer performance. A better understanding of the reaction mechanism and of the catalyst deactivation pathways, together with in-situ analysis of the species formed during the solvent-induced disproportionation of [Co2(CO)8], would be valuable to further improve this system and extend it to a wider variety of substrates. |
66cc1af4a4e53c4876a51c36 | 0 | Feed is vitally important in the livestock sector to sustain animal health and ensure the production of safe and high-quality products of animal origin . The production of livestock feed is under continuous pressure from food-feed competition, disruptions in feed ingredient supply chains, contamination episodes, variations in nutrient quality of feedstocks and demand for sustainable agricultural practices . Globally, feed production is responsible for an estimated 45% of the greenhouse gas (GHG) emissions of the livestock sector and uses 33% of the total arable land . The projected rise in the demand for animal products increases the urgency of drastically reducing the environmental footprint of livestock feed production . |
66cc1af4a4e53c4876a51c36 | 1 | Animal feed is produced by selecting and combining feed ingredients to create nutritionally optimal mixtures that meet market demands . Increasing sustainability in feed production requires finding a trade-off between several objectives, including economic profitability, socially acceptable practices, and reduced environmental footprints . However, current production primarily focuses on minimizing costs within a quality range ; impacts on the environment are typically not considered. |
66cc1af4a4e53c4876a51c36 | 2 | Several studies have shown that environmental footprints may vary greatly across feed ingredients, due to differences in cultivation, processing, and geographical origin . Castonguay et al. recently showed that trade-offs between environmental impacts and monetary costs may improve feed sustainability within global beef production . Although the environmental impacts of livestock feeds are increasingly assessed through life cycle assessment (LCA) , these impacts are used mainly for regulatory purposes and not integrated into feed optimization. Transparent integration of LCA results in real-time production is therefore necessary to enable decision-makers to design more environmentally sustainable and high-quality feeds. |
66cc1af4a4e53c4876a51c36 | 3 | Ensuring feed quality in industrial agriculture requires detailed chemical knowledge of the nutritional value of the feed ingredients . Ideally, such information is obtained quickly and noninvasively during production and is available at product release. Most laboratory analyses are very costly and too time-consuming for use in real time . Therefore, to determine the nutritional composition of ingredients, feed manufacturers often resort to off-line laboratory measurements through occasional wet analysis or available databases . Process analytical technologies (PATs) based on near-infrared spectroscopy (NIRS) fingerprinting are becoming increasingly available to accurately identify feed ingredients according to their nutritional content in rapid, non-destructive and cost-effective ways . NIRS-based information is often available in (near) real time, which allows for controlling and improving the ongoing process to enable quality assurance even during processing . NIRS fingerprints, however, provide more than just nutritional value information; they can also be used to predict several parameters, such as geographical origin , that are closely related to the environmental footprint of the ingredients. However, such a link between NIRS and quantifying, controlling and improving the environmental sustainability of feed production has not yet been explored. |
66cc1af4a4e53c4876a51c36 | 4 | In this study, we show how NIRS and LCA can be combined to integrate environmental impacts in a transparent optimization framework that can be used to formulate feed mixtures that meet the desired livestock feed quality while minimizing monetary and environmental costs. Predictive machine learning models can be used to link chemical information to relevant feed ingredients properties, such as ingredient-specific nutritional compositions and geographical origin . Having both quality information and origin as intrinsic properties of the feed ingredients allows us to optimize the performance of industrial feed production in economic, environmental and quality terms in real time. |
66cc1af4a4e53c4876a51c36 | 5 | The focus of this study is on feed optimization for pigs and broilers; these feeds are among the top feeds produced in Europe . Optimizing feeds for price and environmental footprint requires three main steps (Fig. ). First, we predict the environmental footprints of feed ingredients by combining multivariate classification and global spatially explicit LCA. In classification, NIRS fingerprints are used to predict the country of origin of the ingredients. These predictions are the basis on which the LCA determines the environmental footprints of crop production. We focus on land stress (i.e., the effect of land occupation and transformation) and climate change, as they are the two key environmental impacts of agricultural production . In our study, the geographical origin is directly linked to the location of the factory where the spectra were measured. This specification is included both in the classification model and in the LCA analysis by considering the transport from the country of origin to the factory location and, if needed, processing into miscible feed ingredients. Second, we predict accurate nutritional compositions of feed ingredients from NIRS fingerprints by using multivariate regression. By incorporating the nutritional composition, environmental footprint, and price of feed ingredients into a multi-objective optimization, we obtain quality-compliant mixture ratios that minimize the environmental and monetary costs of feed production in real time. The optimization step produces Pareto fronts that reveal the effect of nutritional variation on the final feed costs. The technique for order preference by similarity to ideal solution (TOPSIS) then provides trade-off mixture ratios for each front that leverage optimality for the environmental footprint and monetary price under the constraint of quality compliance. Fig. . Workflow of the approach presented in this study. The approach uses the near-infrared (NIR) spectra of eight feed ingredients used in feed production, which were harvested in six different countries of origin and whose spectra were measured at four factory locations. After measurement, the feed ingredients were transported to a common production location where they were mixed into feed. The exact production location was unknown and hence excluded from the study. NIR spectra were assigned to 18 different classes, characterized by feed ingredient, country of origin and measurement location. We predicted environmental footprints by combining multivariate classification with LCA and nutritional content via multivariate regression of each class of feed ingredients from the spectra. The predicted nutritional content was used for 1000 simulations with varying nutritional compositions for each class of feed ingredients. These simulations, together with the predicted environmental footprints, the target feed, and the price of feed ingredients, comprising commodity price and transport cost, became the input for the multi-objective optimization framework. This framework aims to find, for each simulation, the trade-off mixture ratio that minimizes environmental footprints and monetary costs while meeting the quality standards. The figure was created using an existing world map . |
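To make the optimization step concrete, below is a minimal sketch of how a quality-constrained, two-objective (price vs. footprint) feed formulation can be scalarized and solved as a series of linear programs to trace out a Pareto front. All ingredient data (prices, footprints, protein contents) and the single protein constraint are illustrative placeholders, not the values or the full constraint set used in the study.

```python
# Minimal sketch: weighted-sum scalarization of a two-objective feed formulation LP.
import numpy as np
from scipy.optimize import linprog

# Placeholder ingredient data: price (eur/ton), climate footprint (kg CO2-eq/ton),
# and crude protein content (g/kg) for three hypothetical ingredient classes.
price     = np.array([180.0, 250.0, 400.0])
footprint = np.array([300.0, 900.0, 600.0])
protein   = np.array([110.0, 95.0, 480.0])

# Quality constraint: mixture must contain at least 170 g/kg protein (illustrative),
# and the ingredient fractions must sum to one.
A_ub = np.array([-protein])           # -protein @ x <= -170  <=>  protein @ x >= 170
b_ub = np.array([-170.0])
A_eq = np.ones((1, 3))
b_eq = np.array([1.0])

pareto = []
for w in np.linspace(0.0, 1.0, 11):
    # Rescaled weighted sum of the two objectives; sweeping w traces the Pareto front.
    cost = w * price / price.max() + (1.0 - w) * footprint / footprint.max()
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * 3, method="highs")
    if res.success:
        x = res.x
        pareto.append((float(price @ x), float(footprint @ x), x))

for p, f, x in pareto:
    print(f"price={p:7.1f} eur/t  footprint={f:6.1f} kgCO2e/t  mix={np.round(x, 2)}")
```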
66cc1af4a4e53c4876a51c36 | 6 | NIRS fingerprints alone were able to discriminate feed ingredient samples according to their country of origin with high prediction accuracy (Table and Fig. , balanced accuracy = 0.94). These predictions were successfully linked to ingredient-and country-specific environmental footprints, i.e., land stress and climate change (Figs. ). NIRS fingerprints could also predict nutritional variations among and within ingredient groups with generally high accuracy (Fig. , root mean squared error = 1.7-5.5 g/kg), with varying performances depending on the ingredient and nutrient analyzed and the sample size (Figs. ). These predictions served as the basis for thousands of simulations that captured the nutritional variability among and within feed ingredients during feed optimization. |
66cc1af4a4e53c4876a51c36 | 7 | Including nutritional variability among and within feed ingredients in multi-objective optimization leads to mixture ratios that always meet the quality constraints within each Pareto front, for each target feed and environmental indicator (Fig. ). This is essential because using occasional off-line measurements as an indicator of the average ingredient nutritional composition may render the produced feed unsuitable for meeting the animal's nutritional requirements. For example, a mixture ratio optimized from accurate nutritional compositions of feed ingredients was compared with that optimized from off-line measurements (Fig. ). The two optimizations selected similar feed ingredients, but in different ratios and from different countries of origin. The extent to which nutritional requirements are not met when using off-line measurements was dependent on the target feeds and environmental indicators. For instance, broilers need more protein and fat than do pigs (Table ); thus, there is a higher preference for soybean than for barley in broiler feed (Fig. ). Offline determination of ingredient quality generally failed to meet the protein, fat, and starch requirements for pigs or the fat and ash requirements for broilers (Fig. ). The nutritional requirements for pigs were not met for 92% of the simulations, with a median sum of absolute deviations of 8.3 g/kg. For broilers, 74-84% of the simulations did not meet the requirements, with a median sum of absolute deviations of 1.8-3.1 g/kg for land stress and climate change, respectively. |
66cc1af4a4e53c4876a51c36 | 8 | Our findings show that on-line determination of feed ingredient quality is necessary for consistent quality compliance in continuous production. NIRS holds promise for predicting feed compositions that consistently meet the quality requirements in real time and is a viable alternative to more time- and cost-consuming traditional methods based on wet chemical analysis. The advantages of NIRS are further enhanced by its ability to readily authenticate the origin of feed ingredients. Feed authentication is essential for ensuring correct labeling and safety in production and for increasing transparency, traceability and accountability throughout the supply chain 36 . Accurate origin determination ultimately ensures transparent environmental assessment during production, allowing decision-makers to include environmental considerations in feed optimization. |
66cc1af4a4e53c4876a51c36 | 9 | Considering different environmental footprints in feed optimization is crucial for comprehensively evaluating the environmental impact of the produced feed. Our analysis revealed that the choice of environmental indicator, e.g., the impact of land stress on biodiversity or climate change, results in the selection of different feed ingredients from distinct countries of origin (Fig. ). Specifically, the impacts of land stress on biodiversity are decisive for origin selection, particularly for barley and wheat for pig feed (Fig. ), and for soybean for broiler feed (Fig. ). While barley from Great Britain was primarily chosen for optimizing pig feed when considering land stress, barley from Ukraine was also selected as a viable option when considering climate change. The climate change impacts for barley are similar for both countries; however, the impacts of land stress on biodiversity are roughly seven times greater for barley from Ukraine than for that from Great Britain (Figs. ). For broiler feed, soybean was selected more often from Canada when optimizing for land stress impacts due to the roughly two times greater impacts of land stress on biodiversity for Ukrainian soybean. |
66cc1af4a4e53c4876a51c36 | 10 | Remarkably, the availability of environmental impact information during optimization allows feed ingredients to be selected from those with similar prices and nutritional compositions, while ensuring the lowest environmental footprint for the considered feed. For instance, in our framework, corn from Brazil was rarely selected due to both the associated high impact of land stress on biodiversity and climate change (Fig. ). Despite having similar prices, the footprint for land stress is nine times greater and for climate change is three times greater for corn harvested in Brazil than for that harvested in Ukraine. Hence, due to its lower environmental impact, Ukrainian corn is a more profitable choice for both environmental and monetary costs than Brazilian corn (Fig. ). |
66cc1af4a4e53c4876a51c36 | 11 | Our findings show the importance of including environmental impacts in selecting feed ingredients for production and suggest that various footprints should be considered to avoid burden shifting, e.g., when a mixture ratio with low climate change but high land stress impacts is selected. For a more comprehensive evaluation, uncertainty in the footprint calculations may also be considered. For example, the effect of including the loss of carbon in agricultural land compared to natural vegetation was also evaluated (Figs. ). Including carbon loss from land-use change resulted in feed ingredients being selected more often from certain origins for climate change (e.g., rapeseed meal from Germany instead of Ukraine). However, when optimizing for land stress, this inclusion did not result in substantially different ingredient selection (Figs. ). |
66cc1af4a4e53c4876a51c36 | 12 | The extent to which footprint reductions are possible by considering environmental costs during optimization can be understood by evaluating the obtained trade-off mixture ratios against feed ingredient mixtures that minimize only the feed price within each Pareto front (Fig. ). Fig. shows that accounting for trade-offs between environmental and monetary costs results in relatively large reductions in the environmental footprint at only marginally increased feed prices. The degree to which this occurs varies depending on the target feed and environmental indicator. |
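The trade-off mixture ratios discussed here are the ones selected by TOPSIS on each Pareto front (as introduced in the workflow above). Below is a minimal sketch of that selection for an illustrative two-criterion front; the example points, the equal criterion weights, and the assumption that both criteria are minimized are placeholders, not the study's settings.

```python
import numpy as np

def topsis(front, weights):
    """Rank Pareto-front points (rows = solutions, cols = criteria to minimize)."""
    m = np.asarray(front, dtype=float)
    # Vector-normalize each criterion, then apply the criterion weights.
    v = weights * m / np.linalg.norm(m, axis=0)
    # For minimized criteria the ideal point is the column-wise minimum,
    # the anti-ideal point the column-wise maximum.
    ideal, anti = v.min(axis=0), v.max(axis=0)
    d_plus  = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti,  axis=1)
    return d_minus / (d_plus + d_minus)   # highest closeness = best trade-off

# Illustrative Pareto front: (price, footprint) pairs for candidate mixture ratios.
front = [(200.0, 900.0), (205.0, 600.0), (230.0, 550.0), (300.0, 540.0)]
scores = topsis(front, weights=np.array([0.5, 0.5]))
best = int(np.argmax(scores))
print("TOPSIS closeness:", np.round(scores, 3), "-> trade-off mixture:", front[best])
```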
66cc1af4a4e53c4876a51c36 | 13 | The largest footprint reductions were observed when optimizing for the impacts of land stress on biodiversity loss, with median reductions of 39% and 34% against a median increased price of 1.1% and 2.4% for pig and broiler feed, respectively (Fig. ). Optimizing for climate change resulted in lower median environmental reductions of 5.7% and 3.3%, with lower median price increases of 0.82% and 0.92% for pigs and broilers, respectively (Fig. ). Compared with climate change, land stress impacts showed more variance within and among the countries of origin (Figs. ). For this reason, greater potential footprint reductions are expected with land stress than with climate change. The optimization of pig feed resulted in a larger footprint reduction compared to that of broiler feed. This may be attributed to the higher protein and fat requirements for broilers than for pigs (Table ). |
66cc1af4a4e53c4876a51c36 | 14 | Our findings reveal that factoring environmental costs into optimization is essential for increasing the environmental sustainability of feed production. Specifically, the large observed reductions in land stress impacts suggest that including this impact category in optimization is crucial for reducing the impact of livestock feed production on biodiversity loss. This finding is remarkable considering that the impacts on biodiversity are often underappreciated and unaccounted for in industrial livestock systems , even compared to the more frequently estimated carbon emissions . Making these estimates available through combining NIRS and LCA in feed optimization therefore offers the opportunity for multifaceted environmental value creation in the livestock business model. |
66cc1af4a4e53c4876a51c36 | 15 | By revealing the -as yet hidden -environmental costs during feed optimization, our approach provides decision-makers with an understanding of sustainability as a quantifiable property that can be controlled in real-time production, thus promoting increased environmental awareness and more responsible production patterns. By extracting valuable information on ingredient quality, geographical origin and other relevant attributes, such as ingredient shelf life, chemical hazards, and agronomic practices , NIRS data can be used to control and optimize industrial processes towards increased safety and sustainability with consistent quality. Merging NIRS technology and LCA therefore has high potential for reducing environmental impacts throughout the processing and manufacturing industry, especially when the effects of process control on aspects such as energy usage, pollution and feedstock use are transparent. |
66cc1af4a4e53c4876a51c36 | 16 | Incorporating ad hoc spectral pre-processing strategies and a sufficiently large number of samples is necessary to cover the high variability observed in industrial processes due to measurement changes, seasonal variability and feedstock changes. An insufficient number of samples can increase the possibility of spectral artefacts interfering with NIRS fingerprints , thereby reducing the predictive power when modeling certain ingredients and nutrients. In our study, this was noted, e.g., for predicted fiber from soybean (Fig. ). This finding emphasizes the need to include large sample sizes in industrial settings. Such operational expenditures in model building and maintenance are needed to attain the ability to predict valuable process information. |
66cc1af4a4e53c4876a51c36 | 17 | Expanding the current coverage of the study to include more constraints and ingredients, many of which have been studied by NIRS , is possible with our approach. To create a more diverse and variable ingredient portfolio, the inclusion of the local availability of ingredients at the production site could be added as a model constraint; however, such information was unavailable at the time of analysis. The feed ratios shown in this study are thus possible only if comparable ingredients are available and if they are produced in a sufficiently large amount to meet the demand. Analogously, our approach may better quantify the footprint and thereby further optimize it through trade-off mixture ratios, with greater diversification of ingredient provenances. Data harmonization from different measurement locations does require robust analytical quality control, such that the difference in measurement location can be unambiguously distinguished from the geographical origin for all the feed ingredients. Greater transparency could be achieved through a minor addition in the data collection to integrate more feed ingredients from different origins for each location (e.g., as for sunflower meal, Table , Fig. ). Furthermore, increasing the geographical resolution of the LCA from country to region would better include regional agricultural practices involved in crop cultivation, which is particularly relevant for large and heterogeneous countries such as Brazil and Ukraine. |
66cc1af4a4e53c4876a51c36 | 18 | Combining environmental and monetary goals in feed optimization offers the opportunity to retrospectively identify those ingredients or origins that have never or hardly ever been selected due to their costs and/or nutritional compositions. These spectra can ultimately be used to develop procurement guidelines regarding which ingredients enable environmentally and economically sustainable production and which are so seldomly selected that they may be excluded from purchase. |
66cc1af4a4e53c4876a51c36 | 19 | An essential prerequisite for using this approach in procurement is the integration of the variabilities and uncertainties in the international commodities market: ingredient pricing will vary greatly, yet it may be integrated as a source of variability in addition to nutritional quality and geographical origin. This addition would extend the scope of the proposed approach from a process control advisory tool to the procurement stages of the value chain. |
66cc1af4a4e53c4876a51c36 | 20 | Reporting environmental footprints is becoming as important as traditional financial reporting for accessing feed markets in the European Union due to directives such as the Corporate Sustainability Reporting Directive (CSRD) . In the future, large footprint reductions may be further encouraged by initiatives such as true pricing, environmental impact labeling, green public procurement or carbon pricing . Environmental impact assessment is, however, a resource-intensive task for every company, especially for small and medium-sized enterprises (SMEs) . Our approach enables the repurposing of the required sustainability data on feed ingredients, which are generally available at the time of processing, for active value creation through feed optimization, thereby closing the gap between real-time operational data and value-driven managerial decisions towards environmentally sustainable choices that are also economically sound. |
66cc1af4a4e53c4876a51c36 | 21 | We proposed a modeling framework that combines NIRS and LCA to improve the environmental sustainability of feed production. Our framework overcomes the drawbacks of seasonal and other variability in agricultural ingredients when designing feeds, as it enables the optimization of mixture ratios for ingredients under real-time variability. Additional goals, such as social sustainability or customer demand, may be further implemented. The approach presented here may be ultimately leveraged for diverse commodities, including food and other biobased commodities, providing a unique opportunity to increase sustainability throughout the agri-food system. |
66cc1af4a4e53c4876a51c36 | 22 | The key idea of our approach is to combine NIRS and LCA in an optimization framework to find optimal mixtures of feed ingredients that minimize the environmental and monetary costs of feed production while meeting the quality requirements. Fig. in the Supplemental Information shows a detailed workflow of the proposed strategy, which consists of three main steps. In the first step, we combine NIRS fingerprints and life cycle impact assessment of feed ingredients in a classification model that allows predicting the ingredient environmental footprints. In the second step, we employ NIRS to predict accurate nutritional compositions of feed ingredients. In the third step, the predicted information, together with the ingredient price, is the input of a multi-objective optimization that aims at finding the optimal ingredient mixture ratios that achieve trade-offs between environmental and monetary costs while meeting the quality standards. |
66cc1af4a4e53c4876a51c36 | 23 | The study dataset consists of 863 near-infrared (NIR) spectra of eight different feed ingredients, namely barley, corn, rapeseed meal, soybean expeller, soybean meal, soybean (whole bean), sunflower meal, and wheat, which are employed to obtain two compound feeds: pig feed and broiler feed. For all the spectra, reference nutritional values were obtained with reference methods for wet-chemical quality analysis from accredited laboratories. The considered feed ingredients were harvested in six countries of origin and transported to factories located in four different countries, where the spectra were measured. After measurement, the ingredients were transported to a common production location where they were mixed into feed products. The exact production location was unknown and hence excluded from the study. According to this specification, the NIR spectra belong to 18 classes, characterized by feed ingredient, country of origin and measurement location, as specified in Table (Supplemental Information). Fig. shows, as an example, representative NIR spectra for each class. |
66cc1af4a4e53c4876a51c36 | 24 | Predicting the nutritional composition and environmental footprint from NIR spectra requires employing multivariate chemometric techniques to remove spectral artefacts and enhance the model's predictive accuracy . Finding the appropriate techniques enables testing the possibility of employing NIRS in (near) real-time feed optimization. Chemometric prediction aims at maximizing the relationship between the spectral data matrix X and the response to predict y; this can be achieved by selecting the optimal pre-processing technique that extracts the information in the data matrix X which is relevant to the response . We employed a classification approach to assign each feed ingredient to the respective country of origin, predicting the environmental impact when associated with LCA. In this study, the country of origin was related to the location of the factory where the spectra were measured: this information was included both in the classification model and in the LCA. |
66cc1af4a4e53c4876a51c36 | 25 | Spectra pre-processing consists of removing unwanted variations and artefacts that hinder relevant information in the raw NIR spectra . Selecting the appropriate pre-processing strategy is crucial to enhance the predictive power of the chemometric model; however, this procedure may be time-consuming and subjective . We therefore adopted a supervised pre-processing selection strategy based on exhaustive search , similar to that proposed by Gerretzen et al. . This strategy enables testing selected pre-processing techniques suitable for NIRS in combination with selected predictive estimators. We employed similar pre-processing techniques for regression and classification, including baseline correction, multiplicative scatter correction, smoothing, and variable scaling. Details on the selected techniques are provided in the Supplemental Information (Table -S4). We selected the optimal pre-processing techniques in cross-validation by minimizing the root mean squared error (RMSE) in regression and maximizing the weighted balanced accuracy (wBACC) in classification. We evaluated the model predictive ability on the test set considering the RMSE for regression and the balanced accuracy (BACC) for classification. We here report a short description of these metrics, referring the reader to the Supplemental Information for a more detailed explanation. |
66cc1af4a4e53c4876a51c36 | 26 | We developed a classification approach to correctly predict the environmental footprints of feed ingredients while penalizing the misclassification of the classes with the highest environmental footprint. We considered linear discriminant analysis (LDA) as a classifier, which is a well-established method in chemometrics to analyze spectral data . We computed a classification model for each considered environmental indicator (i.e., climate change and land stress, with and without including the effect of land-use change, defined in equations ( ) and (10) and in equations ( ) and ( ) in the Supplemental Information), to discriminate feed ingredients coming from different countries of origin and measured in different locations. Accurate predictions (i.e., BACC close to 1) indicate that the models can be associated with life cycle impact assessment to predict the environmental footprint of feed ingredients. |
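A sketch of how a footprint-weighted balanced accuracy can penalize misclassifying high-footprint classes more heavily, paired with an LDA classifier; the proportional weighting scheme, the class labels, and the footprint values are our assumptions for illustration, not the exact metric definition used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

def weighted_balanced_accuracy(y_true, y_pred, class_weights):
    """Weighted mean of per-class recalls; classes with larger environmental
    footprints receive larger weights, so their misclassification costs more."""
    labels = sorted(class_weights)
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    recalls = np.diag(cm) / cm.sum(axis=1)
    w = np.array([class_weights[c] for c in labels], dtype=float)
    return float(np.sum(w * recalls) / w.sum())

# Illustrative usage with hypothetical class labels and footprints (kg CO2-eq/ton);
# X_train, X_test would hold pre-processed NIR spectra.
footprints = {"corn_BR": 950.0, "corn_UA": 320.0, "wheat_DE": 410.0}
# lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
# score = weighted_balanced_accuracy(y_test, lda.predict(X_test), footprints)
```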
66cc1af4a4e53c4876a51c36 | 27 | The environmental impact, expressed as land stress impacts on biodiversity (potentially disappeared fraction year, PDF·yr) and climate change (kg CO2-eq), was calculated with life cycle assessment. For each indicator, two scenarios were calculated: with and without carbon stock loss due to land-use change (LUC). The carbon stock loss includes the initial carbon loss when land is transformed into agricultural land and the lost sequestration capacity of agricultural land compared to natural vegetation (i.e., foregone sequestration) . These two scenarios were investigated because it was not known from the database used for how long the agricultural land had already existed and hence how much of this transformation effect should be attributed to crop production. The scenario without LUC emissions represents the situation where the area was already used as agricultural land and hence the effect of transforming agricultural land into agricultural land is negligible. The scenario with LUC emissions represents the situation where natural vegetation is transformed into agricultural land, with an evaluation period of 30 years reflecting a typical plantation lifetime; this scenario is used in the main results of the article. The formulas used to calculate the impact excluding land-use change can be found in the Supplemental Information. |
66cc1af4a4e53c4876a51c36 | 28 | where CC_LUC,c represents the total impact on climate change including LUC emissions for each class c (in kg CO2-eq./ton feed ingredient), which is specified by feed ingredient h, grown in origin country o and transported to the location country l where the NIR spectra were measured. CF_CC represents the climate change characterization factor used to express the emissions in kg CO2-eq. E_SC,c are the emissions from the supply chain (in ton feed ingredient) for each feed ingredient h, country of origin o and transport to location l, belonging to class c. The supply chain includes the material and energy requirements for agricultural practices and processing into an animal feed ingredient, and transport to the measurement location. E_LUC,h,j are the land-use change emissions over a period of 30 years, for each feed ingredient h and location in the 30x30 arcminute raster j. J_c represents the maximum grid level for class c. I_CC,LUC is the vector containing the average impact on climate change, in kg CO2-eq/ton feed ingredient, including LUC emissions for each class c. G is the total number of classes. |
66cc1af4a4e53c4876a51c36 | 29 | Supply chain emissions, including crop cultivation, harvesting, pre-processing into feed ingredients and transport to the production location, were based on background processes from Agri-footprint v6 and processed in SimaPro 9.4.0.2 . The processes were adjusted by removing land use, to avoid double counting with LUC emissions. The impact for climate change was calculated with ReCiPe 2016, midpoint, H . The geographical resolution of Agri-footprint processes is per country, in line with the geographical resolution of the NIR spectra. The economic allocation of side products obtained during pre-processing was based on the economic allocation used in Agri-footprint v6. |
66cc1af4a4e53c4876a51c36 | 30 | Land-use change (LUC) emissions from changing carbon stocks were estimated based on the LPJml global vegetation and hydrological model , coupled with the IMAGE integrated assessment model . Following the approach used by Hanssen et al. , carbon stocks after 30 years of growing feed crops were compared to carbon stocks under a simulated counterfactual of natural vegetation growth in the same location. The difference in carbon stocks was assumed to be emitted to the atmosphere as CO2. These emissions were allocated to the cumulative feed crop production over 30 years, which was determined per location using crop yield data in MapSpam with a 5-minute resolution for 2010 , as shown in equation ( ). |
66cc1af4a4e53c4876a51c36 | 31 | where $E_{LUC,h,g}$ are the LUC emissions of feed crop production (in kg CO2-eq./ton feed ingredient) for each feed ingredient $h$ in grid cell $g$. $\Delta C$ is the difference between carbon stocks under feed crop cultivation and natural vegetation (in tonne C) for crop $h$ and origin country $o$; $M$ is the molar mass ratio between CO2 and C of 44.01/12.01; $Y$ is the feed crop yield (in ton feed ingredient/year); and $t$ is the 30-year time period considered (in years) . |
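A small numeric sketch of how the per-ton LUC emission intensity follows from the quantities defined above; the grid-cell values are invented for illustration, and the formula (carbon-stock difference converted to CO2 and allocated over the cumulative 30-year production) is our reading of the description rather than the study's exact implementation:

```python
# Illustrative LUC emission intensity for one feed crop in one grid cell.
delta_C_tonne = 45.0          # assumed carbon-stock difference vs. natural vegetation (tonne C)
molar_ratio = 44.01 / 12.01   # CO2-to-C molar mass ratio
yield_t_per_yr = 3.2          # assumed crop yield in the cell (ton feed ingredient / year)
t_years = 30                  # evaluation period reflecting a typical plantation lifetime

co2_emitted_kg = delta_C_tonne * molar_ratio * 1000.0    # tonne C -> kg CO2-eq
cumulative_production_t = yield_t_per_yr * t_years       # ton feed ingredient over 30 years

e_luc = co2_emitted_kg / cumulative_production_t         # kg CO2-eq per ton feed ingredient
print(f"E_LUC = {e_luc:.0f} kg CO2-eq/ton feed ingredient")
```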
66cc1af4a4e53c4876a51c36 | 32 | where $LS_{LUC,c}$ represents the total impact on land stress including LUC emissions for each class $c$ (in PDF·yr per ton feed ingredient), which is specified by feed ingredient $h$, grown in origin country $o$ and transported to location country $l$. $CF_{ED}$ represents the ecosystem damage characterization factor used to express the emissions from transport from the country of origin $o$ to the measurement location $l$ ($E_{transport,c}$) in PDF·yr. $CF_{occ,z}$ and $CF_{trans,z}$ are the ecosystem-specific characterization factors for land occupation and land transformation (in PDF·yr/m²) for each ecoregion $z$, respectively. $\mathbf{v}_{LS,LUC}$ is the vector containing the average impact on land stress including LUC emissions for each class $c$ (in PDF·yr/ton feed ingredient). The yield was from MapSpam and the characterization factors for occupation and transformation were from Chaudhary et al. . These characterization factors are based on the different ecoregions across the world . For each country of origin, the ecoregions were identified together with their corresponding characterization factors. Depending on the areas where agricultural practices took place, based on yield, the effect of land occupation and transformation was calculated in PDF·yr/ton feed ingredient. A time period of 30 years and economic allocation to by-products were used to calculate the impact on land stress per ton feed ingredient, as was also done for climate change. All generated maps were combined in R, using the lowest map resolution (i.e., at grid cell level $g$). |
66cc1af4a4e53c4876a51c36 | 33 | To predict the nutritional composition of feed ingredients, we compared the performances of two different regressors: partial least squares (PLS) and random forest regression. PLS regression is commonly used for NIR spectra analyses due to its ability to handle numerous correlated spectral features . PLS identifies a set of new variables (latent variables, LVs), and finds the LVs' direction that explains the highest variance in the $\mathbf{X}$ matrix and is most correlated to the response vector $\mathbf{y}$ . We selected the number of LVs employed by the model with internal cross-validation to avoid overfitting. |
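A compact sketch of PLS regression with the number of latent variables chosen by internal cross-validation, as described above; the placeholder arrays, the LV search range and the 5-fold scheme are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((120, 700))   # placeholder pre-processed NIR spectra
y = rng.random(120)          # placeholder values of one nutrient (e.g., crude protein)

best_lv, best_rmse = None, np.inf
for n_lv in range(1, 21):                        # candidate numbers of latent variables
    pls = PLSRegression(n_components=n_lv)
    # negative MSE scores from 5-fold cross-validation
    scores = cross_val_score(pls, X, y, cv=5, scoring="neg_mean_squared_error")
    rmse_cv = np.sqrt(-scores.mean())
    if rmse_cv < best_rmse:
        best_lv, best_rmse = n_lv, rmse_cv

print(f"Selected {best_lv} LVs (RMSE in CV = {best_rmse:.3f})")
```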
66cc1af4a4e53c4876a51c36 | 34 | Random forest regression has recently been demonstrated to be a powerful technique in multivariate calibration to deal with spectral complexity and possible non-linearity . Therefore, we also tested this estimator to evaluate whether the final predictive accuracy would improve compared to the most commonly used PLS. Training a random forest model required tuning the model hyperparameters. We selected the optimal hyperparameters with genetic algorithms in cross-validation. PLS and random forest regression models were run independently for each feed ingredient and nutritional value, within the pre-processing optimization framework. Their performance was evaluated in cross-validation in combination with the tested pre-processing strategies: the combination with the lowest RMSE in cross-validation was selected as optimal to predict the ingredients' nutritional composition. Details about the optimized hyperparameters and cross-validation schemes are provided in the Supplemental Information (Table -S6). |
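The random-forest alternative can be sketched in the same spirit; for brevity we use scikit-learn's randomized search here, whereas the study tuned the hyperparameters with genetic algorithms (sklearn-genetic-opt), and the search ranges and placeholder data are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(2)
X = rng.random((120, 700))   # placeholder spectra
y = rng.random(120)          # placeholder nutritional values

param_distributions = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20, 40],
    "max_features": ["sqrt", 0.1, 0.3],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions, n_iter=10, cv=5,
    scoring="neg_mean_squared_error", random_state=0)
search.fit(X, y)

rmse_cv = np.sqrt(-search.best_score_)
print(search.best_params_, f"RMSE in CV = {rmse_cv:.3f}")
```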
66cc1af4a4e53c4876a51c36 | 35 | We employed multi-objective optimization to find the optimal mixture ratio of feed ingredients that minimizes the environmental and monetary costs of animal feed production while meeting the feed quality requirements. We employed the weighted sum method to deal with the multi-objective formulation. Nominal values and maximum allowed variation are reported in Table of the Supplemental Information. A maximum mixture-ratio value was applied to restrict the share of each optimization class in our study, thereby ensuring the diversity of ingredients in the feed. |
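A minimal sketch of the weighted-sum formulation as a linear program; the ingredient set, nutrient matrix, bounds and weights are invented for illustration, and the study's actual constraint set (equation (11) and the Supplemental Information) is richer than this:

```python
import numpy as np
from scipy.optimize import linprog

# Three hypothetical ingredient classes with per-ton impact and price.
env_impact = np.array([850.0, 420.0, 610.0])   # e.g., kg CO2-eq/ton
price      = np.array([310.0, 260.0, 450.0])   # e.g., USD/ton
w_env, w_cost = 0.6, 0.4                       # weighted-sum objective weights

# Nutrient contents (rows: crude protein, energy) and required minima per ton of feed.
nutrients    = np.array([[0.34, 0.09, 0.46],
                         [12.5, 13.8, 11.9]])
nutrient_min = np.array([0.20, 12.8])

# Roughly normalize the two objectives to comparable scales before weighting.
c = w_env * env_impact / env_impact.max() + w_cost * price / price.max()

res = linprog(
    c,
    A_ub=-nutrients, b_ub=-nutrient_min,       # nutrient contents >= required minima
    A_eq=np.ones((1, 3)), b_eq=[1.0],          # mixture ratios sum to 1
    bounds=[(0.0, 0.6)] * 3)                   # cap each class to keep ingredient diversity
print(res.x)                                   # optimal mixture ratios
```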
66cc1af4a4e53c4876a51c36 | 36 | To estimate the price vector $\mathbf{B}$, we retrieved the feed ingredient prices from the international market with the Food Prices Monitoring and Analysis (FPMA) Tool 77 of the Food and Agriculture Organization (FAO) of the United Nations. We retrieved commodity prices for one selected month, March 2022, which was the most recent month at the time of the analysis. Prices were not available for every feed ingredient and country of origin considered in our study: the Supplemental Information reports detailed information on all assumptions made for price estimation (equations (20)-( ), Supplemental Information). |
66cc1af4a4e53c4876a51c36 | 37 | Constraints in the multi-objective optimization (equation (11), equations ( )-( ) in the Supplemental Information) ensure that the optimized feed ingredient mixture ratios meet the nutritional constraints of the target feed. Predicting accurate nutritional compositions from NIRS fingerprints entails having diverse compositions due to nutritional variation. In our dataset, the number of samples for each feed ingredient class was too small to reproduce all the nutritional variability that might be observed in industrial production. Hence, we employed Monte Carlo sampling to obtain 1000 simulations with varying nutritional compositions among and within feed ingredient groups. Monte Carlo is a sampling-based methodology that allows us to generate random scenarios based on the probability distributions of the predicted nutrients, to observe the effect of nutritional variability on the final optimization. This methodology is extensively used in stochastic optimization in quantitative applications, such as in science, engineering and economics, to optimize process performance while explicitly accounting for uncertainty . We utilized the predicted nutritional compositions for each feed ingredient to build multivariate t-distributions to sample from. We considered multivariate t-distributions to account for the correlation among nutrients, ensuring that the simulated formulations respect nutritional constraints. A random error, derived from univariate normal distributions of the RMSE values, was finally added to the sampled nutritional values to account for the model error in prediction. The optimization model was run independently for each simulation in a stochastic optimization framework. |
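A sketch of the Monte Carlo step for one feed ingredient, assuming `pred` holds the NIRS-predicted nutrient compositions of its samples; the degrees of freedom, the number of simulations and the RMSE values are illustrative placeholders:

```python
import numpy as np
from scipy.stats import multivariate_t

rng = np.random.default_rng(42)

# Predicted nutrient compositions for one ingredient (rows: samples, cols: nutrients).
pred = rng.normal(loc=[0.34, 0.04, 12.5], scale=[0.02, 0.005, 0.4], size=(60, 3))

loc = pred.mean(axis=0)                  # centre of the multivariate t-distribution
shape = np.cov(pred, rowvar=False)       # captures the correlation among nutrients
mvt = multivariate_t(loc=loc, shape=shape, df=5)

n_sim = 1000
samples = mvt.rvs(size=n_sim, random_state=rng)

# Add a random error drawn from univariate normal distributions of the RMSE
# values to account for the regression models' prediction error (RMSE assumed).
rmse = np.array([0.015, 0.004, 0.30])
samples += rng.normal(scale=rmse, size=samples.shape)
print(samples.shape)                     # (1000, 3) simulated nutrient compositions
```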
66cc1af4a4e53c4876a51c36 | 38 | Varying the two objective weights for each simulation allows obtaining stochastic Pareto fronts of feasible mixture ratios. To select the optimal trade-off mixture ratio, we employed the technique for order preference by similarity to ideal solution (TOPSIS) , weighting the objectives with Shannon's entropy method. TOPSIS selects the optimal trade-off by finding alternatives with the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution . TOPSIS is widely used in many research areas, such as supply chain management, manufacturing systems, or energy management, for its simplicity in concept and application, and for its ability to find trade-off solutions among several objectives . |
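To keep the example self-contained, the following sketch implements TOPSIS with entropy weighting directly in NumPy rather than through pymcdm; the decision matrix of Pareto-optimal alternatives (rows) and objectives (columns: climate change, land stress and cost, all to be minimized) is invented:

```python
import numpy as np

# Decision matrix: Pareto-front alternatives (rows) x objectives to minimize (cols).
M = np.array([[820.0, 1.9e-7, 295.0],
              [760.0, 2.4e-7, 310.0],
              [690.0, 3.1e-7, 340.0]])

# Shannon entropy weights: objectives with more dispersion receive larger weights.
P = M / M.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(M))
w = (1 - E) / (1 - E).sum()

# Vector-normalized, weighted matrix; ideal/anti-ideal points for minimization objectives.
V = w * M / np.linalg.norm(M, axis=0)
ideal, anti = V.min(axis=0), V.max(axis=0)

d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the positive ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the negative ideal solution
closeness = d_neg / (d_pos + d_neg)
print("selected alternative:", int(np.argmax(closeness)))
```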
66cc1af4a4e53c4876a51c36 | 39 | We performed a separate multi-objective optimization (equation (11)) to evaluate quality deviations resulting from utilizing nutritional values from occasional off-line measurements. To compensate for the fact that these values were not available in our study, we considered average compositions of feed ingredients from available measurements. Nutritional compositions for each of the 863 samples measured with NIRS were available from chemical wet analyses: we estimated average nutritional values for each ingredient to replicate the situation where measurements are not performed for each of the incoming ingredient batches. Since this optimization does not consider nutritional variability within feed ingredient groups, only one Pareto front is obtained; we selected the optimal trade-off mixture ratio on this Pareto front with TOPSIS. We used the selected mixture ratio to calculate the deviation in quality considering off-line measurements for each of the 1000 simulations, as detailed in the Supplemental Information (equation ( ), Supplemental Information). |
66cc1af4a4e53c4876a51c36 | 40 | The percentage quality deviation was computed as defined in the Supplemental Information. The analyses relied on the following software: a solver accessed through its Python API for multi-objective optimization, sklearn-genetic-opt 87 for hyperparameter optimization of the random forest, and pymcdm for TOPSIS. The Matplotlib 89 and seaborn 90 libraries were employed for visualization. SimaPro 9.4.0.2 with the impact assessment method ReCiPe2016 (H) was used to calculate the environmental impact of the supply chain of the feed classes. RStudio (2022.02.2+485) was used to calculate the total impact of the classes per grid cell in the country of origin, using the following packages: Terra, sp, readxl and sf. |
65e0f5d466c1381729e12e1a | 0 | Materials science and engineering play a pivotal role in fostering prosperity, enhancing lifestyle, and advancing the development of environmentally sustainable technologies. The field is profoundly interdisciplinary, encompassing physics, chemistry, biology, mathematics, and computer science. It addresses intriguing inquiries such as: Are new semiconductors with increased efficiencies for solar modules available, and can they surpass the flexibility of materials under discussion today? Which catalyst materials would be optimal for a specific chemical reaction, e.g., splitting of water to produce hydrogen? What combination of alloying constituents imparts unique bending strength, extreme hardness, and corrosion-resistant properties of metallic alloys? Furthermore, how should a surface be coated to attain the utmost thermal protection, e.g., for improving the energy efficiency of turbines? |
65e0f5d466c1381729e12e1a | 1 | In recent years, materials science has entered an era marked by an unprecedented surge in data, stemming from both experiments and computations. This influx has surpassed the capacities of traditional methods to manage these data effectively. The so-called 4 V challenge is clearly becoming evident. It can be summarized as follows: |
65e0f5d466c1381729e12e1a | 2 | this way the high intricacy of several co- and counter-acting processes is considered. It reflects that big data reveal correlations and dependencies that cannot be seen when studying small data sets, and, in contrast to the past, it is accepted that a detailed causal explanation is not always possible. Causal inference, when possible, may not necessarily be expressed in terms of a simple, closed analytic equation or an insightful, simple physical model. We will get back to this point below. |
65e0f5d466c1381729e12e1a | 3 | Let us briefly recall the first three research paradigms. Experimental research, the initial paradigm, dates back to the Stone Age and produced the first metallurgical techniques in the Copper and Bronze Ages. The control of fire marked a significant breakthrough. In the late 16th century, analytical equations became the central instrument for describing physical relationships, establishing theoretical physics as the second paradigm. This change was led by Brahe, Galileo, Kepler, and Newton. The next chapter started in the 1950s, when electronic-structure theory for solids , the Monte Carlo method , and molecular dynamics were introduced. These developments enabled computer-based studies and analyses of thermodynamics and statistical mechanics on the one hand and of quantum-mechanical properties of solids and liquids on the other hand. They define the beginning of computational materials science, which is nowadays considered the third paradigm of materials research. |
65e0f5d466c1381729e12e1a | 4 | Today, big data and AI revolutionize various aspects of life, including materials science. To navigate this 4th paradigm successfully, researchers must embrace new research concepts, and this Roadmap on Data-Centric Materials Science provides a summary of ideas for exploring the data-centric landscape of materials science and engineering. As materials science is a very broad and interdisciplinary field, only some areas of this landscape can be covered. However, we trust that the addressed examples explicate many of the basic concepts and that they can be helpful also for other topics than those addressed explicitly in the different contributions. |
65e0f5d466c1381729e12e1a | 5 | Science is and always has been based on data, but the terms 'data-centric' and the '4th paradigm' of materials research signify a transformative shift towards retrieving and managing vast data collections, digital repositories, and innovative data analytics methods. The integration of AI and its subset ML has become pivotal in addressing all these challenges. In the data analysis, we are looking for structures and patterns in the data. As mentioned above, materials properties and functions are often not governed by just one single process but by many. Some drive, others just facilitate, and again others hinder the materials property or function of interest. The interplay of the various processes is very intricate. In analogy to genes in biology, we discuss elemental materials features (e.g., the electronegativity of the atoms that build the material) that correlate with the materials property of interest. The primary features that connect with a certain materials property or function are called the relevant 'materials genes'. Together with environmental parameters (e.g., temperature), they determine (in a statistical sense) the material's property and function. In recent years, major advances in ML and computing power, in particular the advance of hardware accelerators like GPUs, have enabled deep neural networks with billions of trainable parameters, leading to breakthroughs in computer vision and natural language processing. A key strength of deep learning is that it addresses not only the objective for classification, regression, or other tasks, but also the learning of how to represent the input data itself. Thus, there is no need for explicit feature modeling: images can be ingested as arrays of pixels, and text documents are simply sequences of tokens. High-level structures in visual or textual contents, like people interacting with objects in a scene or argumentation and sentiments in a conversation, are automatically discovered and latently captured by the deep neural network itself. |
65e0f5d466c1381729e12e1a | 6 | Obviously, this predictive methodology of deep learning has potential in many application areas, conceivably including materials science and particularly microscopy images. However, the success of deep learning builds on various assumptions, including the availability of large training data with 'independent and identically distributed' (iid) samples. These assumptions are not easily satisfied for materials data, and feature engineering and physics-based modeling remain indispensable. At its core, ML operates as an interpolation technique, fitting and connecting the data upon which it is trained and applying regularization (or smoothening) to achieve generalization. The ML model excels in exploiting the data space covered by the training data but exhibits diminished reliability when entering uncharted data realms, typically called the out-of-distribution (OOD) regime. When the training data are iid or representative of the full population, extrapolation may work. However, for materials science this requirement is hardly fulfilled, i.e., the data selection is governed by subjective and technical issues, and often it is strongly biased and unbalanced. Still, materials scientists are searching for statistically exceptional situations, and important processes are often triggered by 'rare events' that are not or not well covered by the available data set, or are smoothed out by the regularization. This all implies caution when applying ML. |
65e0f5d466c1381729e12e1a | 7 | Similar to any scientific theory or model, an AI model possesses a range of applicability, often inadequately defined. Consequently, there is an argument advocating the importance of AI interpretability, as it not only sheds light on the underlying mechanism but also provides some confidence in extrapolations. The contributions by Boley et al. , Ghiringhelli and Rossi (2.2), and Foppa and Scheffler (2.3) address these issues in more detail. |
65e0f5d466c1381729e12e1a | 8 | A special point in materials science is that data is typically not big. This implies that some ML methods are not suitable. In general, standard ML methods need to be used with caution and modification, and new concepts have been and still need to be developed. Interestingly, Gaussian process regression and random forests are still often and helpfully used, but several new concepts were established in recent years, e.g., crystal-graph neural networks, message passing and equivariance, subgroup discovery, and SISSO (sure independence screening and sparsifying operator). In particular the latter can deal with correlations between a big (even immense) number of elemental materials features (millions or trillions) and just a few dozen data points of the property of interest. SISSO derives an analytical equation for describing the materials property and its statistical correlation with the relevant materials genes. The approach as well as recent advancements, implementations, and challenges are described by Yao et al. in contribution . |
65e0f5d466c1381729e12e1a | 9 | When data are scarce, the critical requirement is that they must be highly accurate, precise, and well characterized. This is summarized by the request that experimental data must be 'clean', but this is not often achieved in materials science and rarely fulfilled in heterogeneous catalysis. The 'clean-data concept' for experimental studies is described in contribution (3.2) by Trunschke et al. Advancements in obtaining high-quality data from electronic-structure theory are described by Kokott et al. in . The general challenge to find the best-suited AI method for a certain application is severe, and the reproducibility of published AI studies is often problematic. The NOMAD concept is described in contribution . A strategy to overcome the bottleneck of scarce data in deep learning is the augmentation of a small, accurate data set by synthetically generated data. This is discussed in contribution and exemplified by generating synthetic Hamiltonian matrices for deep learning applied to multiphoton absorption. Spatiotemporal models like random fields and Gaussian processes have demonstrated promising outcomes in integrating data from multiple sources and guiding scientific discovery in various disciplines. Contribution discusses their application to materials science and hints at further directions to be explored to leverage their full potential in materials discovery. When trying to apply machine learning methods that have already proved successful in "hard matter physics" to soft matter, several technical obstacles need to be overcome, including the intrinsic multiscale nature of this part of condensed matter. Bereau and Kremer argue that when this can be achieved, it would usher soft matter into a new era, where poor scale separation can be efficiently addressed, and insight will be gained for phenomena that are currently too complex for traditional methods (contribution 3.7). In contribution , it is shown that significant computational gains can be achieved in the numerical simulation of microstructure continuum mechanics models when traditional direct numerical simulation is replaced by modern deep-learning-based methods, provided the AI models are informed by physical insight. Digitalizing the entire workflow in data-rich imaging techniques in materials science, from synthesis, sample preparation and data acquisition to post-processing, in an integrated way is the topic of contribution (3.9) by Freysoldt et al. There, it is discussed that machine learning techniques can leverage the data science approach by removing human inspection as the limiting factor to digest larger and larger amounts of data in order to discover relevant, but possibly rare, patterns. Recently, large-language models (LLMs) have also entered the field of materials science. Raabe et al. provide an overview and perspective in contribution . Section 4 then addresses several applications of data-centric materials science, typically paired with methodological developments. Experimental methods cover photoemission, electron microscopy, and atom-probe tomography. In contribution (4.1), Purcell et al. consider the role of AI in high-throughput materials discovery using computational workflows, while Liebscher et al. as well as Schloz et al. discuss the roadmap to AI- and ML-driven data analytics in scanning transmission electron microscopy (STEM) in contributions (4.2) and (4.3), respectively. Atom probe tomography is another imaging-based technology to analyze the composition of materials at the near-atomic scale.
Its enhancement using ML is the topic of contribution (4.4). In contribution (4.5), Logsdail et al. investigate the potential of a data-driven approach for heterogeneous catalysis. Finally, in contribution (4.6), Fratzl discusses recent advancements of x-ray scattering and diffraction for materials at the nanoscale with respect to the retrieval and analytics of large amounts of data. |
65e0f5d466c1381729e12e1a | 10 | Section 2: Data and Uncertainty
2.1: From Prediction to Action: Critical Role of Performance Estimation for Machine-Learning-Driven Materials Discovery
Mario Boley 1, Felix Luong 1, Simon Teshuva 1, Daniel F. Schmidt 1, Lucas Foppa 2 and Matthias Scheffler 2
1 Monash University, Department of Data Science and AI
2 The NOMAD Laboratory at the Fritz Haber Institute of the Max-Planck-Gesellschaft and IRIS-Adlershof of the Humboldt-Universität zu Berlin |
65e0f5d466c1381729e12e1a | 11 | In recent years, the materials science community has established a large-scale infrastructure for data sharing that promises to increase the efficiency of the "data-driven" discovery of novel useful materials . Growing data collections are envisioned to lead to increasingly accurate statistical models for property prediction that can significantly reduce the number of necessary experiments or first principles computations and, thus, substantially improve the cost and time for critical discoveries . Indeed, the combination of public datasets and robust statistical estimation techniques like cross validation (CV) enables a collaborative improvement process ("common task framework" ). As a result, there are now models that can predict certain materials properties well on average with respect to the same distribution as the training data. Unfortunately, the in-distribution expected performance, as estimated by CV, is not directly coupled with the performance for the discovery of novel materials: expected performance fails to capture the model behavior for the very few exceptional materials that one aims to discover, and, fundamentally, in-distribution performance is irrelevant for a discovery process that is designed to generate high-performing materials more frequently than they occur in the initial training data. |
65e0f5d466c1381729e12e1a | 12 | Recognizing these issues, the community increasingly focuses on active learning approaches like Bayesian optimization for model-driven blackbox optimization (BBO). These methods manage an iterative modelling and data acquisition process and aim to optimize the cumulative "reward" received for the acquired data points over time, such as the maximum property value discovered so far. This process, illustrated in Figure , is enabled by an acquisition function that leverages the predictions of a statistical model together with its uncertainty quantification to effectively manage the underlying tradeoff of exploration (learning more about the candidate space) and exploitation (aiming to sample high-value candidates). This shift to consider actions instead of just predictions constitutes an important step towards accelerated materials discovery, but it reveals shortcomings not only in existing modelling approaches but, more fundamentally, in the methodological framework used to improve those models. In particular, the inapplicability of established performance estimation frameworks based on pre-generated data renders it extremely costly to conclusively compare and to systematically improve methods. In the caption of Figure , $F$ denotes the modelled cumulative distribution function, and in step (iii) the label for the top-ranked material is acquired and added to the data sample, generating a reward, e.g., defined as $r_t = \max\{y(s_i) : -n < i \le t\}$ when maximizing a single property or figure of merit $y$, which incentivizes the discovery of materials with high $y$-value as early as possible in the process. While standard statistical analysis assumes the initial data points $s_{-n+1}, \dots, s_0$ to be drawn with respect to some sampling distribution $D_0$, this distribution does not have to be balanced or representative of the whole population. However, any concentration away from a representative, i.e., uniform, sampling distribution poses the risk of delayed reward generation, and a misspecified acquisition function or model, in particular one with over-confident predictions, even risks never escaping local maxima represented in the initial data collection. The sampling distributions of subsequent points $D_1, D_2, \dots, D_T$ vary and depend on the combination of model $f$ and acquisition function $a$. Hence, they cannot be pre-generated for new methods, rendering label generation a key bottleneck in method development. |
65e0f5d466c1381729e12e1a | 13 | To illustrate these challenges, let us consider as an example the discovery of double perovskite oxides with high ab initio computed bulk modulus, where we use two popular statistical models, Gaussian process (GP) regression and random forest (RF), and two BBO data acquisition strategies, expected improvement (EI) of rewards and pure exploitation (XT). GPs are the traditional BBO model, because their Bayesian approach provides a principled quantification of "epistemic" uncertainty, i.e., uncertainty from a lack of training data related to a specific test point. However, they can struggle already with moderately high-dimensional representations such as the 24 features used in this example. In contrast, RFs are known to work robustly well with high-dimensional feature spaces , while their ensemble-based uncertainty quantification does not represent epistemic uncertainty. Interestingly, as shown in Figure , CV indicates that RF has the better in-distribution predictive performance not only in terms of squared error but also in terms of log loss, which takes uncertainty into account. Nevertheless, RF is outperformed by GP in terms of the produced discovery rewards, demonstrating that standard in-distribution performance estimation techniques can suggest sub-optimal methods. This demonstrates that already method selection is a real challenge for practical problems. However, the situation is much worse for methodological research that aims to not only determine which of a small number of established methods works best, but to test dozens of combinations of models and acquisition functions. Absent innovation in performance estimation, comparing $K$ methods in terms of their expected discovery reward across $L$ repetitions of $T$ rounds requires the acquisition of $KLT$ labels in addition to any pre-generated initial data. This is because, even when starting from a common initial training distribution, each method produces its own sequence of proposal distributions. Since these distributions are unknown a priori, there is no way to pre-generate data from them, blocking the usual collaborative improvement process around an initially released dataset. Thus, the prohibitive cost of expected reward estimation currently blocks substantial progress in addressing other important challenges like unsound uncertainty quantification or acquisition function optimization with infinite candidate populations, particularly when using non-invertible materials representations. |
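A bare-bones sketch of such a discovery loop with a Gaussian-process model and expected-improvement acquisition; the candidate pool, the 24-dimensional features, the surrogate "bulk modulus" values and the number of rounds are placeholders, and the study's models and feature set are more elaborate:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X_pool = rng.random((500, 24))                                # candidate materials, 24 features
y_pool = X_pool @ rng.random(24) + rng.normal(0, 0.1, 500)    # placeholder target property

idx_train = list(rng.choice(500, size=30, replace=False))     # initial data
rewards = []
for t in range(100):                                          # 100 acquisition rounds
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_pool[idx_train], y_pool[idx_train])

    candidates = [i for i in range(500) if i not in idx_train]
    mu, sigma = gp.predict(X_pool[candidates], return_std=True)
    best = y_pool[idx_train].max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)       # expected improvement

    pick = candidates[int(np.argmax(ei))]                      # acquire label of top-ranked material
    idx_train.append(pick)
    rewards.append(y_pool[idx_train].max())                    # best value discovered so far
print("final best:", rewards[-1])
```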
65e0f5d466c1381729e12e1a | 14 | Given these considerations, a central research goal should be to find reliable approaches for estimating a method's expected discovery reward based on existing data. A simple but infeasible state-of-the-art strategy is to run a method repeatedly using sub-samples of size $m$ from the given dataset as initial data and the sub-sample complement as candidate pool, such that the ratio $m/n$ is close to $n/N$, where $N$ is the overall population size. That is, one naively uses the initial dataset as proxy for the population. For at least two reasons, this simplistic approach is likely to produce misleading results (see Figure , middle left). Firstly, the real rewards are determined by the exceptional materials in the tail of the target property distribution, which are almost certainly not well represented in the available dataset. Secondly, changing the absolute sizes of initial data and candidate population misestimates model performance and, more severely, misrepresents the real overwhelming number of uninteresting materials that an efficient search must largely avoid. |
65e0f5d466c1381729e12e1a | 15 | Here, we present an adjusted reward estimation approach that provides random initial and candidate sets with realistic absolute numbers of unrepresented exceptional materials as well as distinct ordinary materials to distract from them. Let $s_{(1)}, \dots, s_{(n)}$ denote the initial data elements in increasing order of their target property or figure-of-merit values. Based on an estimate $\hat{\alpha}$ of the unrepresented fraction of top materials, $\alpha = \#\{s \in \Omega : y(s) \ge y(s_{(n)})\}/N$, create: |
65e0f5d466c1381729e12e1a | 16 | 1. an initial dataset by drawing a size-π bootstrap sub-sample , i.e., sample with replacement, from the low property value materials π (1) , β¦ , π (π-βπΌ Μπβ-1) and 2. a candidate set consisting of the held-out top βπΌ Μπβ materials and an up-sampled and stochastically perturbed set π Μ1, β¦ , π Μπ-βπΌ Μπβ from the unsampled elements of the bootstrap sample. As shown in Figure (bottom left), reward estimation with this approach performs much better than naΓ―ve estimation for our bulk modulus example. It accurately predicts GP with EI to produce the highest bulk modulus and highest cumulative reward out of the four candidate methods. As desired, this is based entirely on the initially available data without requiring the over thousand additional calculations that were needed to confirm this result. In-distribution performance is performance with respect to the initial sampling distribution π· 0 , out-of-distribution is with respect to the uniform mixture of the distributions π· 1 to π· 100 of the data points examined by the discovery process. While RF provides a better mean squared error, both in-and out-of-distribution, its out-of-distribution log loss is increasing with the size of the training, indicating a failure of its uncertainty quantification. |
65e0f5d466c1381729e12e1a | 17 | The lack of reliable approaches to estimate expected discovery rewards from a given dataset is a serious roadblock for the development of active learning methods for materials discovery. Without such estimators, the evaluation of each candidate method requires the acquisition of a potentially large number of labels in addition to any initially available data collection, preventing the usual collaborative process that led to fast-paced improvements of predictive model performance with fixed distributions. |
65e0f5d466c1381729e12e1a | 18 | Naïve reward estimation from the initial data typically fails because of unsuitable data proportions and underrepresented extreme events. We presented an adjusted approach that, by correcting for these factors, successfully assesses which combination of acquisition function and statistical model works best for the exemplary task of double perovskite bulk modulus optimization. This or similar approaches could become efficiently computable proxies for real method performance and thus enable fast community-driven improvements to data-driven methods for materials discovery. |
65e0f5d466c1381729e12e1a | 19 | Artificial-intelligence (AI) and, in particular, machine-learning (ML) modelling is substantially increasing the reach and predictive power of material-science simulations. Such strategies are adopted for two broad classes of applications: a) surrogate modelling of materials properties, e.g., learning energies and forces of given atomic configurations, where the Hamiltonian is known but computationally intensive to evaluate (Refs. 1 and 2 and references therein), and b) materials genomics, i.e., the identification of the features that can explain and be used to model certain materials' property (the genes for that material and property), together with fitting of a predictive model for the given property as function of the identified genes (Refs. 3 and references therein). |
65e0f5d466c1381729e12e1a | 20 | Often, the evaluation of predictive models focuses on averages (e.g., the mean absolute error), and little attention is given to the distribution of errors (e.g., via so-called violin plots) and to the inspection of the outliers, i.e., the data points that yield the largest prediction errors. Are these data points simply wrongly measured, or could they herald some different physical mechanism that was not captured by the model trained to yield acceptable average errors? Scientifically, it is equally important for an ML model to yield predicted values for new data points and, concurrently, provide reliable uncertainty quantification (UQ). In other words, the model should be able to recognize if it can make a confident prediction solely from the input representation of a test data point, identifying whether it is similar to the data points used for training (interpolatory regime) or dissimilar (extrapolatory regime). The correct metric for assessing this similarity is, however, most often unknown, and systematically finding it for a given ML model is one of the most difficult steps for a reliable uncertainty estimate. |
65e0f5d466c1381729e12e1a | 21 | Several strategies have been developed for UQ, spanning from rigorous and computationally extremely expensive Bayesian estimates to pragmatic ensemble-of-models training . However, many such estimates have been shown to be overconfident when test data are drawn far from the sampling distribution of the training data . This limitation represents a serious drawback for the overall reliability of ML models in atomistic simulations, where they promise to deliver first-principles quality results. |
65e0f5d466c1381729e12e1a | 22 | Besides the obvious intrinsic benefit of reliably quantifying the uncertainty of an ML model, these estimates are also a vital part of the so-called active-learning (AL) algorithms. AL denotes a strategy where the model constructs new (training) data points either in regions where a property of interest needs to be optimized (exploitation task) or in regions where the model uncertainty is large (exploration task), resulting in a more accurate model with a smaller number of training points. In materials science, these algorithms are often desirable, because little initial information is known about a material or materials class and calculating labels (properties) is expensive. |
65e0f5d466c1381729e12e1a | 23 | In view of the exploitation task, it is desirable to adopt model classes that allow for a computationally inexpensive optimization (e.g., Gaussian processes). However, the biggest challenge in both surrogate modelling and materials genomics is the UQ in extrapolative regions for the exploration task. In practice, recognizing that a data point belongs to the extrapolation region is the actual conundrum. Statistics and information-theory modelling approaches rely on the fact that training data are representative of the overall population where predictions will be made. In both surrogate modelling and materials genomics applications, the unseen data may carry physical information that is not present in the model training. Electronic-structure data carries a further challenge due to its intrinsic aleatoric uncertainty stemming from numerical convergence and basis sets. It is often difficult, but necessary, to separate it from the model (epistemic) uncertainty, for defining whether training data refinement is needed or whether the model can be really improved. |
65e0f5d466c1381729e12e1a | 24 | As for any physical modeling, one does not expect a model to be predictive outside its physical scope. Yet, in the traditional development of physical theories (sometimes referred to as "model-based", as opposed to "data-centric", approach) describing the limit of validity of a theory is an essential part of it. Such limits of validity are typically expressed as inequalities as function of key parameters governing the physical property or process. We identify the data-centric identification of the limits of validity of an ML model as, arguably, the biggest challenge in AI applied to materials science. |
65e0f5d466c1381729e12e1a | 25 | The full acceptance of ML tools within the community, for both surrogate modeling and materials genomics, may depend on two interrelated aspects: The introduction of algorithms for a) reliable UQ, especially for data points that are outside the training distribution and b) finding explanations why any given outlier is an outlier. |
65e0f5d466c1381729e12e1a | 26 | For the first aspect, in the realm of surrogate model potentials, Bayesian-based frameworks offer an intrinsic definition of uncertainty, which can be judiciously used . For neural-network architectures, committee ensemble models can deliver some degree of uncertainty prediction. In both cases, correctly accounting for correlations in the training set data is essential for avoiding overconfident model predictions , but UQ can still be unreliable for out-of-sample data points. A promising alternative is the use of deep ensembles or variations thereof. Finally, because the surrogate model is trained to predict energy and forces, but these quantities are almost never the observable that is being sought in a simulation, advances in error definition and propagation through derived properties have been gaining much attention . |
65e0f5d466c1381729e12e1a | 27 | For the second aspect, a promising route is the use of subgroup discovery for the identification of the so-called domains of applicability (DAs, regions of the input space where a predictive model yields small errors) , which are given in the form of descriptive rules, i.e., inequalities over a set of features, identified among a larger set of candidates. Although it has been shown that DAs can be found and the descriptive rules give insight into the analyzed ML models, the method has not yet been further developed to systematically identify outliers and exploited to improve the underlying ML model, e.g., in an AL fashion. |
65e0f5d466c1381729e12e1a | 28 | The recent literature has shown that, with carefully selected training data sets and physical expertise (domain knowledge), the resulting ML predictive models allow for important discoveries in materials science. However, unleashing the full potential of data-centric approaches and fulfilling their promise to deliver results of ab initio quality requires that the uncertainty of the predictions be quantified. This UQ needs to be robust and reliable, and the related algorithm should be relatively straightforward to implement, such that users have a transparent access to it. |
65e0f5d466c1381729e12e1a | 29 | Although reliability has to be prioritised, any UQ algorithm must not add a substantial computational cost to the ML model it is being applied to, since in materials modelling efficiency is often a core requirement to achieve meaningful simulations. This observation applies both to the realm of surrogate modelling where, e.g., millions of force evaluations with uncertainty quantification need to be carried out, and to the realm of materials genomics where, e.g., millions of candidate systems need to be classified including this quantification. Achieving such a framework requires the community to adopt more widespread standards and work together on benchmarking efforts targeted at error prediction. |
65e0f5d466c1381729e12e1a | 30 | Artificial-intelligence (AI) approaches in materials science usually attempt a description of all possible scenarios with a single, global model. However, the materials that are useful for a given application, which requires a special and high performance, are often statistically exceptional. For instance, one might be interested in identifying exceedingly hard materials, or materials with band gap within a narrow range of values. Global models of materials' properties and functions are designed to perform well in average for the majority of (uninteresting) compounds. Thus, AI might well overlook the useful materials. In contrast, subgroup discovery (SGD) identifies local descriptions of the materials space, accepting that a global model might be inaccurate or inappropriate to capture the useful materials subspace. Indeed, different mechanisms may govern the materials' performance across the immense materials space and SGD can focus on the mechanism(s) that result in exceptional performance. |
65e0f5d466c1381729e12e1a | 31 | The SGD analysis is based on a dataset $\hat{D}$, which contains a known set of materials. $\hat{D}$ is part of a larger space of possible materials, the full, typically infinite population $\Omega$. For the materials in $\hat{D}$, we know a target of interest $Y$ (metric or categorical), such as a materials' property, as well as many candidate descriptive parameters $X$ possibly correlated with the underlying phenomena governing $Y$ (Fig. ). From this dataset, SGD generates propositions $\pi$ about the descriptive parameters, e.g., inequalities constraining their values, and then identifies selectors $\sigma$, i.e., conjunctions of propositions $\pi$, that result in SGs maximizing a quality function $Q$ (Eq. 1). In Eq. 1, the ratio $s_{SG}/s_{\hat{D}}$ is called the coverage, where $s_{SG}$ and $s_{\hat{D}}$ are the number of data points in the SG and in $\hat{D}$, respectively. The utility function $u(SG, \hat{D})$ measures how exceptional the SGs are compared to $\hat{D}$, based on the distributions of $Y$ values in the SG and in $\hat{D}$. $Q$ establishes a tradeoff between the coverage (generality) and the utility (exceptionality), which can be tuned by a tradeoff parameter $\gamma$. Typically, the identified selectors depend on only a few of the initially offered candidate descriptive parameters. The identified SG selectors (or rules) describe the local behaviour in the SG, and they can be exploited for the identification of new materials in $\Omega$.
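As an illustration of the generality-exceptionality tradeoff, the following sketch scores a candidate selector with a multiplicative quality function, using the normalized shift of the subgroup's median target value as a simple utility; this particular choice of the utility, the tradeoff exponent and the toy data are assumptions for demonstration, not the exact quality function of Eq. 1:

```python
import numpy as np

rng = np.random.default_rng(1)
# Candidate descriptive parameters and target for a toy dataset of materials.
x1, x2 = rng.random(200), rng.random(200)
y = 50 + 40 * (x1 > 0.7) * (x2 < 0.3) + rng.normal(0, 5, 200)   # e.g., hardness

def quality(mask, y, gamma=0.5):
    """Coverage**gamma times a simple utility (normalized median shift)."""
    if mask.sum() == 0:
        return 0.0
    coverage = mask.mean()
    utility = (np.median(y[mask]) - np.median(y)) / (y.max() - y.min())
    return coverage**gamma * max(utility, 0.0)

# Selector: conjunction of two propositions (inequalities on x1 and x2).
selector = (x1 > 0.7) & (x2 < 0.3)
print(f"coverage = {selector.mean():.2f}, Q = {quality(selector, y):.3f}")
```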
65e0f5d466c1381729e12e1a | 32 | The potential of SGD to uncover local patterns in materials science has been demonstrated by the identification of structure-property relationships, and by the discovery of materials for heterogeneous catalysis. Additionally, using (prediction) errors as target in SGD, we identified descriptions of the regions of the materials space in which (machine-learning) models have low or high errors. Thus, the domain of applicability (DoA) of the models could be established. Despite these encouraging results, the advancement of the SGD approach in materials science requires addressing key challenges: ο§ The quality function introduces one generality-exceptionality tradeoff, among a multitude of possible tradeoffs that can be relevant for a given application and that can be obtained with different πΎ. For instance, the required hardness of a material depends on the type of device in which it will be used and the DoA of a model depends on the accuracy that is acceptable to describe a certain property or phenomenon. However, choosing the appropriate πΎ and assessing the similarity -or redundancyamong the multiple rules obtained with different tradeoffs are challenging tasks. |