Widely used utility functions assess the exceptionality of SGs by comparing the data distribution of the SG with that of P̃ via a single summary-statistics value. For example, the positive-mean-shift utility function for a metric target favors the identification of SGs with high Y values based only on the means of the two distributions. Thus, it is often assumed that the distributions are well characterized by the chosen summary-statistics value and that P̃ is representative of the full population P. However, distributions in materials science are typically non-normal, and P̃ might not reflect the infinitely larger, unknown P. This calls for utility functions that circumvent these assumptions.
The mechanisms governing materials can be highly intricate, and the relevant parameters needed to describe a certain materials property are often unknown. Thus, one would like to offer many possibly relevant candidate parameters and let the SGD analysis identify the key ones. However, optimizing the quality function is a combinatorial problem with respect to the number of descriptive parameters, and efficient search algorithms are therefore crucial.

Advances in Science and Technology to Meet Challenges

In order to address some of these open questions, we approach SGD as a multi-objective-optimization problem for the systematic identification of SG rules that correspond to a multitude of generality-exceptionality trade-offs. This yields Pareto-optimal SGD solutions with respect to the objectives coverage and utility function, as illustrated for the identification of perovskites with high bulk moduli in Fig. . Once the coherent collections of SG rules are identified, the overlap between SG elements can be used to assess their similarity. A high similarity between SG rules might indicate that the rules are redundant. Thus, the similarity analysis can be used to choose the SG rules that should be considered for further investigation or exploitation. [Figure caption fragment: the candidate descriptive parameters include the electron affinity and ionization potential of isolated B species (EA_B and IP_B, respectively), the expected oxidation state of A (n_A), the equilibrium lattice constant (a_0), and the cohesive energy (E_0).]
Notably, the cumulative Jensen-Shannon divergence (D_JS) between the distribution of bulk moduli in the SG and in the entire dataset is used as the quality function in the example of Fig. . D_JS assumes small values for similar distributions and increases as the distribution of target values in the SG becomes, e.g., shifted or narrower with respect to the distribution of the entire dataset. Crucially, D_JS does not assume that a single summary-statistics value represents the distributions. Divergence-based utility functions addressing, e.g., high or low target values will thus be an important advance. We note that the utility function might also incorporate information on multiple targets or physical constraints that are specific to the scientific question being addressed. However, in order to ensure that the training data are representative of the relevant materials space one would like to cover, the iterative incorporation of new data points and training of SG rules in an active-learning fashion might be required.
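To make the divergence-based quality function concrete, the following minimal Python sketch compares the target-value distribution of a subgroup with that of the entire dataset. It uses the ordinary (histogram-based) Jensen-Shannon divergence from SciPy as a stand-in for the cumulative D_JS described above; all data values and bin settings are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)

# Hypothetical target values: bulk moduli (GPa) of the full dataset and of a subgroup.
y_all = rng.normal(loc=100.0, scale=30.0, size=5000)
y_sg = rng.normal(loc=160.0, scale=10.0, size=300)  # shifted and narrower

# Histogram both distributions on a common grid of target values.
bins = np.linspace(y_all.min(), y_all.max(), 60)
p_sg, _ = np.histogram(y_sg, bins=bins, density=True)
p_all, _ = np.histogram(y_all, bins=bins, density=True)

# SciPy returns the Jensen-Shannon *distance* (the square root of the divergence).
d_js = jensenshannon(p_sg, p_all) ** 2
print(f"divergence-based quality of the subgroup: {d_js:.3f}")
```

In an actual SGD run, such a divergence score would be combined with the subgroup's coverage to rank candidate rules along the generality-exceptionality trade-off.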
SGD can accelerate the identification of exceptional materials that may be overlooked by global AI models because it focuses on local descriptions. However, further developments are required in order to translate the SGD concept to the typical scenario of materials science, where datasets might be unbalanced or not representative of the whole materials space, and where the most important descriptive parameters are unknown. The multi-objective perspective introduced in this contribution provides an efficient framework for dealing with the compromise between generality and exceptionality in SGD. The combination of this strategy with efficient algorithms for SG search and with a systematic incorporation of new data points to better cover the materials space will further advance the AI-driven discovery of materials.
Modern high-performance computing (HPC) systems are evolving towards greater heterogeneity and diversification. The heterogeneity is due to the use of specialized processing units for specific tasks, nowadays with a strong (commercial) focus on AI-specific algorithms. This strategy, led by companies like Nvidia with their (general-purpose) GPUs and tools like CUDA, is driven by the need to enhance computational performance while containing electrical-power consumption and total cost. Present-day exascale and pre-exascale systems commonly integrate GPUs with CPUs of different architectures and vendors. Additionally, alternative accelerators like tensor processing units (TPUs), neural processing units (NPUs), field-programmable gate arrays (FPGAs), and emerging technologies like neuromorphic and quantum processors add to the array of high-performance computing options. These will further contribute to the heterogeneity and diversification of high-performance computing but have not yet broken into scientific computing. Except for quantum processors, these accelerators adhere to classical architectures characterised by varying levels of parallelism.
To tap the power of accelerators, AI codes must incorporate efficient internode communication schemes (like the well-established MPI) and align with the programming models associated with the available accelerators. Examples include CUDA for Nvidia GPUs, ROCm for AMD GPUs, or SYCL/DPC++ for Intel GPUs. Neural-network-based AI codes often rely on the availability and development of frameworks such as PyTorch and TensorFlow, where the developers of these frameworks take on the burden of adapting the framework to accelerators. For instance, PyTorch provides versions of its framework with support for CUDA or ROCm backends. However, not all AI methods can be seamlessly translated into a neural-network representation, and not all applications are well suited for neural networks. Consequently, significant adaptation is required, leading to limited accelerator support. For example, the widely used decision-tree-based AI library XGBoost offers a CUDA version but is still lacking a ROCm equivalent.
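As a small illustration of how such frameworks hide the accelerator backend from the application, the hedged sketch below selects a device at run time; to our knowledge, ROCm builds of PyTorch expose the same torch.cuda interface as CUDA builds, so the identical script can run on Nvidia or AMD GPUs or fall back to the CPU. The model and tensor shapes are arbitrary examples.

```python
import torch

# Select whichever accelerator the installed PyTorch build supports;
# ROCm builds reuse the torch.cuda namespace, so the same check covers AMD GPUs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(16, 1).to(device)   # arbitrary toy model
x = torch.randn(8, 16, device=device)       # arbitrary toy batch
print(model(x).shape, "computed on", device)
```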
The current challenge involves developing performance-portable and maintainable code for AI methods, in general, on HPC systems. This task will become even more challenging with the increasing heterogeneity of HPC systems. In a typical HPC system, internode communication is necessary, and the message passing interface (MPI) has proven to be a flexible and effective solution for this. The so-called MPI+X paradigm combines MPI with intra-node parallelization models and/or accelerator offloading models (X). The choice of accelerator offloading model is largely determined by the specific accelerators in use, together with problem requirements and personal taste.
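The sketch below illustrates the MPI part of the MPI+X paradigm with mpi4py: each rank computes a partial result that is then reduced across ranks/nodes, while the "X" (threading or accelerator offloading) would live inside the per-rank work. The workload is a toy example, not taken from any code discussed here.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# "MPI" part: each rank handles its own slice of a toy workload. The "X" part
# (OpenMP threads or an accelerator offloading model) would live inside this work.
local_sum = sum(i * i for i in range(rank, 10_000_000, size))

# Combine the partial results across all ranks/nodes.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("global result:", total)
```

Such a script would be launched with, e.g., `mpirun -n 4 python script.py`.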
We summarise various strategies that have been developed to address this challenge in Figure . For offloading work onto an accelerator, the most straightforward approach would be to write the algorithms with accelerator-specific interfaces such as CUDA. While this in principle allows one to tap the full (performance) capabilities, these interfaces are limited to specific accelerators. Given the abundance of existing CUDA code in scientific computing, AMD and Intel have introduced tools to facilitate the translation of such code into their HIP/ROCm and SYCL/DPC++ languages, whose semantics more or less resemble those of CUDA. Moreover, the HIP and SYCL programming models even claim some universality by enabling code execution beyond AMD or Intel GPUs, respectively. However, the viability and broader adoption of these comparably recent approaches remain to be demonstrated.
An alternative approach involves the utilisation of architecture-independent, and typically more abstract, programming models, which come in various forms. One category employs compiler directives to manage loop parallelization and data management; examples include OpenMP and OpenACC. Programming with these directives aims at a single codebase compatible with different accelerators. Directive-based approaches can also facilitate the reuse of existing CPU-based code and enable an incremental code-porting workflow by successively "offloading" performance-critical parts of the code. Success stories of adapting these models have been reported.
Another approach relies on C++ portability frameworks such as Kokkos and RAJA. They provide high-level parallel abstractions, such as parallel implementations of the traditional "for", "reduce", and "scan" operations, which the framework maps to specific hardware backends that use the corresponding platform-native programming models. These frameworks may, in addition, serve as forerunners for corresponding extensions to be added to the C++ standard.
Figure: Strategies to port codes onto diversified and heterogeneous high-performance computers. Translators convert from one programming model to another, directives are compiler instructions that dictate how a piece of code should be compiled, and parallel abstractions define how a computational workload may be executed in parallel; the library then maps the abstractions to GPUs.
As an example of how the code-portability challenge can be met for a custom-developed AI application outside deep learning, we outline the porting of an implementation of the sure-independence-screening-and-sparsifying-operator (SISSO) approach to GPUs using Kokkos. SISSO is a combination of symbolic regression and compressed sensing. It first generates a list of up to trillions of analytical expressions from an initial set of primary features and mathematical operators. It then uses an ℓ0-regularised least-squares regression to find the best low-dimensional linear model from the generated expressions. In preparation for (pre-)exascale computing, we converted the most computationally intensive components of SISSO, i.e., expression generation and ℓ0 regularisation, from our initial MPI+OpenMP code to an MPI+OpenMP+Kokkos implementation, in order to demonstrate scalability and portability on exascale-ready HPC platforms.
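To convey the two computational kernels named above, the following toy Python sketch mimics the SISSO workflow on a vanishingly small scale: it generates candidate expressions by combining primary features with simple operators and then performs an exhaustive ℓ0-type search over low-dimensional least-squares models. All data, operators, and dimensions are illustrative; the production code handles up to trillions of expressions and runs in C++ with MPI, OpenMP, and Kokkos.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy primary features and target property (all values are hypothetical).
X = {"a": rng.normal(size=50), "b": rng.uniform(1.0, 2.0, size=50), "c": rng.normal(size=50)}
y = X["a"] / X["b"] + 0.01 * rng.normal(size=50)

# Step 1: expression generation -- combine primary features with simple operators.
ops = {"+": np.add, "-": np.subtract, "*": np.multiply, "/": np.divide}
features = dict(X)
for (na, fa), (nb, fb) in itertools.product(X.items(), X.items()):
    for sym, op in ops.items():
        features[f"({na}{sym}{nb})"] = op(fa, fb)

# Step 2: l0-type search -- exhaustively test all 1D and 2D least-squares models
# built from the generated expressions and keep the best one.
best = (np.inf, None)
for dim in (1, 2):
    for subset in itertools.combinations(features, dim):
        A = np.column_stack([features[n] for n in subset] + [np.ones_like(y)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rmse = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
        if rmse < best[0]:
            best = (rmse, subset)

print("best descriptor:", best[1], "with RMSE", round(best[0], 4))
```

The exhaustive subset search is what makes the ℓ0 step expensive and, together with expression generation, is why these two components were the ones offloaded to accelerators.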
Throughout the development, we refactored the data structures to suit the access patterns of accelerators and carefully optimised the memory migration between host and device. This resulted in an approximately tenfold speedup from the GPUs for two generations of Nvidia GPUs on a test problem with ~60 billion generated features and ~36 billion least-squares regression problems (see Figure ). The code also scales to at least 64 nodes, see Figure . We expect that scaling to much higher node counts can be achieved by increasing the size of the training dataset. Notably, the same code also runs on AMD Instinct MI200 GPUs with a similar speedup without any code modifications, except for compilation settings. Given that the Kokkos framework supports backends for CUDA, HIP, SYCL, and OpenMP, we expect that our code can also be smoothly ported to other accelerators. Since Kokkos is developed and maintained with strong commitment by the US DOE laboratories, we expect it to receive continued support and to be extended to future HPC hardware.
One key question when using an abstraction framework is how close its performance comes to that of the "native", i.e., architecture-specific, programming models. In our case, we compared the performance of our batched least-squares-regression algorithm (for ℓ0 regularization) to a native CUDA implementation co-developed with Nvidia engineers. This new CUDA code is about twice as fast as the Kokkos version. However, it is worth noting that Kokkos' continuous development is promising. For instance, during our development, transitioning from Kokkos version 3 to version 4 resulted in a 10% speedup without requiring any modifications to our application code.
The growing diversity and heterogeneity in (pre-)exascale high-performance computing poses significant challenges to software developers, including performance portability and code maintainability. To tackle these issues, developers have adopted various strategies, such as code duplication (typically abstracted internally by some application-specific interfaces), (semi-)automatic code translation, directive-based portability models, and high-level abstraction frameworks. For our SISSO++ code, an AI application not readily amenable to well-established (and portable) AI frameworks such as TensorFlow, we opted for the MPI+X paradigm, which is well established in HPC, specifically using MPI+OpenMP+Kokkos. The usage of the Kokkos abstraction framework enhances both code performance and portability, and it also helps reduce code-maintenance burdens. The Kokkos framework is also expected to pave the way for adopting parallel abstraction concepts in future C++ language standards. Given the generality of the Kokkos framework, already proven for SISSO++ by a seamless transition across two generations of Nvidia GPUs, we anticipate that our SISSO++ code will also adapt easily to future HPC architectures. The porting strategy outlined here can serve as an example for other non-neural-network-based AI code-development efforts.
Materials design typically targets an application that requires the synthesis of a material which is characterised by measurable and reliable properties and functions that are maintained during its use. Inexpensive and abundant raw materials, reproducibility, and scalability are decisive factors for success. The relationships between the structure and the function of a material are usually complex and intricate and they prevent a strictly in-silico design for realistic conditions. Thus, the experimental input is crucial. Artificial-intelligence (AI) methods have the potential to reduce the significant efforts related to the synthesis and characterization of materials, accelerating materials discovery. However, rigorously conducted experiments that provide consistent training data for AI are indispensable. They directly determine the reliability of generated insights.
The applications of AI in materials science are diverse. 1-2 For example, the optimization of synthesis and functional properties in high-throughput experiments requires mathematical models which are iteratively trained in order to ensure an efficient experimental design. The elucidation of materials structures can be facilitated by AI. Besides, new materials can be predicted via the identification of correlations and patterns in experimental and computational data sets. This leads to a variety of data set structures. The interdisciplinary nature of materials science and the multitude of experimental techniques applied produce a broad spectrum of data formats, all of which can ultimately be traced back to spectroscopic, thermodynamic or kinetic relationships and are already standardised to some extent. Experimental data in materials research are usually not "big data", which places additional demands on the methods of data analysis.
However, if the data become FAIR, 3 i.e., Findable, Accessible, Interoperable, and Reusable, and open, i.e., generally accessible after publication, machines can systematically analyse this information beyond the boundaries of a single laboratory and field of research, learn from it, and develop disruptive solutions. 4 A particularly sustainable generation of insight is achieved through the use of interpretable AI algorithms that uncover descriptors, i.e., correlations between key physical parameters and the material properties and functions.
Predictions could be more reliable if the materials function of interest were determined exclusively by the bulk properties of the material. However, when the material's function is affected, or even governed, by interfacial and kinetic phenomena, such as in the case of batteries, sensors, biophysical applications or catalysis, the relationships between the materials parameters and the function become extremely complex. On the one hand, this is caused by the strong influence of defects and minor impurities. On the other hand, the material properties respond to the fluctuating chemical potential of the environment in which they are used. This gives metadata, such as the sequence of experimental steps and the time frame, a particular importance. 5 In order to make experimental data useful for a digital analysis, the measurements have to follow so-called "standard operating procedures" (SOPs), as is already common practice in some research areas. An important cornerstone for such workflows is the introduction of certified standards that enable the direct evaluation of measured data when they are published together with the results of the standard. Awareness of the need for rigorous work and standardization of experiments has grown considerably in academic research in recent times, and it is reflected in initiatives for standardized measurement procedures and test protocols (Reference 4 and references cited therein).
The currently most common form of publication in scientific journals does not support direct electronic access to the data. The use of natural language processing (NLP) tools is one approach to analyse and understand human language in published articles. 6 These computer-science techniques can help to identify trends, but they do not provide consistent data sets, as data in publications are not presented uniformly, for example often only in the form of graphical representations, and data as well as metadata are not necessarily provided completely.
The most effective solution to enable the use of experimental data in AI is to apply machine-readable SOPs in automated experiments. In this way, standardized and complete data and metadata sets can be generated that can be shared after publication in repositories, as is already widely done in computational materials science and in the synthesis of complex molecules. The latter also requires the development of ontologies. We note that digital SOPs are an important preliminary step for enabling autonomous research by robots in the future. 7
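To illustrate what "machine-readable" could mean in practice, the following minimal Python sketch encodes an SOP step and the resulting measurement record as structured, serialisable objects. The schema, field names, and sample values are entirely hypothetical; they only indicate the kind of metadata (procedure, instrument parameters, provenance) that would make such records directly usable by AI pipelines.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SOPStep:
    """One step of a machine-readable standard operating procedure (hypothetical schema)."""
    name: str
    instrument: str
    parameters: dict = field(default_factory=dict)

@dataclass
class MeasurementRecord:
    """A measurement together with the SOP steps and metadata needed to reuse it."""
    sample_id: str
    sop_steps: list
    data_file: str
    metadata: dict = field(default_factory=dict)

record = MeasurementRecord(
    sample_id="catalyst-001",                      # hypothetical identifier
    sop_steps=[asdict(SOPStep("calcination", "furnace", {"T_C": 400, "t_min": 120}))],
    data_file="raw/xrd_catalyst-001.xy",           # hypothetical file path
    metadata={"operator": "A.B.", "date": "2024-01-15", "benchmark": "included"},
)

# Serialise the record so that machines (and repositories) can parse it directly.
print(json.dumps(asdict(record), indent=2))
```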
The most common AI methods require large amounts of data, and only a small fraction of the available data in materials science meets the quality requirements for data-efficient AI. In a use-case study, 8 we have shown how a "clean" data-centric approach in interfacial catalysis enables the identification of descriptors based on a data set that can be generated in experimental practice with reasonable effort (Figure ). Here the term "clean data" refers to the fact that the considered materials were carefully synthesized, characterized, and tested in catalysis according to SOPs reported in an experimental handbook. 5 Large-scale applications in the field of energy storage, such as water splitting, and the efficient use of resources in the production of consumer goods are generally based on highly complex catalysed reactions at interfaces. The selective oxidation of the short-chain alkanes ethane, propane and n-butane to valuable olefins and oxygenates was chosen as an example of a reaction type that is known for its complicated reaction networks. Control over the selective formation of desired products in this network and the minimization of CO2 formation requires sophisticated catalyst materials and adapted reaction conditions.
Experimental procedures that capture the kinetics of the formation of the active phase from the catalyst precursors have been designed and specified in an SOP. 5 A typical set of twelve chemically and structurally diverse catalyst materials was included in the study, which combines rigorously conducted clean experiments in catalyst synthesis, physicochemical characterization and kinetic evaluation with interpretable AI using the sure-independence-screening-and-sparsifying-operator (SISSO) symbolic-regression approach. 9 Previously obtained empirical findings are correctly reflected by the data analysis, which confirms the value of the data set.
Interpretable AI goes far beyond empirical interpretations. It addresses the full complexity of the dynamically changing material and of the full catalytic process by identifying non-linear property-function relationships described by mathematical equations in which the target catalytic parameters depend on several key physicochemical parameters of the materials, measured in operando and after different stages in the life cycle of the catalyst. These key descriptive parameters, which the AI approach identifies out of the many offered ones, reflect the processes triggering, favouring or hindering the catalytic performance. In analogy to genes in biology, these parameters are called "materials genes" of heterogeneous catalysis, since they describe the catalyst function much as genes relate, for instance, to eye color or to health issues. Thus, these materials genes capture complex relationships. They describe a correlation (with uncertainties), but they do not provide a detailed description of the underlying processes.
Reproducibility is probably the most basic and crucial requirement of materials science. AI is an efficient tool in materials research and development, but its application requires that we change the way we work and deal with data. Complete, uniform and reliable data sets are required that comply with the FAIR principles. These can be obtained by working across laboratories according to standard operating procedures ("handbooks"), which also include the analysis of benchmarks. Important elements for the gradual development of autonomous materials research, 10 in addition to technical progress in robotics, are the use of machine-readable handbooks, automated experiments with standardized data analysis and upload to local data infrastructures as well as the standardized publication of experimental data in overarching open repositories.
Sebastian Kokott 1,2, Andreas Marek 3, Florian Merz 4, Petr Karpov 3, Christian Carbogno 1, Mariana Rossi 5, Markus Rampp 3, Volker Blum 6 and Matthias Scheffler 1
1 The NOMAD Laboratory at the FHI of the Max-Planck-Gesellschaft and IRIS-Adlershof of the Humboldt-Universität zu Berlin, Berlin, Germany
2 Molecular Simulations from First Principles e.V., Berlin, Germany
3 Max Planck Computing and Data Facility, Garching, Germany
4 Lenovo HPC Innovation Center, Stuttgart, Germany
5 MPI for the Structure and Dynamics of Matter, Hamburg, Germany
6 Thomas Lord Department of Mechanical Engineering and Materials Science, Duke University, Durham, USA
The quality of input data is critical for data-driven science. Detailed, high-level (i.e., quantum-many-body-theory-based) simulations, although expensive, can provide immensely valuable data on which other methods can build, if three main issues can be addressed. First, the system size of accurate quantum-mechanical simulations is often restricted by the computational complexity of the underlying simulation algorithms. Second, the accuracy of the predicted data for new complex materials critically depends on the accuracy of the specific physical model chosen to derive quantum-mechanical simulation data, limiting subsequent data-driven models. Third, the number of atomic-scale configurations that must be covered for a statistically sound description grows dramatically with the complexity of a material, necessitating more and faster high-level calculations to provide input data for subsequent, AI-driven research. Simulations of real-world materials require addressing all three points at the same time.
Hybrid density functionals (hybrids) have emerged as a practical reference method for ab initio electronic-structure-based simulations because they resolve several known accuracy issues of lower levels of density-functional theory (DFT) while offering affordable computational cost on current high-performance computers. There are two main computational bottlenecks for atomistic simulations using hybrid DFT: evaluating the non-local exact-exchange contribution and solving a generalized eigenvalue problem (matrix diagonalization). Here, we discuss advances and perspectives for both challenges as recently implemented in the all-electron code FHI-aims.
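For readers less familiar with the second bottleneck, the toy Python sketch below sets up and solves a generalized symmetric eigenvalue problem H c = ε S c of the kind that arises in each self-consistency step; at production scale this is the step handled by massively parallel dense solvers such as ELPA. The matrices here are small random stand-ins, not actual Hamiltonian or overlap matrices.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 200  # toy matrix dimension; production problems reach hundreds of thousands

# Random symmetric "Hamiltonian" H and positive-definite "overlap" S as stand-ins.
A = rng.normal(size=(n, n))
H = 0.5 * (A + A.T)
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)

# Generalized symmetric eigenvalue problem H c = eps S c (dense, O(N^3) scaling).
eigenvalues, eigenvectors = eigh(H, S)
print("lowest five eigenvalues:", np.round(eigenvalues[:5], 3))
```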
The current reach of these methods is documented by run times and scaling of hybrid DFT simulations for several challenging materials, including hybrid organic/inorganic perovskites and organic crystals, with up to 30,000 atoms (50,000 electron pairs) in the simulation cell. Despite such large system sizes, the simulations can be run with moderate computational resources. [Figure caption: run times per self-consistent-field iteration. The HSE06 hybrid functional was used for all simulations. The following systems were simulated (from left to right): phenylethylammonium lead iodide (PEPI) with a defect complex, a 4x4x4 paracetamol supercell, a 15,288-atom Ice XI supercell (including a force evaluation), and a 30,576-atom Ice XI supercell. All calculations were carried out on the Raven HPC cluster at the MPCDF using Intel Xeon IceLake (Platinum 8360Y) nodes with 72 cores per node.]
A resolution-of-identity-based real-space implementation of the exact-exchange algorithm was optimized to allow for much improved exploitation of sparsity and load balancing across tens of thousands of parallel computational tasks. Results show drastically improved memory and runtime performance, scalability, and workload distribution on CPU clusters. The improvements pushed the simulation limits beyond 10,000 atoms, compared to an earlier implementation that reached system sizes of around 1,000 heavy atoms. The new implementation can compute energies, forces, and stress for periodic and non-periodic systems for several flavours of hybrid density functionals. In addition, for materials including heavy elements, perturbative spin-orbit coupling can be combined with the hybrids. Due to its inherent O(N^3) scaling, the solution of the eigenvalue problem becomes the bottleneck of the simulations beyond 10,000 atoms.
The direct eigensolver library ELPA has long offered unrivalled performance for parallel matrix diagonalizations. Extensive profiling, fine tuning, and work on portability were carried out to adapt ELPA to the most current HPC architectures, further reducing the time spent in the diagonalization bottleneck for simulation sizes of up to many thousands of atoms. Key to the future success of ELPA is exploiting the full capabilities of GPU-accelerated high-performance clusters. ELPA already has well-established support for NVIDIA GPUs. Recently, we ported ELPA to AMD GPUs, enabling the solution of a problem with a matrix leading dimension of more than 3 million on 1024 AMD-GPU nodes of the LUMI pre-exascale system at CSC in Finland.
Although the library APIs for AMD and NVIDIA are very similar, we find very different run-time and performance behaviour for the ELPA code. Thus, a new abstraction layer driving the GPU computations within ELPA has been implemented. Below this abstraction layer, the vendor-specific implementations coexist and can be independently developed and optimized. We believe that this very flexible approach facilitates the integration of upcoming new architectures, e.g., Intel GPUs.
Similar GPU strategies will be needed for the exact-exchange algorithm but are not yet exploited, as the porting of CPU code to GPU architectures is not at all straightforward. In the CPU implementation, the inherent sparsity of the real-space approach keeps the size of the matrices used for dense matrix-matrix operations moderate. Thus, with the current algorithm the full capabilities of GPUs cannot be used, and speedups would be limited by communication. An overhaul of the algorithm, together with GPU-specific storage and communication patterns, will be needed to make it amenable to heterogeneous, GPU-accelerated architectures.
The achievements for hybrid DFT simulations demonstrated above are a big success and pave the way to the efficient use of exascale resources in the future. Still, the accuracy of hybrids is limited by construction. The required fraction of exact exchange is an open point. A related question is the treatment of electron correlation, which hybrid density functionals address only insufficiently. Approaches using range-dependent parameters for the fraction of exact exchange, or double hybrids, are a way forward to improve the accuracy of the ab initio model. The GW approach and the CCSD method provide much more accurate access to electronic-structure quantities per se, but the complexity of these methods will limit their application to smaller system sizes for the foreseeable future.
From a technological point of view, we think that sufficiently large memory per node and task will be needed for any enhanced electronic-structure method: usually, non-local operators are evaluated, which requires finding a good balance between communication across nodes and tasks and storing data. Here, the tighter integration of accelerators within the HPC node, as, for example, expected for the upcoming Nvidia (Grace-Hopper) and AMD (MI300) technologies, looks very promising. There are two main hurdles for scientific software developers: library APIs for solving mathematical and physical problems are partially vendor-specific and/or not performance-optimal. Addressing both points increases the reach of scientific code (and in turn reduces the need for code duplication) and will reduce the overall cost of research significantly. A difficult task that remains is the optimization of communication patterns between CPUs and GPUs for specific architectures. Also, new workload-distribution models might be needed to better use all available resources, e.g., to compute with GPUs and CPUs at the same time (right now, CPUs are often idling while the GPUs do the work).
The new exact-exchange algorithm implemented in FHI-aims and the highly optimized ELPA library enable simulations of large system sizes at moderate runtimes. On the one hand, these implementations allow one to increase the statistical sampling needed to address the huge configuration space that comes with large system sizes. On the other hand, the accuracy of hybrid DFT simulations is sufficient for many applications. We believe that, with the aid of future exascale resources in combination with sophisticated data-driven models, hybrid functionals will be established as the default method for DFT simulations of materials. In general, exploiting sparsity is key to low-scaling electronic-structure methods for large-scale simulations. Real-space algorithms using localized wavefunctions are especially well suited. Nevertheless, the data distribution and communication patterns may need architecture-specific optimizations, which complicates software design and code maintenance.
When, at the end of 2014, the NOMAD Repository & Archive went online, it was the first data infrastructure in computational materials science that fulfilled what was later and independently defined by the acronym FAIR (Findable, Accessible, Interoperable, and Reusable). This definition, and the request that scientific data should be FAIR, was introduced in a very general scientific-data context by Wilkinson et al. in 2016. As of today, the NOMAD Repository stores input and output files from more than 50 different atomistic (ab initio and molecular-mechanics) codes and totals more than 13 million entries, uploaded by over 500 international authors from their local storage or from other public databases. The NOMAD Archive stores the same information, but converted, normalized, and characterized by means of a metadata schema, the NOMAD Metainfo, which allows for the labeling of most of the raw data in a code-independent representation. One of the benefits of normalized data is that they are accessible in a format that makes them suitable for direct artificial-intelligence (AI) analysis. NOMAD also offers the AI toolkit, a JupyterHub-based platform for running notebooks on NOMAD servers without the need for any registration or downloaded software. The data-science community has introduced several platforms for performing AI-based analysis of scientific data, typically by providing rich libraries for AI. General-purpose frameworks such as Binder and Google Colab, as well as materials-science-dedicated frameworks such as pyIron, AiidaLab, and MatBench, are the most used by the community. In all these cases, a big effort is devoted to education via online and in-person tutorials. The main specificity of the NOMAD AI toolkit is its connection with the extensive NOMAD Archive. Moreover, together with the NOMAD Oasis, users can work with their private as well as community data within the same software platform and using the same API.
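As a hedged sketch of what programmatic access to such an archive can look like, the snippet below queries entries containing Ti and O via the NOMAD web API using the requests library. The endpoint URL, query fields, and response layout are assumptions based on the public NOMAD v1 API and should be checked against the current API documentation before use.

```python
import requests

# Assumed NOMAD v1 API endpoint and payload shape -- verify against the
# current NOMAD API documentation before relying on this.
url = "https://nomad-lab.eu/prod/v1/api/v1/entries/query"
payload = {
    "query": {"results.material.elements": {"all": ["Ti", "O"]}},
    "pagination": {"page_size": 5},
}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()

# Print the identifiers of the first few matching entries.
for entry in response.json().get("data", []):
    print(entry.get("entry_id"))
```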
Besides providing the framework for performing custom-made AI analyses, the NOMAD AI toolkit provides a set of tutorial notebooks introducing users step by step both to the most popular and widely known AI methodologies, with showcase applications in materials science, and to more advanced ones, i.e., methodologies that have been published in recent years. Due to the very nature of the Jupyter technology, these tutorial notebooks are interactive, in the sense that users can modify lines of code and check the effect of the modifications. Also, the tutorial notebooks have direct access to the whole of the NOMAD data, so that users can apply the learned techniques to new data, including data uploaded by themselves. Importantly, the AI toolkit includes notebooks that present the actual AI software used for producing results in peer-reviewed publications. This feature suggests that scientific reproducibility can reach its full potential, at least for AI analysis tools. For instance, users can re-train AI models with exactly the same set of hyperparameters as used in the original publications, on exactly the same data, including the train/validation/test-set splits, a piece of information that nowadays is not required in peer-reviewed publications. However, such an addition would be scientifically appropriate, as it would directly enable the reproducibility of reported results. The NOMAD AI toolkit enables this important step. As already noted by the proponents of the FAIR principles for scientific software, providing complete information on the algorithms and software used to analyze data is anything but trivial. This is particularly challenging if one wants to provide live software that can be run on demand, mainly because pieces of software, e.g., Python scripts for an AI analysis, require a virtual environment in which the libraries that are used for efficiently performing certain routine tasks are installed. These libraries get repeatedly updated, and unfortunately backward compatibility is not necessarily ensured. This means that the same set of commands that at release time allows one to install and run a piece of software may, at a later point in time, not yield a correct installation. Besides, in case the software is run in a container (as for the NOMAD AI toolkit), the software for the container platform gets updated whenever a new container is created. In other words, special care and planning have to be devoted to maintaining the whole ecosystem of software, so that exactly the same datasets yield, over time, exactly the same AI models and therefore exactly the same predictions on the same test data.
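One small, practical ingredient of such reproducibility is recording the exact library versions an analysis ran with, so that the environment can later be reconstructed or checked. The Python sketch below does this with the standard library; the package names listed are merely examples.

```python
from importlib import metadata

# Example package list -- in practice this would be the libraries imported by the notebook.
packages = ["numpy", "scipy", "scikit-learn"]

environment = {}
for name in packages:
    try:
        environment[name] = metadata.version(name)
    except metadata.PackageNotFoundError:
        environment[name] = "not installed"

# This dictionary can be stored alongside the analysis results as provenance metadata.
print(environment)
```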
Platforms like the NOMAD AI toolkit also foreshadow the scientific-reproducibility utopia. Much of the technology that allows for reaching these goals still needs to be developed, but some important steps have been taken already. First of all, Jupyter notebooks can be uploaded to NOMAD as easily as the data. The upload timestamps and other provenance metadata allow for the unique identification of each analysis script. Furthermore, users are encouraged to provide a rich set of metadata that are made searchable and therefore allow other users to locate the notebooks by, e.g., the model class used for the AI analysis, or by the libraries used, including their versions. In its current state, the NOMAD AI toolkit allows for the findability and interoperability of the AI-analysis software. In fact, a unique container is currently used for all the notebooks, thus allowing for full interoperability among the different AI-analysis tools. The complexity of the maintenance of such an environment rapidly increases with the number of uploaded notebooks, which poses challenges in ensuring that stored notebooks can run over the years and produce the same results. However, each set of obtained results, including all the intermediate results along the analysis workflow, can be stored (according to FAIR principles), and automatic tests could be run to check the conformity of the results produced by the re-trained model with the reference ones. Knowing that some piece of code is, at some point in time, unable to reproduce old results is the necessary condition for trying to fix the code so that it conforms with the reference results. This solution, which requires quite some human effort, introduces a possibly interesting generalization of the idea of reproducibility, which in some sense is a black-box requirement: in each step of the analysis, the same input needs to yield the same output, but the details inside the black box are allowed to change. A radically alternative route is to partly renounce full interoperability among the notebooks and maintain several different containers within the NOMAD AI toolkit. Such an approach would allow for the creation of specific containers that are not updated, thus allowing the software installed therein to remain always executable. Although the tools used in these non-updated containers cannot always be combined with software installed in other containers, they can still be deployed on new data that have been uploaded at a later time.
The introduction and gradual implementation of the FAIR practices for scientific-data management and stewardship revealed that another crucial component of scientific research needs to adopt the FAIR concepts: the scientific software for data production and analysis. As for data, the key point is the reproducibility of research findings, i.e., the practical possibility to re-obtain the same results starting from the same hypotheses (the input settings) and methods.
Clearly, providing only the input data and results in a data archive, even if fully FAIR-data compliant, is not enough for reproducibility if part of the results are obtained in an incompletely documented way and/or via some custom-tailored analysis software that is not properly stored and versioned. The NOMAD AI toolkit already enables re-running AI software on FAIR data for a relatively small set of Jupyter notebooks, at the price of human-intensive maintenance. The grand challenge is to develop a strategy to scale up such maintenance in a (semi-)automatic fashion, so that all AI tools from the community can be preserved according to FAIR practices, fully achieving scientific reproducibility. Clearly, these reproducibility concepts and the use of Jupyter notebooks also imply that newcomers to AI can use the software that already exists in the NOMAD infrastructure, train themselves, and adjust and advance the analysis tools towards their own, different applications.
Training of Deep Learning (DL) models requires a large amount of data in the first place, and the data set must be sufficiently diverse for the network to be transferable such that it produces unbiased predictions. At the same time, the data size needs to be balanced against the cost of its generation. Strategies to deal with scarce-data problems include Transfer Learning (TL), Self-Supervised Learning, Generative Adversarial Networks (GANs), Model Architecture, Physics-Informed Neural Networks, and Deep Synthetic Minority Oversampling techniques, to name a few recent approaches, as pointed out in .
Here, we focus on a specific route to overcome the scarce-training-data bottleneck, namely the generation of random synthetic training data under suitable constraints determined by the physics involved. In our approach we aim at modelling system dynamics by encoding it into a Hamilton matrix for the interaction of (bound) electrons with intense laser light. The latter can be very noisy and fluctuate from shot to shot, as produced by X-ray Free Electron Lasers (XFEL). We vary the elements of the Hamilton matrix randomly about the matrix of an existing model system in one physical dimension (1D), creating synthetic Hamilton matrices (SHMs) for systems that could, but do not necessarily, exist in nature and for which calculations can be done quickly compared to real 3D systems. From the large set of SHMs, augmented with different deterministic realizations of noisy laser pulses, we compute photoelectron spectra to train a fully connected deep neural network (DNN). Figure  shows an application of the DNN (trained with spectra from SHMs) to a real 3D system, for which it predicts, without knowing the system explicitly, what the noisy spectrum would look like if a "clean" (Gaussian) laser pulse had been used. The good agreement with the ground truth demonstrates that the trained DNN can be transferred from 1D to 3D problems and gives confidence in our SHM deep-learning concept (SHM-DL). Very recently, the idea of synthetic data generation based on existing data has been taken up for composite materials, where a limited number of original full-field micro-mechanical simulation data are randomly rotated in physical 3D space to generate additional data to train a recurrent neural network for the non-linear elasto-plastic response of short-fiber-reinforced composites. Similar ideas using TL are being explored in other areas.
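The following toy Python sketch illustrates the core of the SHM idea: a reference Hamilton matrix of a small model system is randomly perturbed, Hermiticity is preserved, and each resulting SHM yields a "spectrum" that could serve as a training input. In the actual work the spectra are photoelectron spectra obtained by propagating the system in a (noisy) laser field; here, eigenvalue spectra and all matrix sizes and scales are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8  # toy size of the Hamilton matrix

# Reference Hamilton matrix of a small 1D model system (placeholder values).
H0 = np.diag(np.linspace(-1.0, 1.0, n)) + 0.05 * (np.eye(n, k=1) + np.eye(n, k=-1))

def synthetic_hamiltonian(H_ref, scale=0.02):
    """Randomly perturb the reference matrix while keeping it Hermitian."""
    dH = rng.normal(scale=scale, size=H_ref.shape)
    return H_ref + 0.5 * (dH + dH.T)

# A batch of SHMs and their eigenvalue "spectra" as stand-ins for training inputs.
spectra = np.array([np.linalg.eigvalsh(synthetic_hamiltonian(H0)) for _ in range(1000)])
print(spectra.shape)  # (1000, 8): one spectrum per synthetic Hamilton matrix
```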
An important problem in the context of spectra generated by XFEL double pulses is the delay between the pulses, which jitters in an unknown way from shot to shot. The SHM-DL approach can extract the time delay of a double pulse from the spectrum it has generated. Importantly, we can sort single-shot noisy spectra according to the time delay of the double pulse with which the spectra were generated. With a second network, the time-delay-sorted pulses, binned over small delay intervals (1 fs), can be purified, as shown in Figure . This constitutes a substantial generalization to predict a hidden parameter (the time delay of the pulses). The task the SHM-DL has successfully completed so far is the mapping of spectra generated by noisy pulses to spectra generated by Gaussian (Fourier-limited) pulses. Can we also predict, via SHM-DL maps, spectra for other clean pulse forms, e.g., for pulse forms that are not even realizable experimentally? This would be very interesting for systems whose response to light cannot be computed (being too complicated) but can be measured, e.g., with noisy pulses as described, since with SHM-DL we do not need to compute the "true" spectrum of the desired system.
The primary vision of the SHM-DL approach is a 21st-century spectroscopy. Applied to molecular rovibrational spectra, for example, it could replace the traditional normal-mode model for the assignment and classification of spectra, leaving it to the trained network to associate appropriate SHMs with the spectrum, thereby classifying it by means much more flexible than traditional, structurally predefined normal modes.
The long-term goal is to develop SHM-DL to the point where it can identify a single SHM (or a small group of SHMs) that describes the system so well, i.e., represents the system, that the time-dependent system evolution in general, as well as other observables, can be computed or predicted from the reconstructed SHM(s). This would constitute a physics-rooted form of generalization which, at the same time, delivers physical insight, as it provides an optimal parametrization of a physical system with a Hamilton matrix of chosen size. First attempts that identify SHMs in relation to two- or multi-dimensional spectroscopy are promising.
Technically, even the SHM-DL approach remains a challenge regarding the computing power needed to numerically produce the spectra (training data) from the SHMs. Hence, (i) a reduction of the required training-data size by better knowledge of the underlying physics is desirable, as well as (ii) a reduction of computational costs by ultra-efficient quantum propagation in time to obtain the spectra. (iii) Furthermore, the computed spectra used as training data must be balanced. For the time being, this is done by simply discarding spectra from the training set that are too close to each other. However, this implies a large waste of computing time. To reduce this waste, several advances are desirable. Firstly, the use of an optimal metric to determine the "distance" between two spectra; here, the Wasserstein metric has recently become popular, as have approximations to it that are computationally cheaper. A more elegant, physics-oriented advance would be to find an approximate inversion of the SHM-to-spectrum map, or any other way that allows us to shift the balancing of the spectra to suitable choices of the SHMs.
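To make the balancing step concrete, the hedged Python sketch below discards candidate spectra that lie too close, in Wasserstein distance, to spectra already accepted into the training set; the spectra, energy grid, and threshold are synthetic placeholders rather than values from the actual workflow.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
energies = np.linspace(0.0, 10.0, 200)          # toy photoelectron-energy grid

# Toy "spectra": normalized intensity distributions over the energy grid.
spectra = rng.random((500, energies.size))
spectra /= spectra.sum(axis=1, keepdims=True)

# Accept a spectrum only if it is sufficiently far, in Wasserstein distance,
# from every spectrum already kept for training.
threshold = 0.02  # placeholder value
kept = []
for s in spectra:
    if all(wasserstein_distance(energies, energies, s, k) > threshold for k in kept):
        kept.append(s)

print(f"kept {len(kept)} of {len(spectra)} candidate spectra")
```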
Thinking ahead, the idea of SHMs could be realized not with DNNs but with other DL approaches. Most promising are GANs, or variants thereof, where the relevant SHM is constructed by the GAN successively from a random one, with computational costs eventually reduced compared to the present SHM-DL sampling approach. Moreover, the GAN approach would directly predict an SHM that describes the system's coupling to light.
We have introduced the idea of synthetic Hamilton matrices (SHMs), random representations of the dynamics of systems coupled to light, which could, but do not necessarily, exist in nature. This approach enables efficient sampling of the training space by solving the dynamics with SHMs, which incorporate sufficiently generic features to be transferable to real systems. They serve to augment training data for DL with DNNs. We demonstrated that this SHM-DL approach works by purifying photoelectron spectra from noisy pulses and by identifying pulse delays which jitter in an unknown manner, as supplied by X-ray free-electron lasers. The approach is physics-oriented and therefore promises physical insight beyond the prediction of spectra, through the DL-based identification of the relevant Hamilton matrix from the spectrum of a system unknown to the DNN.
Understanding the structure-property relations of materials, and optimizing chemical synthesis or device manufacturing processes requires integrating multimodal datasets from both theory and experiments that often encompass spatial and temporal dependence. The explicit spatiotemporal characteristics may be exploited in model-building for data integration. Historically, spatial and spatiotemporal models were largely developed in the contexts of geoinformatics, biostatistics, and quantitative ecology, many of which remain underappreciated by the materials science community. In these models, the temporal and spatial subsystems are typically considered in 1D and 2D/3D, respectively. Spatiotemporal models describe their subsystems jointly to capture the interactions through covariance functions or dynamical processes derived from physical knowledge . They are structured and interpretable and are considerably more tractable than first-principles methods.
Random fields (RFs) and Gaussian processes (GPs), the latter also known as kriging, are two established categories of models designed for spatial and spatiotemporal data. RFs already have established use in the statistical modeling of microstructured materials, while GPs were invented in mining engineering and are a classic example of surrogate models. We discuss here three diverse examples from materials data science that indicate their broad applicability and utility. (i) In metal additive manufacturing, Saunders et al. combined three GPs with distinct characteristics to model the pairwise relationships between materials microstructure, melt-pool geometry, and mechanical property obtained from multiphysics simulation, all of which are also time-dependent. (ii) In photoemission spectroscopy, Xian and Stimper et al. constructed a Markov RF with nearest-neighbor interaction and transformed the band-dispersion reconstruction problem into a classification problem. The coordinates in their problem are the two momenta and the energy of photoelectrons; the use of pre-computed energy bands from electronic-structure theory provides an effective initialization. (iii) In combinatorial materials screening, Kusne et al. constructed a GP in the chemical compositional space to guide the search for the optimal stoichiometry within a family of ternary phase-change materials. Their algorithm was integrated into a synchrotron beamline and may be run in a closed loop driven by active learning.
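The hedged sketch below illustrates the flavour of example (iii): a GP surrogate is fitted to a handful of (composition, property) pairs, and the posterior uncertainty is used to pick the next composition to measure, which is the essence of an active-learning loop. The data, kernel choice, and one-dimensional composition axis are toy assumptions, not details of the published work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Toy measurements: composition fraction x in [0, 1] and a hypothetical property.
x = rng.uniform(0.0, 1.0, size=(20, 1))
y = np.sin(3.0 * x[:, 0]) + 0.05 * rng.normal(size=20)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
).fit(x, y)

# The posterior uncertainty tells us where a new synthesis/measurement is most informative.
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
print("next composition to probe:", float(grid[np.argmax(std)][0]))
```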
One defining characteristic of materials-science data is the abundance of data types, from videos to images to atomic structures. Spatiotemporal models, besides the classic examples like RFs and GPs, may also take the form of point processes, state-space models, and diffusion processes, which are not yet used for data integration but have their respective benefits for representing specific data types. Besides coordinates with concrete physical meaning, one could also consider direct spatial or dynamical models of the latent space in data integration, as it is often more robust to noise and to dimensional-scaling artifacts, especially for multiple data modalities. This makes the problem mapping from data type to model category, and the subsequent model specification, the primary challenges.
The three examples in the previous subsection illustrate that building spatial and spatiotemporal models is not limited to the physical dimensions attached to their original meaning. The straightforward way to carry out the problem mapping is to first identify the data types related to a particular problem, then consider the data types native to each model category and find the match. For example, point-process models would be suitable for modeling the transport of point defects because of their sparse distribution; a toy illustration is sketched below.
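As a minimal, hypothetical illustration of that mapping, the following Python snippet samples a homogeneous Poisson point process, a common starting point for sparsely distributed events such as point defects on a sample region; the intensity and domain are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Homogeneous Poisson point process: sparse "defects" on a 1 x 1 sample region.
intensity = 50.0                      # expected number of defects per unit area
n_defects = rng.poisson(intensity)    # random number of defects for this realization
positions = rng.uniform(0.0, 1.0, size=(n_defects, 2))

print(n_defects, "defects; first three positions:\n", positions[:3])
```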
Secondly, we should pay special attention to the data quality in the subsystems to be integrated, including resolution, unit size (such as the pixel size for image data), missingness, structuredness, and fidelity (such as the noise level). Many of these problems are not yet formally addressed, thereby motivating further research on a case-by-case basis guided by domain knowledge. For example, data resolution and fidelity affect the choice of integration ordering, i.e., from high to low or in reverse. For experimental data, the unit size is usually not the same as the resolution because of blurring introduced by the instrument response. Thirdly, we should consider the scalability of the model during development, which may go unnoticed until the model is applied at scale. [Figure caption: illustrations of spatial models of (left) photoemission data in the energy-momentum coordinates using a Markov random field and (right) combinatorial materials-screening data in the chemical compositional space using a Gaussian process.]
The two main paradigms in materials science that benefit from advancements in spatiotemporal models are: (i) Self-driving (or autonomous) laboratories. They deploy robots and machine-learning-driven sequential decision-making from streaming data to search through high-dimensional parameter spaces (such as process, composition, and property parameters) for materials optimization. A growing number of them are installed at large-scale research facilities, such as X-ray or neutron sources, or in regular research institutions for organic and inorganic synthesis. (ii) Combined large-scale atomistic simulation and video-mode recording of time-resolved experiments. Here, both the simulation and the data analysis may be powered by machine-learning algorithms, while data integration between the two modalities through a spatiotemporal model is needed to obtain experimentally validated physical parameters. Both paradigms will benefit from the following developments:
From the model-development side, accurately accounting for long-range dependence (LRD) in both spatial and temporal dimensions is one of the crucial yet unmet challenges. LRD manifests itself in a slowly decaying dependence structure, such that the Markov assumption is no longer a valid approximation. Current approaches using deep state-space models are limited to video-frame classification and generation; further improvements in both spatial and temporal LRD, in computational efficiency, and in the accommodation of graph-structured data will be needed to meet the demands of materials data integration.
From the data-engineering side, the data-integration process often involves the comparison of metadata from two or more sets of measurements or calculations, which requires that the data formats be interoperable. Systematic documentation of metadata is crucial for successful data-integration projects and now lies at center stage of the FAIR principles. For materials-optimization platforms that depend on streaming data, the development of automated (meta)data-logging systems that include anomaly and distribution-shift detection is essential for the quality control of data acquisition. It will also pave the way for efficient data integration and enable online search and process optimization.
Spatiotemporal models have demonstrated promising outcomes in integrating data from multiple sources and guiding scientific discovery. The future of spatiotemporal models for materials data science should explore the interplay between the domain knowledge used in problem mapping and model specification, to ensure a faithful representation of the problem context and to achieve the desired interpretability and performance.
Soft matter is a sub-class of condensed matter that comprises systems with a characteristic energy on par with thermal energy at room temperature, k_B T_room (about 2.5 × 10^-2 eV at T = 300 K). The low energy gives rise to significant conformational (intra-molecular) flexibility, leading to the spontaneous self-assembly of supramolecular mesoscopic structures. Relevant systems include polymers, colloids, and complex fluids, for which soft-matter physics has provided a foundational understanding. Soft matter offers a slew of modern-day applications, e.g., food products and rubbers for automotive, electronic, or medical applications. This makes the discipline both scientifically and technologically highly relevant.
A central aspect is the relevance of multiple scales: phenomena occur at various length- and timescales, some of which decouple. This simplifies the tackling of complex systems: one can build simpler models and focus solely on the relevant degrees of freedom. Scale separation has its roots in renormalization-group theory, with significant implications for various aspects of theory (e.g., scaling concepts in polymer physics) as well as computer simulations (i.e., multiscale modelling). Figure  illustrates the benefits of multiscale modelling for two applications: high-throughput screening of drug-membrane permeability, and a hierarchical description of polymeric organic electronics, where charges are transported primarily along the backbone of the chains while the aliphatic side chains are needed to process the material.
Soft-matter science has gone through a substantial evolution in the last half century. In polymer physics, experiments and theory have worked hand in hand from early on to measure coveted critical exponents and link them to general statistical-mechanics theory. Computer simulations have played an increasingly important role: they offer invaluable microscopic detail and reach ever-growing system sizes , . They combine basic generic concepts with specific material properties. In the last decade, data-driven methods, and more recently machine learning (ML), have become increasingly popular in soft matter. They offer an inductive approach to help bridge the scales and, more broadly, to solve complex structure-property relationships.
Though the penetration of ML in soft matter has been lagging behind that in hard condensed matter, recent developments show that the outstanding challenges posed by conformational flexibility (i.e., the role of entropy) are increasingly being addressed. As in other fields of physics, chemistry, and materials science, the pursuit of stronger inductive bias (i.e., building physics into the model) systematically helps build better models in an area that is notoriously scarce in data, whether experimental or from computer simulations. The continued development of ML techniques for soft-matter physics, and the cross-penetration of ML with multiscale modelling, is helping push soft-matter physics toward higher-precision predictive modelling, soft-materials design and optimization, and reproducing entire experiments on the computer .
1. The foremost challenge is tackling the "black box" nature of complex machine-learning models such as deep neural networks. Why does a model make certain decisions? To this end, interpretability and explainability are paramount. Important developments have been made in the direction of symbolic regression, thereby discovering mathematical equations governing the complex phenomena characteristic of soft-matter systems. Still, more effort is needed to gather further insight and intuition into the underlying physics.
2. What makes soft-matter systems fascinating is also what makes them challenging: their multiscale nature. The aggregate effect of many small parts often sums up to large-scale supramolecular behavior; learning such emergent phenomena effectively, and generalizing adequately, remains an outstanding challenge for ML models. This is the main reason why computer simulations remain essential and cannot easily be replaced by ML models alone. Looking to the future, the fusion of ML with physics-based simulation methods (e.g., molecular dynamics or Monte Carlo methods) is expected to have a strong impact.
3. Furthermore, navigating non-equilibrium dynamics stands as a colossal challenge. Almost all soft-matter systems, including all of life, exist far from equilibrium. Worse, even systems that appear to be in equilibrium typically depend on non-equilibrium effects through their processing: the mere preparation (e.g., synthesis and subsequent treatment) impacts the final product . The absence of a well-established theory of non-equilibrium statistical mechanics can be an opportunity for inductive methods.
Compared to hard condensed matter, soft matter lags behind in terms of ML integration, in large part due to the need to address the associated conformational flexibility. One outstanding challenge lies at the level of system representation, i.e., how to encode the fluctuating system configuration as input to an ML model. Atomic representations developed for electronic properties have focused on single configurations (e.g., ). Here, instead, observables are averaged over a typically very broad Boltzmann distribution of configurations. Much less work exists on ensemble-averaged ML representations, though first ideas have been proposed , .
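As a minimal sketch of what an ensemble-averaged representation can look like (plain Python/NumPy; the fingerprints and energies below are random placeholders, and the explicit Boltzmann reweighting assumes configurations drawn from a uniform or biased sampler, whereas frames from an equilibrium trajectory would simply be averaged with uniform weights):

```python
import numpy as np

def boltzmann_averaged_descriptor(descriptors, energies, temperature_K=300.0):
    """Ensemble-average a per-configuration descriptor with Boltzmann weights.

    descriptors : (n_configs, n_features) per-configuration representations
                  (e.g., structural fingerprints).
    energies    : (n_configs,) potential energies in eV.
    Returns the weighted mean descriptor of shape (n_features,).
    """
    kB = 8.617e-5  # eV / K
    beta = 1.0 / (kB * temperature_K)
    e = np.asarray(energies, dtype=float)
    # Subtract the minimum energy for numerical stability before exponentiating.
    logw = -beta * (e - e.min())
    w = np.exp(logw - np.logaddexp.reduce(logw))   # normalized Boltzmann weights
    return w @ np.asarray(descriptors, dtype=float)

# Illustrative use on synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                  # hypothetical per-frame fingerprints
E = rng.normal(loc=0.0, scale=0.05, size=1000)   # hypothetical energies in eV
feature = boltzmann_averaged_descriptor(X, E, temperature_K=300.0)
print(feature.shape)   # (16,) -> one ensemble-averaged input vector for an ML model
```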
Capturing multiscale phenomena lies at the heart of soft-matter physics, from microscopic molecular architecture to mesoscopic structure to macroscopic behavior. The limited generalizability of ML models strongly constrains the current prospects of replacing physics-based models. It is not clear how extensive the training of an ML model ought to be to reproduce emergent phenomena, such as the self-assembly of soap bubbles from amphiphilic molecules. Coarse-grained modelling has been at the forefront of soft-matter simulations: it exploits scale separation to focus on the most relevant degrees of freedom. Advances in combining coarse-grained modelling with ML are key to further developing data-driven soft-matter simulations. Much current work focuses on ML-based coarse-grained potentials , .
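One widely used route to such ML-based coarse-grained potentials is force matching, where the gradient of a learned coarse-grained energy is fitted to forces mapped down from all-atom reference data. The sketch below (PyTorch; the pairwise network, the bead coordinates, and the reference forces are placeholder assumptions, not a production coarse-graining workflow) illustrates a single training step.

```python
import torch

class PairwiseCGPotential(torch.nn.Module):
    """Toy coarse-grained energy: a small network acting on all pair distances."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        # coords: (n_beads, 3) coarse-grained coordinates of one frame.
        n = coords.shape[0]
        iu = torch.triu_indices(n, n, offset=1)
        diff = coords[iu[0]] - coords[iu[1]]          # (n_pairs, 3)
        pair_d = diff.norm(dim=-1, keepdim=True)      # (n_pairs, 1)
        return self.net(pair_d).sum()                 # scalar CG energy

def force_matching_loss(model, coords, ref_forces):
    """Mean-squared error between -dE/dR and the mapped reference forces."""
    coords = coords.clone().requires_grad_(True)
    energy = model(coords)
    forces = -torch.autograd.grad(energy, coords, create_graph=True)[0]
    return ((forces - ref_forces) ** 2).mean()

# Illustrative training step on random stand-in data.
model = PairwiseCGPotential()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.rand(20, 3) * 2.0        # hypothetical CG bead positions
ref_forces = torch.randn(20, 3)         # hypothetical mapped atomistic forces
loss = force_matching_loss(model, coords, ref_forces)
loss.backward()
opt.step()
print(float(loss))
```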
It is difficult to overstate the impact of first theory, and later computer simulations, on our understanding of soft matter. Bringing soft matter to the fourth paradigm of science (i.e., data-driven methods) will require tackling several outstanding challenges. The ongoing developments in machine learning will hopefully continue to evolve naturally from hard condensed matter to soft matter, thereby addressing the need to model configurational entropy. We foresee that overcoming these technical hurdles may usher soft matter into a new era, where poor scale separation can be efficiently addressed and insight can be gained into phenomena that are too complex for traditional methods.
Mathematical modeling plays a pivotal role in the study of continuum mechanics and material design, offering profound insights into material behaviors and microstructures, which in turn support and guide material optimization and design. Typically, this modeling process involves formulating partial differential equations (PDEs) based on fundamental physical principles such as mass and energy conservation as well as force equilibrium. These PDEs, for given initial and boundary conditions, are subsequently solved using numerical methods, with finite-element and spectral methods being popular choices. Unfortunately, these traditional numerical techniques are computationally very costly, a challenge that becomes particularly pronounced in design studies that require a multitude of simulations under varying configurations. To address this computational burden and streamline the design cycle, there is a pressing need to develop surrogate models that can replace the traditional simulations based on finite-element, spectral, or finite-volume techniques. These surrogate models are particularly valuable during the design phase, offering a more computationally tractable solution.
The use of artificial neural networks (ANNs) in surrogate modeling, driven by advances in machine learning and deep learning, has therefore become a field of growing interest. While neural-network-based material modeling can be traced back to , it is in the last decade, with the rapid progress in deep learning and the availability of powerful hardware, that the development of surrogate models using ANNs has surged and continues to expand. The data utilized for constructing these surrogate models can comprise a combination of experimental data, empirical knowledge, and synthetic data generated by numerical solvers. Within the realm of continuum mechanics, numerous methodologies have emerged for building surrogate models using ANNs. For example, in , a conditional generative adversarial network has been employed to predict stress and strain fields for a given microstructure geometry; employed a convolutional neural network (CNN) to estimate the von Mises stress within microstructures consisting of isotropic elastic and elastoplastic grains, with extensions to heterogeneous periodic microstructures containing inelastic crystal grains in , as depicted in Fig. . Furthermore, explores the application of the Fourier Neural Operator (FNO) for the surrogate modeling of stress and strain in heterogeneous composites. The figure is modified from and with permission.
Often, surrogate modeling is conducted purely based on large amounts of data, mostly by training ANNs with them. However, within the context of continuum (micro-)mechanics, there exists a wealth of established physical and empirical knowledge that ANN-based surrogate methods have yet to fully incorporate. In the following, we discuss the notable challenges in bridging this gap for surrogate modeling in continuum mechanics.
a) Physics-enhanced surrogate modeling: Incorporating physics-based knowledge into surrogate modeling is an active research field. For instance, in and , physics-based knowledge, including the underlying PDEs and empirical knowledge, has been leveraged to introduce biases into ANNs, resulting in outputs that approximate the underlying physics, such as enforcing divergence-free conditions as well as mass and energy conservation. However, these approaches primarily aim to satisfy the physical laws in a weak sense. Hence, the output of the trained ANNs may not be fully physically meaningful, particularly at a local scale. We therefore need to explore neural network architectures that inherently produce outputs satisfying the physics in a strong sense, with a particular focus on critical properties such as divergence-free behavior as well as mass and energy conservation, both on a global and on a local scale.
b) Stable dynamic prediction: Surrogate modeling has been used for predicting time-dependent stress and strain fields of heterogeneous solids subject to homogeneous steady-state external loading conditions. Within this framework, these surrogate models can be regarded as dynamical systems. Since they are meant to emulate stable physical behavior that obeys the basic rules of continuum mechanics, it is essential that they possess inherent stability, i.e., that their predictions also converge and remain consistently bounded. Consequently, ANN-based surrogate models should be designed with these stability properties inherently embedded.
c) Learning low-dimensional latent representations: Often, the field of interest in continuum mechanics lives in two- or three-dimensional real space, ideally also informed by the solid's crystal and phase state, which adds further dimensions and anisotropy features to the problem to be solved. Consequently, the data obtained for these scenarios are high-dimensional, especially when dealing with high-resolution spatial fields. However, such high-dimensional solutions can often be accurately represented in a low-dimensional latent space. The construction of this low-dimensional space can further be guided by constraints designed to simplify the dynamics and the engineering-design process. For instance, the latent space can be constructed such that the system dynamics evolve in a nearly linear fashion, aligning with principles like Koopman theory and dimensionality-reduction techniques; a minimal sketch of this idea is given below.
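The following minimal sketch (PyTorch; all layer sizes, the flattened 64×64 field snapshots, and the equal loss weights are placeholder assumptions rather than a validated architecture) illustrates item (c): an autoencoder whose latent state is advanced by a single linear, Koopman-like operator.

```python
import torch

class LinearLatentDynamics(torch.nn.Module):
    """Autoencoder with a linear (Koopman-like) operator for the latent dynamics."""
    def __init__(self, n_field=4096, n_latent=16):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(n_field, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, n_latent),
        )
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(n_latent, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, n_field),
        )
        # Linear operator advancing the latent state by one time step.
        self.K = torch.nn.Linear(n_latent, n_latent, bias=False)

    def loss(self, x_t, x_next):
        z_t, z_next = self.encoder(x_t), self.encoder(x_next)
        recon = ((self.decoder(z_t) - x_t) ** 2).mean()            # autoencoding
        latent = ((self.K(z_t) - z_next) ** 2).mean()              # linear latent dynamics
        pred = ((self.decoder(self.K(z_t)) - x_next) ** 2).mean()  # one-step prediction
        return recon + latent + pred

# Illustrative training step on random stand-in "field snapshots".
model = LinearLatentDynamics()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t = torch.randn(32, 4096)       # flattened 64x64 field at time t (placeholder)
x_next = torch.randn(32, 4096)    # field at time t+1 (placeholder)
loss = model.loss(x_t, x_next)
loss.backward()
opt.step()
```

Penalizing both the latent mismatch and the decoded one-step prediction encourages dynamics that are approximately linear in the latent space, which simplifies stability analysis and long-horizon rollout.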
In our pursuit of designing neural network architectures that inherently adhere to physical properties (e.g., divergence-free or energy-preserving behavior), we seek to exploit fundamental vector calculus. For illustration, to design ANNs that produce divergence-free quantities, we let the network predict intermediate quantities and obtain the divergence-free field by taking the curl of those intermediate quantities. Such techniques find widespread use in solving PDEs with divergence-based constraints (e.g., Maxwell's equations). Additionally, to achieve stable time evolution through neural networks, we extend concepts proposed in to encompass high-dimensional spatial and temporal data. Furthermore, our empirical studies indicate that CNNs, which exploit local features, underperform compared to the FNO, which exploits global features present in the data. Therefore, our exploration centers on incorporating these physical properties within the context of the FNO. We further need to explore how such trained networks can be used for engineering studies, e.g., predicting optimal material-property configurations, drawing inspiration from . In addition, we seek to discover suitable low-dimensional latent representations through autoencoders, with the intent of simplifying prediction and engineering studies. Algorithmic developments in this direction have been pursued in , and require further investigation in the context of continuum mechanics.
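As a concrete illustration of the "curl trick" described above, the following minimal sketch (PyTorch; the small network, the random evaluation points, and the field dimensions are placeholder assumptions, not the architecture used in our studies) predicts an intermediate vector potential A(x) and returns B = ∇ × A, which is divergence-free by construction; the last lines verify this numerically via automatic differentiation.

```python
import torch

# Small network predicting a vector potential A(x) at each 3-D point.
potential_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

def divergence_free_field(x):
    """x: (n_points, 3). Returns (B, x) with B = curl A and A = potential_net(x)."""
    x = x.clone().requires_grad_(True)
    A = potential_net(x)                                      # (n, 3)
    grads = []
    for i in range(3):
        # dA_i/dx_j for every point (points are independent, so summing is safe).
        g = torch.autograd.grad(A[:, i].sum(), x, create_graph=True)[0]
        grads.append(g)                                       # (n, 3)
    dA = torch.stack(grads, dim=1)                            # dA[p, i, j] = dA_i/dx_j
    curl = torch.stack([
        dA[:, 2, 1] - dA[:, 1, 2],
        dA[:, 0, 2] - dA[:, 2, 0],
        dA[:, 1, 0] - dA[:, 0, 1],
    ], dim=1)
    return curl, x

# Numerical check that div B vanishes (up to floating-point error).
pts = torch.rand(128, 3)
B, x = divergence_free_field(pts)
div = sum(torch.autograd.grad(B[:, i].sum(), x, create_graph=True)[0][:, i]
          for i in range(3))
print(float(div.abs().max()))   # ~0 up to floating-point round-off
```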
We conclude by emphasising that it is imperative to develop new machine-learning and deep-learning methodologies for tackling problems in the continuum mechanics of heterogeneous and anisotropic solids that adhere to the strong forms of the essential physical principles, both on a global and on a local scale. Doing so offers several advantages: firstly, it enhances the interpretability and generalizability of machine-learning-based surrogate models. Secondly, it reduces the amount of required training data. Thirdly, it can enhance solver performance by up to several thousand times compared to conventional solution methods such as FEM or spectral methods. As an initial endeavour in this direction, we have demonstrated how to construct machine-learning surrogate models that inherently produce divergence-free stress fields, thereby satisfying mechanical equilibrium conditions. Learning suitable low-dimensional latent representations not only reduces online inference time but also facilitates engineering studies with minimal computational resources. Additionally, acquiring training data for engineering applications is both economically expensive and time-consuming. Therefore, it is crucial to devise strategies for cleverly gathering training data, ensuring that the limited data cover a wide range of the parameter space.
Most modern engineering materials exhibit a complex microstructure that underpins the properties of the material in beneficial, or sometimes detrimental, ways. This applies to structural alloys, to ceramic materials like concrete or protective coatings, as well as to functional materials for energy storage, electronics, heterogeneous catalysis, etc. Steels, for example, consist of several meta-stable phases formed during casting, thermo-mechanical processing, or in operation. To image the interplay of grain morphology and texture, chemical composition, crystallographic relationships, and local properties of distinct regions, various complementary 'imaging' experiments are available, such as electron microscopy, atom probe tomography, beam diffraction (electron, X-ray, synchrotron), or spatially resolved spectroscopy. Thanks to progress in experimentation, data-storage capacities, and digital data processing, these techniques yield an ever-increasing data pool. A single experiment can provide GBs or even TBs of data, which is further multiplied by high-throughput experimentation or by in situ monitoring of transformations that adds a time dimension. This big data is both a challenge and a great opportunity for data-driven research.
So far, it is mostly up to human experts to identify the microstructural features of interest within the experimental data. Often, it is not clear a priori which features relate to performance in the applied context. Once identified, one would like to quantify their number density, size distribution, chemical characteristics, and functional properties, in order to extract quantitative processing-microstructure-property relationships that facilitate material design. To automate this process, pattern-recognition algorithms are actively being developed , often specifically targeted at a particular experiment for a particular type of material. Upon success, they provide a secondary characterization of the material in a reproducible and scalable way. This becomes particularly attractive when combined with high-throughput experimentation to systematically explore a materials space.
Merging such derived data, possibly even from different experiments, with traditional materials characterization across multiple samples, while also tracking their synthesis and processing history, necessitates careful data management. Electronic lab books , integrated workflow environments , structured material databases , and flexible data-sharing platforms cover some, but not all, aspects. The barriers between them effectively limit data-driven materials design.
Suitable algorithms for pattern recognition are available from other fields, but must be adapted to the specific research question, see Figure . Exploiting domain knowledge to define suitable descriptors and selecting robust algorithms will remain a scientific challenge in the coming years due to the vast variety of relevant phenomena and patterns. The actual integration of automatic microstructure evaluation into research practice is still in its infancy. Progress is presently hindered by:
(1) a lack of established data and file formats. Experimental raw data is typically acquired in instrument-specific file formats. Extracting all potentially relevant data for machine-learning workflows is therefore often impaired or outright impossible. Community efforts have been undertaken to establish open data formats . An alternative effort aims at read-function libraries that support multiple formats . For storing analysis output, ideally in conjunction with the input data, no standard exists. Similarly, exchanging data between different data-management systems is severely hindered by inherent heterogeneity in data structures and metadata, in the naming and unit conventions of data fields, and by assumed implicit context (e.g., providing an instrument's name rather than its measurement parameters).
(2) a lack of flexible workflows or tool chains. Materials science research routinely combines different characterization methods, but rarely in a digitally integrated way. Researchers have their individual ways to document a material's synthesis and processing history, how each experiment's specimen was prepared, and how data was post-processed. Common approaches (via file names, free-form notes, folders, …) are ill-suited for automatic processing. Electronic lab-book systems help to manage those data , but typically reach their limits in collaborations across labs.
(3) a cultural gap between experimentalists accustomed to graphical user interfaces (GUIs) and programming-oriented data scientists. Present-day analysis strongly relies on humans to inspect the data. Instrument manufacturers therefore provide monolithic GUI-based visualization tools that read the instrument's raw files, provide a fixed set of processing schemes, and export results in established general-purpose image (jpg, tiff) or data formats (csv, hdf5) that drop context. In contrast, the wider machine-learning field thrives on plugging together open-source libraries and code snippets on demand, which requires significant coding skills.
To reconcile the cultural gap, today's interactive data visualization and future advanced data processing must be interlinked. GUI-based visualization tools could open up by establishing plug-in mechanisms to exchange data and visualization items with external modules. An alternative route that circumvents the GUI-integration challenge is to follow the successful model of computational materials science : focus on input/output data-format normalization, and employ separate tools that work with these formats for analysis and visualization, all coupled together by a managing framework, see Figure . Further efforts to standardize input data, and more importantly recurring output such as classification signatures, segmentation maps, interface locations, and geometric shape information, are urgently needed.
In this context, exploiting automatic code generation from machine-readable data-format definitions, in conjunction with ontologies and knowledge graphs, could be a game-changer to speed up the development, as it reduces the human effort in defining standards and implementing the corresponding code for possibly different programming languages. Similarly, the trend towards higher abstraction in machine-learning software should be exploited to generate processing metadata. When the transformation chain is built at run-time via high-level objects (which later generate the actual code for the hardware at hand on the fly), the high-level representation should automatically annotate the data output with the details of the processing chain. At a higher level, workflow and data-management tools must be adapted to deal with the specific challenges of experimental data. As experimental data sets can become very large, moving or copying entire data sets around is prohibitive. In most cases, raw data will be stored close to where it was generated. Computational resources for advanced machine learning might be located elsewhere and only need part of the data, or specifically pre-processed data whose transfer size is reduced via dimensionality reduction or compressed sensing. Thus, workflows that deal with both distributed data and distributed computation will be needed, while maintaining consistency in metadata and ensuring that data access across computer systems is reliably authenticated to avoid premature publication or leakage of confidential data.
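A minimal sketch of such self-describing analysis output is shown below (Python with h5py; the file layout, attribute names, and processing steps are illustrative assumptions and not an established community standard).

```python
import json
import h5py
import numpy as np

# Analysis output (here, a segmentation map) is stored together with a
# machine-readable record of the processing chain that produced it, so the
# result never loses its context. All names and steps are placeholders.
processing_chain = [
    {"step": "load", "tool": "hyperspy", "version": "1.7", "source": "scan_042.emd"},
    {"step": "denoise", "method": "gaussian", "sigma_px": 1.5},
    {"step": "segment", "method": "otsu_threshold"},
]

segmentation = (np.random.rand(512, 512) > 0.5).astype(np.uint8)  # placeholder map

with h5py.File("analysis_output.h5", "w") as f:
    dset = f.create_dataset("segmentation_map", data=segmentation,
                            compression="gzip")
    dset.attrs["units"] = "phase label"
    dset.attrs["pixel_size_nm"] = 2.4                       # example metadata
    # Store the full processing chain as JSON so downstream tools (or a
    # workflow manager) can reproduce or audit the analysis.
    dset.attrs["processing_chain"] = json.dumps(processing_chain)
    dset.attrs["raw_data_reference"] = "scan_042.emd"        # pointer, not a copy

# Reading the file elsewhere recovers both the data and its provenance.
with h5py.File("analysis_output.h5", "r") as f:
    chain = json.loads(f["segmentation_map"].attrs["processing_chain"])
    print([s["step"] for s in chain])
```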
The success of data-rich imaging techniques in materials science lies in the promise that materials' properties are linked to recurring patterns that can be discovered by inspecting a few representative examples. Machine-learning techniques can leverage this approach by removing human inspection as the limiting factor, digesting larger and larger amounts of data in order to discover relevant, but possibly rare, patterns. At the same time, they offer the unique chance to characterize the underlying distributions in a statistically significant manner as more data becomes available, thus generating secondary high-level characterizing data that might serve as valuable descriptors for associated properties. Digitalizing the entire workflow from synthesis, sample preparation, and data acquisition to post-processing in an integrated way, as sketched in Fig. , is critical to achieve these goals.
Dierk Raabe 1 , Zongrui Pei 2 , Junqi Yin 3 , James Saal 4 , Kasturi Narasimha Sasidhar 5 , Jörg Neugebauer
Such databases can then be used by other machine learning (ML) methods. With their direct role in materials discovery we mean that LLMs can even extract causal relationships from collected data, serve to build domain-specific knowledge graphs, render hypotheses, and guide progress-critical experiments, data collection, and simulation (1-3). The latter aspect is essential because LLMs do not obey any built-in causal rules. Instead, they connect language tokens in a probabilistic way, without considering logic, self-consistency, or conservation laws. This means that they can violate elementary scientific rules. They mimic scientific context by using probability measures that rest on majority but not on proof or logic. This explains why there are opportunities but also pitfalls. The latter can be mitigated by combining LLMs with other methods such as classical theory, thermodynamics, kinetics, materials-property databases, explainable artificial intelligence, active learning, etc. LLMs are also capable of generating hypotheses, and the domain-specific knowledge graphs they help build can in turn enhance predictive models .
Materials science stands at the confluence of several disciplines. Research topics range from the latest quantum-mechanical insights into the behavior of electrons in complex systems to the large-scale processing of billions of tons of material (concrete, steel) and to materials exposed to harsh environmental conditions (catalysts, corroding products). Data-centric methods developed to leverage disruptive progress in this field must, therefore, reflect and embrace this heterogeneity in the underlying data from which knowledge can be extracted, combined, and used.
In the portfolio of model-based artificial intelligence (AI) methods, LLMs seem to offer new opportunities to discover materials and processes that may otherwise remain hidden in the complexity and scattered information that already exists . One avenue to use LLMs is accelerated materials discovery . This is due to the fact that language-based token systems that connect words based on probability are particularly strong in extracting and combining knowledge that already exists in text form. Therefore, while LLMs may not be necessarily suited for disruptive conceptual discoveries from text connections, they can accelerate design based on existing concepts . Although this is a rather conservative approach, it is already a big step forward, because the traditional trial-and-error approach of material discovery is time-consuming and resource-intensive. Also, LLMs can analyse vast datasets, extracting patterns and correlations that would elude human researchers. For instance, LLMs can process published literature, patents, and experimental data to suggest combinations of novel material compositions and even possible properties, as will be shown below in more detail. By integrating databases like the Materials Project or the Cambridge Structural Database, LLMs can offer quantitative predictions about material structures, compositions, and potential applications, significantly reducing the time from conception to application.
However, it should also be noted that Krenn and Zeilinger recently suggested a more disruptive approach to using such models. They introduced SemNet, a dynamic knowledge-organization method in the form of a continuously evolving network, constructed from 750,000 scientific papers dating back to 1919. Each node in SemNet represents a physical concept, and a link is established between two nodes when the concepts are jointly explored in articles. SemNet has proven its utility by enabling the authors to pinpoint influential research topics from the past. The authors trained SemNet to forecast trends in quantum physics, and these predictions have been validated against historical data.
A few examples of using LLMs in materials science have recently been presented. Jablonka et al. ( ) conducted a hackathon using LLMs such as GPT-4 for chemistry and materials science. The participants leveraged LLMs for a variety of purposes, such as predicting properties of molecules and materials, creating new tool interfaces, extracting knowledge from unstructured data, and developing educational applications. More specifically, An et al. argued that the construction of knowledge graphs for domain-specific applications like metal-organic frameworks (MOFs) can be resource-intensive. LLMs, particularly domain-specific pre-trained models, have been successfully employed to create such graphs.
For example, a study explored the use of state-of-the-art pre-trained general-purpose and domain-specific language models to extract knowledge triples for MOFs . The authors constructed a knowledge-graph benchmark with 7 relations for 1248 published MOF synonyms. Experimental probing revealed that such domain-specific pre-trained language models (PLMs) outperformed general-purpose PLMs for predicting MOF-related triples. The authors also conceded from their overall benchmarking results that the use of PLMs alone to create domain-specific knowledge graphs is still far from practical and requires the development of better-informed PLMs for specific materials-design tasks. The group of Olivetti used LLMs to generate knowledge graphs (MatKG2) for the entire domain of materials science, taking ontological information into account as opposed to using statistical co-occurrence alone . Zhao et al. ( ) used a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model and tested it with respect to data extraction from published corpora. They reported that the model achieved an impressive F-score of 85% for the task of materials named-entity recognition. The F-score is a metric used to evaluate the accuracy of a model in binary classification tasks; the commonly used F1 score is the harmonic mean of precision and recall, F1 = 2 · precision · recall / (precision + recall). Sasidhar et al. ( ) integrated natural language processing and deep learning for the design of corrosion-resistant alloys . They also highlighted the general challenges in utilizing textual data in machine-learning models for material datasets and proposed an automated approach to transform language data into a format suitable for subsequent deep-neural-network processing. This method significantly improved the accuracy of pitting-potential predictions for alloys, providing insights into the critical descriptors for alloy resistance, such as configurational entropy and atomic packing efficiency. Pei et al. proposed a concept of 'context similarity' to select chemical elements with high mutual solubility for discovering high-entropy alloys. They trained a word-embedding language model on the abstracts of 6.4 million papers to calculate this 'context similarity'. With this approach they built a workflow for designing lightweight high-entropy alloys, which identified nearly 500 promising alloys out of 2.6 million candidates, including 6- and 7-component compositions.
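The following minimal sketch illustrates the kind of 'context similarity' query used in such word-embedding approaches (plain Python/NumPy; the element vectors are random placeholders standing in for embeddings from a model trained on a large materials science corpus, so the printed ranking is meaningless except as an illustration of the mechanics).

```python
import numpy as np

rng = np.random.default_rng(0)
elements = ["Al", "Mg", "Ti", "Li", "Zn", "Cu"]
# Placeholder 200-dimensional embedding vectors; in practice these would come
# from a word2vec-style model trained on millions of abstracts.
embeddings = {el: rng.normal(size=200) for el in elements}

def context_similarity(a, b):
    """Cosine similarity between the embedding vectors of two elements."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Rank candidate partner elements for Al by context similarity.
ranking = sorted(((context_similarity("Al", el), el) for el in elements if el != "Al"),
                 reverse=True)
for score, el in ranking:
    print(f"Al-{el}: {score:+.3f}")
```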
Gupta et al. developed MatSciBERT, a materials-domain-specific language model for text mining and information extraction. They argued that conventional language processing alone, such as that encoded in generic BERT models, may not yield optimal results when applied to materials science, owing to the models' lack of training on materials-specific notations and terminology. To address this challenge, the authors introduced a materials-aware language model they refer to as MatSciBERT. This model was trained on an extensive corpus of peer-reviewed materials science publications. The authors claimed that their model surpasses SciBERT, a language model trained on a broader and less materials-specific scientific corpus, in three critical tasks: named-entity recognition, relation classification, and abstract classification.
The developers made the trained weights of MatSciBERT publicly accessible, enabling accelerated materials discovery and information extraction from materials science texts. A recent study introduced a larger GPT version, named MatGPT , based on a larger scientific corpus than MatSciBERT. In that study, the group claims that the MatGPT embeddings outperform MatSciBERT and achieve improved band-gap predictions based on the Materials Project combined with graph neural networks (GNNs).
The current flagship in the world of LLMs is the Generative Pre-trained Transformer 4 model (GPT-4) from OpenAI. It is reportedly based on 8 separate models, each containing dozens of network layers and 220 billion parameters, which are supposedly linked together using the Mixture-of-Experts (MoE) architecture. GPT-4 is built on a transformer architecture, combining self-attention and feed-forward neural networks to process input tokens. Each token represents a short text string, typically a word or part of a word. The token limit therefore determines the amount of text that an LLM can consider as input at a given time. Early LLM releases had very low token limits, since LLM computation time depends strongly on the token length. Initial releases of GPT-3 had a token limit of 2,048 tokens, but recent releases of GPT-4 have a token limit of 128,000 tokens. To put this into context, the average length of a PubMed abstract is 114 tokens (sd 48.83) and that of an article is 2,378 tokens (sd 1,604.79). So while the increasing capacity of LLMs has enabled using entire papers (or even groups of papers) as input, there is still the computational cost of running GPT-4 calculations to consider. As an example, when asking a question of medium complexity via a string of fewer than 10 tokens, such as 'Composition and property ranges of material XY', the rough total cost estimate to answer this question with GPT-4 is about 7-10 Euros. Getting the same answer from a classical knowledge graph would incur only about one-hundredth of this cost and also take less time, provided the information is in the corpora and mapped in a graph accessible by search engines.
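The token arithmetic quoted above can be checked with a few lines of Python using OpenAI's tiktoken library (the encoding name "cl100k_base" is, to our knowledge, the one used by GPT-4-class models; the abstract text is a placeholder).

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

abstract = (
    "We report the synthesis and characterization of a lightweight "
    "high-entropy alloy with improved corrosion resistance ..."
)
tokens = enc.encode(abstract)
print(f"{len(tokens)} tokens for {len(abstract.split())} words")

# With a 128,000-token context window, roughly how many average-length
# articles (~2,400 tokens each, per the figures quoted above) fit in one prompt?
print(128_000 // 2_400)   # about 53 articles
```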
Using knowledge graphs also mitigates the hallucination effect, i.e., errors made by LLMs when they render combinations that appear plausible to the model's probability measures but are false when tested against high-fidelity information or logic. Hallucination arises from multiple factors, such as training on contradictory datasets or overfitting. An urgent and vital topic for LLMs is, therefore, quantifying the level of hallucination and developing systematic methods to recognise and mitigate it. On the other hand, LLMs have the advantage that they can process and understand the context from scientific literature, patents, and database entries. When combined with knowledge graphs that structure this information, this provides a rich database of materials science knowledge which can be readily queried. This integration allows for the rapid assimilation of existing knowledge and the identification of knowledge gaps.
Vice versa, LLMs, with their ability to process and generate large volumes of text, can also serve to construct domain-specific knowledge graphs, optimize algorithms for faster discovery, and enable more efficient design and exploration of materials. The synergy between LLMs and knowledge graphs could hence be a useful next step to materials discovery, offering a paradigm shift from traditional, iterative experimental methods to a more quality-controlled data-driven model. This combination would allow better alignment of reliable high-quality data exploitation (through knowledge graphs) and semantic contextualization (through LLMs).
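A minimal sketch of this division of labor is shown below (Python with networkx; the triples, relation names, and query function are invented placeholders meant only to illustrate how LLM-extracted facts could be stored and queried deterministically).

```python
import networkx as nx

# Subject-relation-object triples, as they might be extracted from the
# literature by an LLM or a rule-based pipeline (placeholder examples).
triples = [
    ("MOF-5", "has_metal_node", "Zn4O"),
    ("MOF-5", "has_linker", "terephthalate"),
    ("MOF-5", "studied_for", "hydrogen storage"),
    ("HKUST-1", "has_metal_node", "Cu paddlewheel"),
    ("HKUST-1", "studied_for", "gas separation"),
]

kg = nx.MultiDiGraph()
for subj, rel, obj in triples:
    kg.add_edge(subj, obj, relation=rel)

def query(subject, relation):
    """Return all objects linked to `subject` by `relation` in the graph."""
    return [obj for _, obj, data in kg.out_edges(subject, data=True)
            if data["relation"] == relation]

# Factual lookups are answered from the graph, not from the LLM's memory.
print(query("MOF-5", "studied_for"))      # ['hydrogen storage']
```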
LLMs can also analyse patterns and relationships within a knowledge graph to generate hypothesis-driven suggestions for suitable search spaces pertaining to potentially novel materials and properties. For instance, by understanding the relationship between crystal structure and electronic properties, LLMs coupled to knowledge graphs could likely be used to suggest new compositions, or corresponding search spaces, for magnets, battery materials, or solar-cell absorbers.
While the opportunities are vast, applying LLMs in materials science also has challenges. Data quality and availability are critical, as models are only as good as the data they are trained on. Ensuring data integrity and representativeness is paramount. Furthermore, the interpretability of LLM outputs is crucial for gaining trust in their predictions. Developing models that provide not just predictions but also insights into the underlying mechanisms is an essential goal. Another point is chain-of-thought prompting, an approach to enhance LLMs' comprehension of causal relationships and reduce hallucination. It involves prompting the models to verbalize the reasoning steps they have gone through in reaching their conclusions, which makes the process more transparent. Such ideas have not yet been implemented in materials science, but have been in other areas such as medical science .
The quality of the information that can be extracted from LLMs depends on the quality and timeliness of the input text. For materials science, this can only be achieved if the latest literature that has gone through proper peer review is used. However, only one-third of the current scientific corpora is open access. Therefore, some of the corpora currently used for training LLMs are in part of questionable quality. Also, current LLMs might simply miss the latest literature, which means that the model weights are not fitted to the latest state of the art. These two aspects show that fine-tuning prior to the use of such LLMs is recommendable. On the other hand, recent literature sometimes overlooks knowledge that has long existed in older publications, so that some findings reported in papers are in effect re-discoveries, a problem that can likely be mitigated when LLMs are used. In this context, it is worth noting that Application Programming Interfaces (APIs) are now offered by a few companies to allow access to millions of publications along with their metadata. Another issue is that extracting text from PDF files, the standard format of the literature, results in poorly formatted corpora with numerous errors (e.g., missing text, insertion of text from other items such as tables in the middle of sentences, headers and page numbers, etc.).
An unresolved open front for LLMs is the potential violation of existing copyright when tapping into web-based resources, which becomes an obvious issue with the use of journals, textbooks, and other scientific literature in training. Another concern is whether further tuning of LLMs leads only to a slow, asymptotic increase in knowledge, because high-quality peer-reviewed content on certain topics is not growing at a sufficiently high rate and is often not freely accessible for training. In other words, it is unlikely that LLMs can gain knowledge faster than the generic basic research used to train them. To meet both challenges, the rapidly growing fraction of open-access literature and the use of pre-publication and self-archiving services are of great value, likely leading to higher quality and less hallucination of LLMs. Some of these aspects also connect to general considerations regarding model capacity and scaling laws, which were recently shown to depend essentially on the number of model parameters, the size of the dataset, and the amount of computational power used for training. Performance was shown to depend less on other architectural hyperparameters such as depth and width. However, irrespective of these theoretical considerations, the scientific community has not yet seen the capacity limits of the GPT model in current applications: for the same data size, the GPT model improved further as the number of parameters was increased.
LLMs offer great potential in the complex interplay between advanced computational methods and the nuanced, often experimentally and empirically grounded field of materials science. Opportunities lie in accelerated materials discovery; enhanced pattern and result analysis of data obtained from existing computational tools such as atomistic simulations; better knowledge synthesis and data management from research articles, reports, and property studies; support in hypothesis development and outlier analysis; and advanced decision-making support in materials selection and design, including aspects such as cost, sustainability, and regulatory constraints. Pitfalls exist regarding the quality, availability, bias, and legal status of the training data; the lack of built-in logic or conservation laws; the lack of reflection of microstructure, synthesis, sustainability, and processing complexity; and the danger of over-reliance on and even complacency regarding LLM predictions, i.e., the decay of individuals' own critical thinking, of rigorous validation or falsification, and of the drive towards a deep understanding of the underlying causality behind phenomena, which are key factors that have made the scientific method the most successful and reliable approach in history.
This contemplation of a few generic pros and cons shows that, while LLMs offer transformative potential in materials science, their successful integration into the field necessitates careful consideration of the quality and completeness of the data they are trained on, a thorough understanding of the underlying physical and chemical principles, and a balanced approach that leverages their computational power together with critical human expertise.
Thomas A. R. Purcell 1,2 , Luca M. Ghiringhelli 1,3 , Christian Carbogno 1 and Matthias Scheffler 1
1 The NOMAD Laboratory at the FHI of the Max-Planck-Gesellschaft and IRIS-Adlershof of the Humboldt-Universität zu Berlin, Germany
2 University of Arizona, Biochemistry Department, Arizona, USA
3 Department of Materials Science and Engineering, Friedrich-Alexander Universität, Erlangen-Nürnberg, Germany
Computational, high-throughput materials discovery is seen as a promising route to advance a myriad of technologies including batteries , renewable energy , and pharmaceuticals . With the increase in computer power over the past several decades, millions of properties have been calculated for hundreds of thousands of materials with the aid of high-throughput workflows. Such workflows allow a user to define a set of calculation parameters and run the corresponding calculations for a large set of materials. The results have populated several large databases, e.g., Materials Project, AFLOW, the Open Quantum Materials Database, NOMAD, etc. . However, as the materials space is practically infinite, such studies can only address a marginal part of it, even for relatively simple properties.