Figure 2: Bubble landscape of current approaches to AI from and for science, spanning statistical and data-driven to symbolic and model-driven methods. Bubbles may occur more than once when related to several larger domains. Some approaches may have alternative names or have been re-branded in certain contexts: neuro-symbolic models have sometimes been referred to as 'intuitive', while some statistically driven approaches have been labelled 'cognitive computing'. Generative AI (GenAI) has so far made little to no contribution to fundamental science but has great potential. Large Language Models (LLMs) may significantly tap into and contribute to the exploratory capabilities of the scientific hypothesis space, given their capacity to process the human language in which all human science has been written. GenAI and LLMs are statistical in nature, but it remains unexplored to what extent they may develop symbolic capabilities from statistical (e.g. linguistic) patterns.
The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence

Hector Zenil, Jesper Tegnér, Felipe S. Abrahão, Alexander Lavin, Vipin Kumar, Jeremy G. Frey, Adrian Weller, Larisa Soldatova, Alan R. Bundy, Nicholas R. Jennings, Koichi Takahashi, Lawrence Hunter, Saso Dzeroski, Andrew Briggs, Frederick D. Gregory, Carla P. Gomes, Jon Rowe, James Evans, Hiroaki Kitano, Ross King

arXiv:2307.07522 (cs.AI, primary; cs.LG). http://arxiv.org/pdf/2307.07522. Published 2023-07-09, updated 2023-08-29. 35 pages; first draft of the final report from the Alan Turing Institute on AI for Scientific Discovery.

Abstract: Recent advances in machine learning and AI, including Generative AI and LLMs, are disrupting technological innovation, product development, and society as a whole. AI's contribution to technology can come from multiple approaches that require access to large training data sets and clear performance evaluation criteria, ranging from pattern recognition and classification to generative models. Yet, AI has contributed less to fundamental science in part because large data sets of high-quality data for scientific practice and model discovery are more difficult to access. Generative AI, in general, and Large Language Models in particular, may represent an opportunity to augment and accelerate the scientific discovery of fundamental deep science with quantitative models. Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery, including self-driven hypothesis generation and open-ended autonomous exploration of the hypothesis space. Integrating AI-driven automation into the practice of science would mitigate current problems, including the replication of findings, systematic production of data, and ultimately democratisation of the scientific process. Realising these possibilities requires a vision for augmented AI coupled with a diversity of AI approaches able to deal with fundamental aspects of causality analysis and model discovery while enabling unbiased search across the space of putative explanations. These advances hold the promise to unleash AI's potential for searching and discovering the fundamental structure of our world beyond what human scientists have been able to achieve. Such a vision would push the boundaries of new fundamental science rather than automatize current workflows and instead open doors for technological innovation to tackle some of the greatest challenges facing humanity today.
Most of these algorithms fail to scale to domains outside the training set. Such algorithms lack mechanisms for abstraction and logical inference, and they fail at generalisation (57). For example, in the case of driverless cars, one does not want a car to crash millions of times to learn how not to crash, so current techniques such as adversarial networks offer a way to produce examples in which not driving appropriately leads to an event labelled a crash (58). However, driving and crashing are events whose cause and effect need to be learned, which current approaches cannot do. When AI leads science so that laboratory experiments are automated to execute cycles of planned experiments, AI frees humans from repetitive, tedious, and error-prone tasks and can deal with vast amounts of data that no human could handle (59). Human scientists, in turn, can feed the AI systems back with new insights and novel theories. Such an emerging feedback loop of AI-human collaboration will synergistically boost scientific discovery toward previously unattainable results, rigour, and dissemination.
To overcome the above limitations and challenges, we claim that it will require fostering new theories and methods, as well as human and technological resources in AI, data science, and interdisciplinarity, so that scientists become capable of dealing with this AI-human interplay at both an infrastructural and a metastructural level. One such theory may involve developing mathematical frameworks that can deal with the fact that not only the empirical findings but also the new theories that the scientists within the loop are devising can be influenced by other AI algorithms within the loop, and vice versa. For example, this may require causal analysis (or inverse-problem solving) (60) when both the observer and the observed system are mutually perturbing each other's underlying generative model (61, 62). One such method may involve AI that guides other AI and translates results to humans, and this intermediate AI need not be of the same type. For example, causal and model-driven AI (63, 64) may be required to disentangle other AI systems to which human scientists cannot relate if they do not have a
mechanistic explicative component, whether there is one or not. This may lead to some sort of meta-AI capable of dealing with knowledge representations at a meta-level (43), one that includes the network dynamics of each agent (whether AI or human) in the loop, so that this meta-AI still remains explainable to humans (65). This may not require Artificial General Intelligence, but it would require a different set of skills than purely statistical machine learning approaches.
# Historical Context

Applications of AI in science are quite broad and cover many fields. The idea of automating reasoning goes back to Leibniz, and its modern incarnation can be traced to efforts to build computing machines in Europe, in particular Alan Turing's heroic work at Bletchley to automate the problem of code breaking and his ideas of an imitation game (66, 67). It can also be traced back to Joshua Lederberg (Nobel laureate) (68), Edward Feigenbaum (Turing Award winner) (69), Carl Djerassi (co-inventor of the contraceptive pill) (70), and colleagues at Stanford in the 1960s, who worked on automating mass spectrometry for the Viking Mars lander (71, 72). AI has a long tradition of taking scientific discovery as an area of study. In the 1970s, the Nobel laureate and Turing Award winner Herbert Simon developed Bacon, an AI system for science (73). Since this pioneering work, much has been achieved, and there are now many convincing examples of AI systems making clear contributions to scientific knowledge (e.g. the very recent (74, 75)).
Eurisko (76) and Cyrano (77) are two examples of other attempts to perform automated discovery from basic principles in a variety of technical fields, in particular mathematics, chemistry, and a few other domains. These systems can be viewed as heuristic search systems, with the additional advantage that they can reconfigure their own search space. Some commercial products are specifically designed to be applied to knowledge and scientific discovery. For example, DataRobot (78) promotes Eureqa (79), having acquired Nutonian (80–82). Eureqa was designed to create models from time-series data and is based on creating random equations from mathematical building blocks through evolutionary search to explain the data (81). It has been called a "Virtual Data Scientist" (79).
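The Eureqa-style recipe just described, assembling random equations from mathematical building blocks and refining them by evolutionary search until they explain the data, can be sketched in a few dozen lines. This is our own minimal illustration of symbolic regression, not Eureqa's actual implementation; the operator set, mutation scheme, and toy target are all assumptions made for the example.

```python
import math
import random

# Building blocks: the candidate equations are trees over these operators,
# the variable 'x', and small random constants.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_expr(depth=3):
    """A random expression tree over the variable 'x' and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.uniform(-2.0, 2.0)])
    return (random.choice(list(OPS)),
            random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, float):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(expr, data):
    """Mean squared error of the candidate equation against the data."""
    err = 0.0
    for x, y in data:
        d = evaluate(expr, x) - y
        err += d * d
    return err / len(data) if math.isfinite(err) else math.inf

def mutate(expr):
    """Replace a random subtree with a freshly generated one."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(data, pop_size=200, generations=40):
    """Evolutionary search: keep the best quarter, refill with mutants."""
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda e: fitness(e, data))
        survivors = pop[:pop_size // 4]          # truncation selection
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda e: fitness(e, data))

random.seed(0)
# Toy time series whose generating law is x^2 + 1.
data = [(x / 10.0, (x / 10.0) ** 2 + 1.0) for x in range(-20, 21)]
best = evolve(data)
```

Real systems add crossover, more operators (division, trigonometry), and a parsimony pressure trading accuracy against equation size; the skeleton above only shows the generate-evaluate-select loop.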
A team of researchers from Google DeepMind launched a machine learning project called AlphaFold in 2018 to participate in the Critical Assessment of Techniques for Protein Structure Prediction, or CASP (83). CASP is a biennial competition that assesses the state of the art in three-dimensional protein structure modelling. In its first version, AlphaFold was particularly successful at predicting the most accurate structures for targets rated as the most difficult by the competition's organisers. But it was not until the second program, AlphaFold 2, in 2020, that the team achieved a level of accuracy much higher than any group before, scoring above 90 for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree to which a computationally predicted structure is similar to the experimentally validated structure, with 100 being a complete match. AlphaFold relied on a great deal of human knowledge already generated in earlier years, especially in areas such as molecular dynamics. The program was designed to incorporate the expert domain in the form of the training data. How much molecular-biological knowledge was introduced is still not known, but while tuning it required a team that drew heavily on domain expertise, most of the predictive power came from the AlphaFold 2 tool itself (75, 84).
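The GDT score mentioned above has a simple core: the GDT_TS variant averages, over distance cutoffs of 1, 2, 4 and 8 ångströms, the percentage of residues (C-alpha atoms) whose predicted position lies within the cutoff of the experimental one. The sketch below computes that average under the simplifying assumption that the two structures are already superimposed; the real GDT protocol also searches over many rigid-body superpositions to maximise the score, which we omit.

```python
import math

def gdt_ts(predicted, experimental, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """GDT_TS over pre-superimposed C-alpha coordinates (x, y, z) in angstroms.

    For each cutoff, take the percentage of residues whose predicted
    position is within that distance of the experimental position,
    then average the four percentages. 100 is a complete match.
    """
    assert len(predicted) == len(experimental)
    n = len(predicted)
    percentages = []
    for cutoff in cutoffs:
        within = sum(1 for p, e in zip(predicted, experimental)
                     if math.dist(p, e) <= cutoff)
        percentages.append(100.0 * within / n)
    return sum(percentages) / len(percentages)

# A perfect prediction scores 100, as in the text.
exact = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
print(gdt_ts(exact, exact))  # 100.0
```

Note how the multiple cutoffs make the score graded: a prediction that is uniformly 3 Å off misses the 1 Å and 2 Å cutoffs but satisfies the 4 Å and 8 Å ones, scoring 50 rather than 0.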
A precursor of AI in physics is the project GALILEO (Guided Analysis of Logical Inconsistencies Leads to Evolved Ontologies) (85). The GALILEO project tried to model the repair of faulty theories of physics whose predictions were contradicted by empirical evidence. One successful application of machine learning to climate data, for example, was the discovery of climate dipoles (85). Physics-driven AI has the potential to change how we approach science, shifting our current, predominantly data-reliant scientific method toward a model-centred one by placing the mechanistic model at the centre of modelling itself. Paradoxically, current physics-led AI and machine learning research has distracted researchers from more fundamental research, even though the discussion has started, and researchers will hopefully eventually get around to the first principles they claim to care about.
On the knowledge side, there are many applications of knowledge extraction of interest, such as drug re-purposing by pharmaceutical companies (86, 87). On task-oriented problem solving, we find an increasing number of workflow systems that understand scientific tasks and carry them out. There have been success stories demonstrating that, by collecting and integrating available molecular data into computational models, accurate predictions of interventions in the system can actually be made. An example is the Robot Scientist program (21), which was able to autonomously execute high-throughput hypothesis-led research investigating yeast-based functional genomics, with the next-generation scientific program later using the same principles for drug screening. In another example, a computational model of Halobacterium salinarum NRC-1 was first constructed through massive data integration and machine-learning-driven inference of the regulatory network (88). Another example was the ambitious whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium (89). The model accounted for all annotated gene functions and was validated against a broad range of data; it now encompasses approximately 500 genes and their interactions.
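The closed, hypothesis-led cycle that systems like the Robot Scientist automate can be reduced to a small loop: propose hypotheses, pick the experiment expected to discriminate best between them, run it, and discard the hypotheses the outcome falsifies. The sketch below is our own toy illustration of that loop; hypotheses are predicates over experiments, not the actual yeast-genomics assays the program used.

```python
def closed_loop(hypotheses, run_experiment, experiments, max_cycles=10):
    """Eliminate hypotheses until at most one survives.

    Each hypothesis is a callable predicting the boolean outcome of an
    experiment; run_experiment performs the 'laboratory' step.
    """
    viable = list(hypotheses)
    for _ in range(max_cycles):
        if len(viable) <= 1 or not experiments:
            break
        # Choose the experiment whose predicted outcomes split the viable
        # hypotheses most evenly: the most discriminating one in expectation.
        exp = max(experiments,
                  key=lambda e: min(sum(h(e) for h in viable),
                                    sum(not h(e) for h in viable)))
        experiments.remove(exp)
        outcome = run_experiment(exp)                  # laboratory step
        viable = [h for h in viable if h(exp) == outcome]
    return viable

# Toy domain: experiments are integers, and the hidden truth is
# "divisible by 6"; the loop must separate it from weaker hypotheses.
truth = lambda e: e % 6 == 0
hypotheses = [lambda e: e % 2 == 0, lambda e: e % 3 == 0, truth]
survivors = closed_loop(hypotheses, truth, experiments=[2, 3, 4, 6, 9])
```

Running the toy loop leaves only the "divisible by 6" hypothesis: testing e=2 falsifies "even", and testing e=3 falsifies "divisible by 3". A real closed-loop system replaces the boolean predicates with quantitative models and the experiment picker with an expected-information-gain criterion, but the propose-test-prune structure is the same.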
In the area of neural networks, there has been, for example, an effort to make them ‘understand’ cause and effect through algorithmic training. While more research is needed, fundamental research is aware that alternative approaches are required to capture the complexities of hypothesis and model generation or selection (44, 53, 90). In this sense, research into this type of higher-order AI, such as deconvolution that searches for generative processes across the entire algorithmic space (60), will also be crucial to advancing current research.
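A minimal illustration of searching an algorithmic space for a generative process: enumerate all 256 elementary cellular-automaton rules and keep those that exactly reproduce an observed evolution. The ECA setting is an assumption chosen for brevity — approaches such as (60) operate over far richer program spaces — but the shape of the search is the same.

```python
def eca_step(row, rule):
    """One step of an elementary CA with periodic boundaries (Wolfram rule coding)."""
    n = len(row)
    return tuple((rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
                 for i in range(n))

def evolve(init, rule, steps):
    rows, row = [tuple(init)], tuple(init)
    for _ in range(steps):
        row = eca_step(row, rule)
        rows.append(row)
    return rows

# "Observed data" secretly produced by rule 110; the search does not know this.
observed = evolve((0, 0, 0, 1, 0, 0, 0), 110, 5)

# Exhaustive search of the 256-rule space for generative models of the data.
candidates = [r for r in range(256) if evolve(observed[0], r, 5) == observed]
```

Several rules may agree on a short trajectory; distinguishing among surviving candidates then requires further observations — the same elimination logic as hypothesis-led experimentation.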
To present a summary of the current state of AI applications to each scientific domain, Table 1 displays an organisation of scientific domains1 and the applicable classes and approaches of AI algorithms. Scientific domains are approximately ordered from the smallest physical scales to the largest. Overlapping areas are not reflected in this high-level table (e.g., semi-supervised RL methods, or the representation of neural networks (NNs), which conflates various deep learning types such as LSTMs and Transformers), not to mention complex, context-dependent multidisciplinarity. Table 1’s content reflects the consensus and understanding of a subset of this paper’s authors. While supervised statistical methods have contributed to almost every area of knowledge, these contributions are of very different types, mostly ranging from identification to classification. Some areas are more difficult than others across all approaches, such as mathematics, philosophy, and epistemology. In general, statistical approaches rank poorly at finding first principles or adding new mechanistic knowledge to scientific domains.
Generative AI (GenAI) and Large Language Models (LLMs) promise to advance science by assimilating and synthesising the vast corpus of human knowledge embedded in scientific literature. Through this synthesis, LLMs can interconnect disparate ideas, construct unique hypotheses, and venture into uncharted areas of scientific knowledge. However, this exploration is bound by the data they have been trained on, creating a theoretical bubble that could lead to model collapse through excessive training on the same data. To burst this bubble, it is essential to supplement LLMs with other methods and multiple sources. For instance, active learning could serve to maximise information gain, challenging the model with fresh data and different viewpoints cross-pollinating from different scientific domains. Hybrid models blending AI with symbolic reasoning could tackle scientific problems requiring high-level abstraction, thus broadening LLMs’ capabilities. This approach would therefore fall into the neuro-symbolic category for purposes of scientific discovery.

1Note that: Complexity includes systems and intelligence as defined by the Santa Fe Institute; Manufacturing notably includes ML-based design of sensors and chips; and Earth systems includes oceans, land, air, and near space (see earthdna.org).
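The active-learning idea mentioned above can be sketched concretely: among unlabelled candidates, query the one whose predicted outcome is most uncertain under the current ensemble of hypotheses (maximum predictive entropy, which for deterministic hypotheses tracks expected information gain). The threshold-classifier setting below is an illustrative assumption, not a method from the text.

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def query(pool, hypotheses):
    """Pick the pool point on which the surviving hypotheses disagree the most."""
    def predictive_entropy(x):
        p = sum(h(x) for h in hypotheses) / len(hypotheses)
        return entropy(p)
    return max(pool, key=predictive_entropy)

def active_learn(pool, hypotheses, label, budget):
    pool, hypotheses = list(pool), list(hypotheses)
    for _ in range(budget):
        if len(hypotheses) <= 1:
            break
        x = query(pool, hypotheses)
        pool.remove(x)
        y = label(x)  # the (costly) experiment or annotation
        hypotheses = [h for h in hypotheses if h(x) == y]
    return hypotheses

thresholds = [lambda x, t=t: x >= t for t in range(10)]  # version space
truth = lambda x: x >= 7
learned = active_learn(range(10), thresholds, truth, budget=4)
```

Each query roughly halves the version space, so the correct threshold is pinned down in a handful of labels rather than ten — the information-gain argument in miniature.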
Indeed, an area where LLMs could be especially impactful is scientific model discovery. By analysing patterns and correlations in vast datasets, LLMs could help identify mathematical relations and possibly reveal new potential (physical or computational) laws, just as they learn grammar from the statistics of natural language. This could expedite the scientific process, enabling more rapid breakthroughs.

Furthermore, LLMs could make a significant contribution to causal analysis. By processing extensive scientific literature, they could draw links between causes and effects that might be overlooked by human researchers, proposing novel causal hypotheses for testing. Pairing this with counterfactual reasoning, where the AI predicts the outcome of modifying specific variables, could deepen our understanding of cause-effect relationships and help simulate alternative model outcomes.

However, in addition to inheriting the limitations of statistical machine learning in general (54, 91), it is important to acknowledge the limitations of current LLMs. They currently lack the depth needed for genuine breakthroughs, and they require data of sufficient quality and diversity for a higher LLM ‘temperature’ (favouring less likely statistical patterns) to explore the long tails of the distribution of scientific results, where potential breakthrough science lies, away from incremental, average science. A collaborative approach, in which human scientists guide the AI, can harness the strengths of both worlds, mitigating the current weaknesses of LLMs and statistical ML and ensuring more effective utilisation of this technology today.
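The ‘temperature’ mechanism referred to above is simply a rescaling of the model’s output scores before sampling: dividing logits by a temperature above 1 flattens the distribution, giving low-probability (potentially more novel) continuations more mass. A minimal sketch, with arbitrary illustrative logits:

```python
import math, random

def sample(logits, temperature, rng):
    """Temperature-scaled softmax sampling; returns (sampled index, probabilities)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(probs)), weights=probs)[0], probs

rng = random.Random(0)
logits = [4.0, 2.0, 0.0]  # 'incremental' options score higher than rare ones
_, cold = sample(logits, 0.5, rng)  # low temperature: sharpens the distribution
_, hot = sample(logits, 2.0, rng)   # high temperature: flattens it
# The rarest option's probability grows as temperature increases.
```

This is why, as the text notes, a higher temperature only helps if the long tail it exposes is populated with high-quality, diverse training signal rather than noise.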
[Table 1: a grid of scientific domains × AI algorithm classes and approaches; the check-mark layout did not survive text extraction. Recoverable rows (domains) include: Mathematics; HE Physics (theoretical, experimental); Optics & Acoustics; Complexity; SynBio & Industrial Biotech; Organic, Physical, and Electrochemistry; Materials; Computing; Medicine (molecules/proteins, drug development, clinical); Botany & Zoology; Systems Biology & Epidemiology; Neuro and Cognitive Sciences; Energy (nuclear fission/fusion, generation & storage, oil & gas); Manufacturing; Engineering & Industrials; Energy Systems; Transport & Infrastructure; Agriculture; Ecology; Socioeconomics & Markets; Finance; Politics & Geopolitics; Defense & Aerospace; Climate & Weather; Earth Systems; Astrophysics & Cosmology; Philosophy & Epistemology. The symbol legend is given in the table caption.]
Table 1: Scientific domains and the applicable classes and approaches of AI algorithms. ‘-’ means no (or unknown) and ✓ means yes; ∼ means the ML application is likely but has not yet been done or sufficiently validated. This table is highly dynamic and would require quarterly updates, given a speed of development impossible to keep up with without considerable effort or help from AI.

# Aspects of AI-Led Closed-Loop Science

The ability to predict and design (inverse design), while exceptionally useful, will not necessarily lead to new fundamental discoveries (new theories) unless AI and human goals in scientific discovery are aligned and synergistically intertwined, with shared objectives quantified and introduced, for example, via a loss function.

This is because scientific discovery cycles, such as those illustrated in Fig. 3, are not isolated parts but belong within a greater cycle of scientific inquiry spanning an entire topic or field and comprising a community of scientists. It is the larger learning cycle that fuels the questions in the smaller learning cycles. The larger cycle is fuelled by human curiosity and human challenges and has a strong historical and social component, but the shorter cycles, being better defined, are more amenable to automation. Nevertheless, the larger cycles may be needed to kick-start the discovery process of the smaller learning cycles.
In this sense, one option for integrating human scientists and AI-driven science is for humans to build the context of the greater cycle (for example, fulfilling the role of the ‘Final Theory’ and ‘Background knowledge’ steps in the leftmost smaller cycle in Fig. 3), feeding the AI with new insights, and to leave the AI to deal independently with the smaller cycles (such as the rightmost smaller cycle in Fig. 3), guided by the greater ones. LLMs could, for example, be very useful as a technical interface, translating high-level human aspirations for the larger cycle into their respective "divide-and-conquer" breakdown into smaller cycles. If one aims at the highest degree of automation of the discovery cycle, more sophisticated forms of AI should include automation of the validation, dissemination, refereeing, and other aspects of human science and its practice. To tackle such challenges, we propose in the following sections the steps and technology suggested to conduct an entire cycle of AI-led scientific discovery (92), as in Fig. 3.
Figure 3: Visual representation of the closed-loop full experimentation cycle for scientific discovery pathways, adapted from and combining ideas from (59) and (21). [The diagram depicts a feedback loop linking background knowledge, hypothesis formation, experiment selection, robotic execution, machine translation and interpretation of results, and knowledge extraction, with branches for positive/negative results, true/false/uncertain hypotheses, and consistent/reproducible outcomes guiding the next search.] LLMs can now facilitate closing this loop but require help to connect each module and process in a causal rather than only a statistical fashion.
# Hypothesis Generation

One of the central components of scientific practice is the ‘hypothetico-deductive’ method (93, 94). An additional set of epistemological tools comprises induction (95), abduction (22) and counterfactual reasoning (96). To automate these knowledge processes, deduction can be combined with simulation to infer the experimental consequences of hypotheses. Matching simulation against experimental output provides a reliable basis for an AI to accept or reject a hypothesis. Such experimental output is tested with multiple interventions in an automated series of perturbation analyses (61). However, while one traditional approach to automating induction may follow, for example, new methods for clustering and regression, automating abduction and the creation of counterfactual scenarios may pose an even more challenging problem, as it would require the AI algorithm to explore irreducibly novel possibilities that are emergent with respect to the current state of knowledge in which the AI is situated (62).
In this sense, neural networks, and statistical machine learning in general, are unlikely to be useful in the process of hypothesis generation. This is because they need training: not only is training over hypothesis generation exactly the problem to be solved in the first place, but training over previous hypotheses, divided into rejected and valid ones, may undermine the freedom and unbiased exploration of regions of interest in the hypothesis space that is desired. For hypothesis generation, what is needed is a bottom-up approach (e.g., a model-driven AI) or a hybrid one able to conduct cycles of systematic hypothesising from either partial or exhaustive enumerations (even if redundant, though universal) (64, 97). A bottom-up approach that deals with this open-endedness concerning the role of novelty is the field of algorithmic information dynamics (AID) (61), a framework for causal discovery and causal analysis based on algorithmic information theory and perturbation analysis. Open-ended innovation in hypothesis generation, and how to create and search over unbounded hypothesis spaces in less well-specified domains, is an open challenge in itself, where research on the topics of this document can help make progress. These spaces and the methods exploring them usually have to deal with problems of intractability or uncomputability (98, 99).
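As a toy illustration of bottom-up, exhaustive enumeration over a hypothesis space, and of the simplicity bias that AID-style approaches formalise far more rigorously, the sketch below enumerates a small space of symbolic models, discards those inconsistent with the data, and prefers the shortest surviving description. The grammar, constants, and data are invented for illustration; description length here is only a crude proxy for the algorithmic-complexity estimates used in AID:

```python
from itertools import product

# A tiny, exhaustively enumerable hypothesis space: univariate models
# built from one operation template and one small integer constant.
OPS = {
    "x + c": lambda x, c: x + c,
    "x * c": lambda x, c: x * c,
    "x ** c": lambda x, c: x ** c,
}

def enumerate_hypotheses(constants=range(4)):
    """Systematically yield every (description, model) pair in the space."""
    for form, c in product(OPS, constants):
        desc = form.replace("c", str(c))
        yield desc, (lambda x, op=OPS[form], k=c: op(x, k))

def best_models(data):
    """Keep hypotheses that reproduce every observation, then rank by
    description length (a crude minimum-description-length proxy)."""
    survivors = [d for d, m in enumerate_hypotheses()
                 if all(m(x) == y for x, y in data)]
    return sorted(survivors, key=len)

print(best_models([(1, 1), (2, 4), (3, 9)]))  # → ['x ** 2']: only squaring survives
```

Exhaustiveness guarantees no region of this (tiny) space is missed; the intractability noted above arises precisely because realistic hypothesis spaces grow far too fast for such brute-force enumeration.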
Each method has its advantages and drawbacks and lies at a different extreme of the causal inference spectrum. Guiding heuristics based on first principles are needed to explore the hypothesis space (100). Dovetailing partial results is necessary to avoid infinitely long cycles running the search. Here, aspects of computability and tractability will be in play at every step, which we will need measures to deal with unless less powerful techniques are implemented (e.g., propositional logic or domain-restricted spaces such as a set of genetic circuits). At one extreme are the statistical tools that confound correlation and causation but can help scientists
make a call and guide their experiments, viz., graphical models that combine probability with symbolic logic, reasoning, and interventional calculus. The statistical approach often leads to less computationally expensive methods and, although in general it may present distortions or biases toward some selected features (52, 101), it returns sound results in cases where one knows a priori that the underlying generative processes are purely stochastic, stationary, and ergodic. At the other extreme is AID, which searches for sets of agnostic generative models compatible with the observations and exploits these models as testable underlying mechanisms and causal first principles (98, 99), regardless of whether those processes are stochastic, computable, or mixed. In addition to offering less constrained methods, for example, deconvolution algorithms (60) and optimisation in non-differentiable spaces (44), this approach offers results toward tackling the abduction and counterfactual problem, as shown, for example, in new methods for open-ended evolutionary computation (102, 103) and synergistic distributed computation (104, 105). However, bottom-up approaches like AID may not be humanly understandable, or
when they are, scrutinising them may require great computational effort, as is the case in other areas such as automatic theorem proving (e.g., the four-colour theorem). LLMs may here again provide an advantage as an interface between these model spaces, acting as natural language processors that integrate otherwise disparate systems and translate among different domain databases and knowledge
bases.

# Experimentation and Sensing

One key task is to create AI systems for scientific discovery able to conduct experimentation and hypothesis testing independently of human instruction, or with little to no human instruction. This is because what is needed to take scientific discovery to the next level is not the programming of algorithms able to conduct experiments, but open-ended algorithms able to set their own goals and experiments, guided by previously conducted experiments (their own or those reported in the literature). To this end, machine embodiment, which combines sensing and action in the fully automated smaller cycles of empirical hypothesis testing (cycles which are in turn part of the larger, encompassing AI-led closed loop of scientific discovery), makes robotics key to progress in physical experimentation, so that more and more of the physical execution of experiments will be done using robotics (106). This will increase the productivity of science, as robots work more cheaply, faster, more accurately, and for longer than humans can. Furthermore, without embodiment the scientific experiment collapses into a problem of data analysis and inference, lacking the hypothesis, model, and theory testing that requires positive or negative feedback from the empirical side; only a tiny part of the scientific discovery cycle would then be tackled.
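The goal-setting behaviour described above, where each new experiment is chosen in light of previously conducted ones, can be sketched as a small active-learning loop. The hypotheses, candidate inputs, and the `measure` oracle (a stand-in for a robotic experimental platform) are hypothetical:

```python
def closed_loop(hypotheses, candidate_inputs, measure, budget=5):
    """Toy closed loop: repeatedly pick the experiment on which the
    surviving hypotheses disagree most, 'run' it via measure(), and
    discard every hypothesis the result refutes."""
    survivors = dict(hypotheses)
    inputs = list(candidate_inputs)
    for _ in range(budget):
        if len(survivors) <= 1 or not inputs:
            break
        # Self-set goal: the most informative next experiment is the one
        # yielding the largest number of distinct predicted outcomes.
        x = max(inputs, key=lambda v: len({h(v) for h in survivors.values()}))
        inputs.remove(x)
        y = measure(x)  # the physical measurement
        survivors = {n: h for n, h in survivors.items() if h(x) == y}
    return sorted(survivors)

hypotheses = {
    "square": lambda x: x * x,
    "double": lambda x: 2 * x,
    "identity": lambda x: x,
}
# Ground truth (unknown to the loop): the system squares its input.
print(closed_loop(hypotheses, [0, 1, 2, 3], measure=lambda x: x * x))  # → ['square']
```

Note that the loop never needs the uninformative experiment at input 0, on which all three hypotheses agree; choosing experiments by disagreement is what makes the cycle self-directed rather than scripted.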
Neural networks can help physical machines embed themselves in a physical world for representation purposes, as neural networks have proven useful in representing all sorts of images. Still, innovation in robotics and mechatronics will be required to accommodate the depth and range of scientific experiments, in particular when it comes to accuracy and precision (which should not present a problem), while also helping with the current, very human problem of reproducibility (40). This is expected to have a significant impact on the reproducibility of science, as automating science requires semantic precision. LLMs will also interface between human and robot instructions, making it easier to create tools to automate experiments in natural language, effectively instantiating a robot assistant able to process human instructions for scientific experimentation.

# Rejection, Validation and Model Selection

Model selection and reduction have been a recurring theme across several sub-fields, such as computational biology and neuroscience, with special reference to dynamical forward models. The idea is that if a complex nonlinear model can be reduced in complexity (fewer
state variables and parameters), the investigator can more readily discern which parameters and state variables are more crucial to the model's behaviour, facilitating model analysis and understanding. One example is the reduction of the four-dimensional Hodgkin–Huxley model to a two-dimensional FitzHugh–Nagumo (FHN) system (107). The core idea was to perform a time-scale separation into fast and slow subsystems. This has been used in several model reduction studies, including the cell cycle. Techniques for dimension reduction, feature, and model selection will be helpful at this stage, from statistical approaches such as principal component analysis to more sophisticated ones such as minimal-information-loss techniques.

Another core idea for model selection is that each hypothesis formed will have a predicted probability of being correct, possibly along with the associated cost of the respective experiment. This may be the monetary cost of executing the experiment, plus a temporal discount rate to value finding results more quickly. It has been empirically shown that using a Bayesian approach to experiment selection is sound and outperforms experiments chosen manually (21).
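As a concrete illustration of the reduction discussed above, the sketch below integrates the two-variable FitzHugh-Nagumo system with forward Euler. The parameter values and input current are illustrative defaults commonly used for this model, not fitted to any particular neuron.

```python
# Minimal sketch of the reduced FitzHugh-Nagumo (FHN) model: two state
# variables (v, w) stand in for the four of Hodgkin-Huxley, exploiting the
# fast/slow time-scale separation. Parameters are illustrative defaults.

def fhn_step(v, w, I_ext, dt, a=0.7, b=0.8, eps=0.08):
    """One forward-Euler step of the FitzHugh-Nagumo equations."""
    dv = v - v**3 / 3 - w + I_ext        # fast variable (membrane potential)
    dw = eps * (v + a - b * w)           # slow recovery variable
    return v + dt * dv, w + dt * dw

def simulate(I_ext=0.5, dt=0.01, steps=20000):
    """Integrate the FHN system and return the voltage trace."""
    v, w = -1.0, 1.0
    trace = []
    for _ in range(steps):
        v, w = fhn_step(v, w, I_ext, dt)
        trace.append(v)
    return trace

trace = simulate()
print(min(trace), max(trace))  # supra-threshold I_ext yields sustained oscillations
```

For this input current the fixed point is unstable and the system settles onto a limit cycle, which is the qualitative behaviour the reduced model is meant to preserve from the full Hodgkin-Huxley equations.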
Current AI has shown the ability to yield valuable insights from noisy or incomplete data, optimise procedure design, and learn notions of structure amongst heterogeneous observations. Neural networks have shown utility in isolating proper signals from noisy datasets spanning disciplines from physics to biology; such capabilities could be critical to establishing scientific conclusions as we reach the practical limit of experimental data quality (108, 109). Approaches from optimisation have demonstrated an ability to reduce the expense of experimental campaigns by optimising sampling patterns, using, for instance, bandit-style methods to more rapidly design electric batteries or iteratively specify experimental conditions in biology. Structure-learning techniques from the graphical-model literature could find use in identifying statistically meaningful relationships from large amounts of unannotated data (108).

# Knowledge Representation and Natural Language Processing
Ingested knowledge may no longer need to be machine-readable, whether rule-based or probabilistic, given that LLMs can interface between these forms; but the possible caveats of LLMs, such as low-level hidden misalignments, are difficult to unveil, complicating traceability and liability. LLMs can allow machines to read, interpret, and exploit the current knowledge of a scientific domain in human natural language and digest the relevant literature in the target area. An AI-led scientific-discovery approach will require at least access to the hypothesis space of interest, so that the system can validate or reject a hypothesis based on contradiction or confirmation of previous knowledge, which may be difficult in a black box like an LLM. So, the LLM will need to be self-explanatory, with the caveat that the output explanation may not fit the internal statistical derivation of what the LLM ends up producing. An independent system and a more explainable mechanistic process may be needed to verify the output. Without LLMs, this task would have required massive databases and curation efforts for domains that are not already significantly represented in a computable fashion.
Although all sorts of languages can be used to represent knowledge, some domains will be aptly represented by propositional-logic rules, such as simplified genetic circuits, to avoid these potential misalignments from LLMs or statistical ML in general. Other domains will require more sophisticated representations, either to encompass the greater complexity of an extended domain or to deal with the greater sophistication of, e.g., a
domain such as biomedicine, where system-expert rules with ifs, dos, and whiles are required, hence the full power of first-order logic and Turing-completeness.

For example, knowledge-representation systems/ontologies are well developed in biology: the Gene Ontology (GO), nascent Causal Activity Models with the GO, the Human Phenotype Ontology, Chemical Entities of Biological Interest, and the Ontology of Biomedical Investigation, among others (110, 111). So are integration efforts built on these ontologies, e.g., Monarch (112). The JST MIRAI 'Robotic Biology' project can also provide technologies to help adoption, such as LabCode, a common formal language for experimental protocols, LabLive, a laboratory-information IoT platform, and real-time parallel workflow-scheduling software that can decompose the processes in a given protocol and assign each to different robots/equipment so these are executed considering the dependencies and concurrency between them.
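The dependency-aware scheduling idea described above can be sketched as grouping protocol steps into "waves" of mutually independent tasks, each wave dispatchable in parallel to different robots or instruments. The protocol, step names, and `schedule` function below are invented for illustration; real systems such as those mentioned above use richer formal protocol languages.

```python
# Toy sketch of dependency-aware protocol scheduling: steps are grouped
# into waves, and all steps within a wave can run concurrently because
# their prerequisites are already satisfied (a layered topological sort).

def schedule(deps):
    """Group steps into waves; steps in a wave can run in parallel."""
    indegree = {step: len(pre) for step, pre in deps.items()}
    dependents = {step: [] for step in deps}
    for step, pre in deps.items():
        for p in pre:
            dependents[p].append(step)
    wave = sorted(s for s, d in indegree.items() if d == 0)
    waves = []
    while wave:
        waves.append(wave)
        nxt = []
        for s in wave:
            for t in dependents[s]:
                indegree[t] -= 1
                if indegree[t] == 0:   # all prerequisites now done
                    nxt.append(t)
        wave = sorted(nxt)
    return waves

# Hypothetical wet-lab protocol expressed as step -> prerequisites.
protocol = {
    "prepare_media": [],
    "thaw_cells": [],
    "seed_plate": ["prepare_media", "thaw_cells"],
    "add_compound": ["seed_plate"],
    "image_plate": ["add_compound"],
}
print(schedule(protocol))
```

Here the first two steps have no prerequisites and form one parallel wave; the remaining steps must wait for their dependencies and execute sequentially.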
Another example is statistical relational learning (SRL), which combines relational learning and probability theory and is an active area of ML research (e.g., (113)), enabling the representation of beliefs about relational data using probabilistic models. Relational learning (RL) is a general representation language based on first-order predicate logic (113). Such probabilistic logic models enable the specification of graphical models (Bayesian networks, Markov networks, etc.) over large relational domains. One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. A key advantage of RL is that it can easily incorporate background scientific knowledge and learn about structured objects such as scientific models, making it particularly appropriate for utilising background bioinformatic data (114). These approaches can be further enhanced or complemented by the do-calculus (96, 115) or algorithmic information dynamics (61).

Deep neural networks are also good at capturing the apparent granularity and complexity of natural phenomena in a computable form (in weighted vectors of numerical matrices). The success of neural networks implies that once one captures an object in an optimal way, classification is trivial, as it was for deep learning in the protein-folding challenge (75, 83), with its limitations.
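The probabilistic-model idea underlying SRL can be illustrated at its smallest scale: a two-node Bayesian network over a regulator and its target gene, queried by exact enumeration. The network, the probabilities, and the function name are invented for illustration.

```python
# Toy two-node Bayesian network (regulator -> target gene) queried exactly.
# All probabilities are made up for illustration.

P_REG = 0.3                               # P(regulator active)
P_TARGET = {True: 0.9, False: 0.1}        # P(target expressed | regulator state)

def posterior_regulator_given_target():
    """P(regulator active | target observed expressed), via Bayes' rule."""
    joint_on = P_REG * P_TARGET[True]            # regulator on, target on
    joint_off = (1 - P_REG) * P_TARGET[False]    # regulator off, target on
    return joint_on / (joint_on + joint_off)

p = posterior_regulator_given_target()
print(round(p, 4))  # -> 0.7941
```

SRL formalisms generalise exactly this computation from concrete entities ("this regulator, this gene") to first-order rules quantified over all entities of a type, which is what makes them suitable for incorporating background bioinformatic knowledge at scale.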
Assuming that an appropriate formalism to record observations could be found for any domain, a modeller may be faced with a severe feature-selection problem, which translates into the question of the identity of the relevant state variables of the systems of interest, e.g., drug-docking dynamics for drug discovery or cytokines for cell dynamics. On the one hand, all the system entities that are measured could define the set of state variables to be represented, e.g., drugs or proteins, augmented with the set of rules to which the entities may be subjected, such as thermodynamics or collisions. However, this type of representation could quickly become very complex (116). On the other hand, a certain subset of combinations of measured state variables may be a useful representation of the governing dynamics driving a possible system, and this is a question that needs to be asked and resolved for scientific domains on a case-by-case basis. Such a feature-selection problem in computably representable objects is often found in analyses that assume a purely stochastic nature for the system's generative processes, even though the system also comprises deterministic, mechanistic, or computable subprocesses (101). In addition, even in cases where the whole algorithmic space of possibilities is covered, analysing the information content carried by a network depends strongly on the multidimensional space into which it is embedded (117), where distortions may be exponential for
multidimensionality-agnostic encodings.

Thus, developing expressive and efficient frameworks to computationally represent and capture a wide range of scientific knowledge about processes, models, observations and hypotheses is key. Additionally, in the opposite direction of knowledge representation by machines, the AI for scientific discovery may need to communicate, in the form of a publication or other scientific means, the innovation and methods behind a discovery to humans, and to articulate its significance and impact. Thus, not only will we have to improve the knowledge representation (43) of these scientific objects of inquiry, but we will also need to include (meta)knowledge representation of the social dynamics constituted by the scientific practice of humans and AI algorithms in the loop. This in turn should lead to better mitigation of the aforementioned problems of reproducibility and biases in science. Capturing scientific knowledge will push the limits of the state of the art.

A choice that has to be made, on a case-by-case basis, is whether it is required that the AI conducts the experiments without much human understanding, or whether it is acceptable not to have a sophisticated translation of both the hypotheses generated and the process arriving at a conclusion. In cases where there is a requirement for human understanding, and even in the most general case, at least partial interpretation by human scientists may be required.
Thus, knowledge representation and natural-language-processing techniques will need to be developed jointly, both to feed the system with the current knowledge relevant to the hypothesis space, and to guide the search (in cases of human-machine interaction) or follow the inference process and interpret the results (118, 119). These requirements will force us to make progress on humanly readable and interpretable machine-human translation.

# Integration, Interpretation and Interfacing

One of the most challenging aspects of scientific discovery is integrating a new piece of information with the corpus of existing human knowledge. Analysing the data will require moving to the larger learning loop, where there is a broader view of the results for possible (re-)interpretation. This is because, while the specific objective for the target hypothesis may have been rejected, one of the main serendipity checkpoints is the reinterpretation of results in a broader context.

Machine learning systems have proven incredibly useful for automated knowledge-base construction. They have recently contributed to creating multiple large databases describing, for instance, genome-wide association studies and drug-disease interactions directly from the published literature (120). This ability to create massive knowledge bases that rapidly and effectively contextualise new findings could substantially accelerate scientific discovery by ensuring that seemingly disparate dots are more rapidly connected.
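The literature-to-knowledge-base idea above can be sketched in its crudest form: scanning text for sentence-level co-mentions of known drugs and diseases and tallying candidate interaction edges. The vocabularies, sentences, and function are invented; production systems such as those cited use trained relation-extraction models rather than bare co-occurrence.

```python
# Toy sketch of knowledge-base construction from literature: count
# sentence-level co-mentions of known drugs and diseases as candidate
# interaction edges. Vocabularies and corpus are invented.

from collections import Counter
import re

DRUGS = {"metformin", "aspirin"}
DISEASES = {"diabetes", "stroke"}

def extract_edges(text):
    """Tally (drug, disease) pairs that co-occur within one sentence."""
    edges = Counter()
    for sentence in re.split(r"[.!?]", text.lower()):
        words = set(re.findall(r"[a-z]+", sentence))
        for d in DRUGS & words:
            for s in DISEASES & words:
                edges[(d, s)] += 1
    return edges

corpus = ("Metformin is a first-line treatment for diabetes. "
          "Aspirin may reduce the risk of stroke. "
          "Metformin use in diabetes is well studied.")
print(extract_edges(corpus))
```

Even this naive tally connects "disparate dots" across sentences; the hard problems the text points to are distinguishing genuine relations from incidental co-mentions and normalising entity names, which is where trained extraction models and ontologies come in.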
However, exploring and understanding user context requires automating certain social, political, and economic aspects of interconnected knowledge that are intrinsic to science (37). The AI systems' interactions with scientists must be guided by a knowledge-rich multi-agent model (106) that enables the AI systems to act as colleagues, as LLMs may now allow. This constitutes an inextricable loop in which human scientists and AI scientists are parts of a whole system, which the AI algorithm should try to optimise.

A striking example of such an optimal interplay has been the evolution of machine-human chess collaboration. After the defeat of Garry Kasparov, it became standard for human chess players to practice with computers, and for champions it became impossible to reach the demanded level of play without intensive computer training (121). To this day, the strongest freestyle chess teams have been those able to strike a balance between human and computer training and playing.
Again, neural networks and statistical machine learning will not help in this process, at least not on their own or in their traditional architectures. What is most likely needed here is, first, an inference engine able to extract knowledge that is also readable by humans, especially under human-machine schemes. Classical logical inference engines are key, but so are hybrid approaches combining statistical learning and symbolic computation, so that the AI algorithms' objectives and their respective performance measures are not always fixed in advance (23). Techniques such as feature selection and data dimension reduction will be helpful in this regard. Secondly, an AI algorithm is needed that can simulate the network topological properties of scientific production (36) and perform the steps of the full cycle of AI-led scientific discovery, while taking into account the relational structures and biases that emerge when the AI-human relationship is analysed as a single system.

The application of AI to science will confer multiple advantages and eliminate some of the disadvantages of having a human in the loop, such as biases and lack of reproducibility. Yet, if humans rely on automated scientific discovery, verifiability and transparency are crucial: the coupled AI-human system has to be formally verifiable to ensure that it matches the goals and that the results match the process. In this manner, the AI algorithm should be designed to continuously reiterate its data gathering from the outputs and behaviours
of the whole system the AI is part of. The same holds for the human scientist, who needs to be able to perform, evaluate, and produce analytical reasoning while participating in this coupled computational-social system. This in turn may give rise to innovative methodologies and epistemological grounds that foster the scientific justification of the results and novelties discovered by such a coupled system.

# Closing the Loop

Finally, connecting all the steps will require a meta-algorithm that systematically manages each cycle and even decides when to break or restart the cycles (see Fig. 3), if human intervention is taking place. The whole cycle should be open to human intervention, and the AI algorithm should both reiterate the new insights and data given by humans and counter any bias that these may introduce.

Remote web control and monitoring of full-cycle scientific discovery may require technologies such as TypeScript, React, GraphQL, Jest, and Redux, for example to create a web-based beamline control system. Techniques such as optimisation and anomaly detection can be used to find possible gaps and even glitches (found or promoted). These gaps can be exploited to reinterpret data, explore other regions of the hypothesis space, and kick-start the process of hypothesis generation again, thus closing and restarting the discovery cycle.
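A minimal sketch of such a meta-algorithm, assuming nothing about any real system: the hypothesis generator, the simulated experiment, the anomaly check, and all thresholds below are illustrative stand-ins, shown only to make the break/restart/continue control flow concrete:

```python
import random

random.seed(0)  # deterministic for the sketch

def generate(history):
    """Propose a hypothesis; here just a number guessing an unknown target."""
    return random.uniform(0, 1)

def run_experiment(h):
    """Stand-in for an automated experiment scoring hypothesis h."""
    target = 0.7  # the hidden 'law' the loop is searching for
    return 1 - abs(h - target)

def is_anomalous(score, scores):
    """Toy anomaly check: flag scores far outside the range seen so far,
    which triggers reinterpretation and a restart of the cycle."""
    if len(scores) < 3:
        return False
    mean = sum(scores) / len(scores)
    spread = (max(scores) - min(scores)) or 1.0
    return abs(score - mean) > spread  # crude, illustrative threshold

def closed_loop(max_cycles=50, good_enough=0.99):
    """Meta-algorithm: generate -> experiment -> evaluate, deciding each
    cycle whether to accept (break), restart, or continue."""
    history, scores = [], []
    for cycle in range(max_cycles):
        h = generate(history)
        s = run_experiment(h)
        if is_anomalous(s, scores):
            history.clear()   # restart: discard and reinterpret from scratch
            scores.clear()
            continue
        history.append(h)
        scores.append(s)
        if s >= good_enough:  # break: hypothesis accepted
            return h, s, cycle
    if not scores:
        return None, 0.0, max_cycles
    i = max(range(len(scores)), key=scores.__getitem__)
    return history[i], scores[i], max_cycles  # budget exhausted: best so far

best, score, cycles = closed_loop()
print(cycles, round(score, 3))
```

The design choice worth noting is that the stopping and restarting criteria live in the meta-algorithm, not in the learner: this is where human intervention, or an anomaly signal, can redirect the search without modifying the underlying hypothesis generator.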
Notice that each of the above aspects of AI-led closed-loop science can be considered a landmark project that will also require solutions to many standard technical problems (122). Therefore, toward closing the loop with an AI-led science, the "grand challenge" (122) that we propose ranges over automating not only laboratory practices and theory making, but also writing a paper, refereeing, and disseminating achievements.

# Conclusion: the Future of AI in Scientific Discovery

Future scientific progress has become almost unthinkable without the involvement of machine learning. We have explored some challenges and opportunities in utilising and exploiting AI. We argue that a closed-loop formulation not only augments and accelerates scientific discovery but also leads science in new directions, thus potentially disrupting the future trajectory of human science. Such closed-loop experimentation led by AI may also mitigate current challenges, such as the production and replication of data.

The development of AI to discover new fundamental scientific laws and representations is different from the application of AI to games such as chess, shogi, or Go. However, recent developments surprisingly suggest that some scientific challenges may not be that different from these games (123–125).
New questions for scientists and policymakers are increasingly pertinent. For example, do we require AI equipped with sufficient intelligence and autonomy to render it capable of sensing and making observations to ask novel scientific questions? Who should control AI4Science systems: humans, or other tertiary systems we may trust? How will the role of the future scientist change? Yet, these challenges must be addressed, since we urgently need to solve problems like cancer and climate change.

# References

1. Y. LeCun, Y. Bengio, G. Hinton, Nature 521, 436 (2015).
2. J. Schmidhuber, Neural Networks 61, 85 (2015).
3. D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick, Neuron 95, 245 (2017).
4. A. Zador, et al., Nature Communications 14, 1597 (2023).
5. J. Pearl, Causality: Models, Reasoning and Inference (Cambridge University Press, USA, 2009), second edn.
6. M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, P. Vandergheynst, IEEE Signal Processing Magazine 34, 18 (2017).
7. G. E. Karniadakis, et al., Nature Reviews Physics 3, 422 (2021).
8. A. Lavin, et al., arXiv Preprints (2021). arXiv:2112.03235 [cs.AI].
9. A. Holzinger, A. Saranti, C. Molnar, P. Biecek, W. Samek, AI - Beyond Explainable AI, A. Holzinger, et al., eds. (Springer International Publishing, Cham, 2022), vol. 13200, pp. 13–38. Series Title: Lecture Notes in Computer Science.
10. AI for Science. ai4sciencecommunity. https://ai4sciencecommunity.github.io/.
11. P. Berens, K. Cranmer, N. D. Lawrence, U. von Luxburg, J. Montgomery, arXiv Preprints (2023). arXiv:2303.04217 [cs].
12. G. Karagiorgi, G. Kasieczka, S. Kravitz, B. Nachman, D. Shih, Nature Reviews Physics 4, 399 (2022).
13. M. Raghu, E. Schmidt, arXiv Preprints (2020). arXiv:2003.11755 [cs, stat].
14. F. Noé, A. Tkatchenko, K.-R. Müller, C. Clementi, Annual Review of Physical Chemistry 71, 361 (2020).
15. B. A. Richards, et al., Nature Neuroscience 22, 1761 (2019).
16. H. Wang, et al., Nature 620, 47 (2023).
17. R. Bommasani, et al., arXiv Preprints (2022). arXiv:2108.07258 [cs].
18. OpenAI, arXiv Preprints (2023). arXiv:2303.08774 [cs].
19. A. Srivastava, et al., Transactions on Machine Learning Research (2023).
20. A. W. Senior, et al., Nature 577, 706 (2020).
21. R. D. King, et al., Nature 427, 247 (2004).
22. R. D. King, et al., Science 324 (2009).
23. J. McCarthy, Artificial Intelligence 171, 1174 (2007).
24. D. Silver, et al., Science 362, 1140 (2018).
25. J. M. Stokes, et al., Cell 180, 688 (2020).
26. B. Alipanahi, A. Delong, M. T. Weirauch, B. J. Frey, Nature Biotechnology 33, 831 (2015).
27. A. W. Senior, et al., Nature 577, 706 (2020).
28. R. D. King, Scientific American 304, 72 (2011).
29. D. Wang, et al., Proceedings of the ACM on Human-Computer Interaction pp. 1–24 (2019).
30. B. A. Nosek, J. R. Spies, M. Motyl, Perspectives on Psychological Science 7, 615 (2012).
31. D. Fanelli, R. Costas, J. Ioannidis, PNAS 14, 3714 (2017).
32. R. Nuzzo, Nature pp. 182–185 (2015).
33. S. N. Goodman, D. Fanelli, J. P. Ioannidis, Getting to Good: Research Integrity in the Biomedical Sciences pp. 96–102 (2018).
34. J. K. Harris, et al., Public Health Reports 134, 109 (2019).
2307.07522#61
The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence
Recent advances in machine learning and AI, including Generative AI and LLMs, are disrupting technological innovation, product development, and society as a whole. AI's contribution to technology can come from multiple approaches that require access to large training data sets and clear performance evaluation criteria, ranging from pattern recognition and classification to generative models. Yet, AI has contributed less to fundamental science in part because large data sets of high-quality data for scientific practice and model discovery are more difficult to access. Generative AI, in general, and Large Language Models in particular, may represent an opportunity to augment and accelerate the scientific discovery of fundamental deep science with quantitative models. Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery, including self-driven hypothesis generation and open-ended autonomous exploration of the hypothesis space. Integrating AI-driven automation into the practice of science would mitigate current problems, including the replication of findings, systematic production of data, and ultimately democratisation of the scientific process. Realising these possibilities requires a vision for augmented AI coupled with a diversity of AI approaches able to deal with fundamental aspects of causality analysis and model discovery while enabling unbiased search across the space of putative explanations. These advances hold the promise to unleash AI's potential for searching and discovering the fundamental structure of our world beyond what human scientists have been able to achieve. Such a vision would push the boundaries of new fundamental science rather than automatize current workflows and instead open doors for technological innovation to tackle some of the greatest challenges facing humanity today.
http://arxiv.org/pdf/2307.07522
Hector Zenil, Jesper Tegnér, Felipe S. Abrahão, Alexander Lavin, Vipin Kumar, Jeremy G. Frey, Adrian Weller, Larisa Soldatova, Alan R. Bundy, Nicholas R. Jennings, Koichi Takahashi, Lawrence Hunter, Saso Dzeroski, Andrew Briggs, Frederick D. Gregory, Carla P. Gomes, Jon Rowe, James Evans, Hiroaki Kitano, Ross King
cs.AI, cs.LG
35 pages, first draft of the final report from the Alan Turing Institute on AI for Scientific Discovery
null
cs.AI
20230709
20230829
[ { "id": "2003.11755" }, { "id": "2112.03235" }, { "id": "2108.07258" }, { "id": "2112.12275" }, { "id": "2303.08774" }, { "id": "2303.04217" } ]
34. J. K. Harris, et al., Public Health Reports 134, 109 (2019).
35. P. Kaanders, P. Sepulveda, T. Folke, P. Ortoleva, B. D. Martino, bioRxiv p. 2021.06.29.450332 (2021).
36. D. Wang, A.-L. Barabási, The Science of Science (Cambridge University Press, Cambridge, UK, 2021).
37. S. Fortunato, et al., Science 359 (2018).
38. Nature, Nature 537, 465 (2016).
39. V. Colizza, A. Flammini, M. A. Serrano, A. Vespignani, Nat. Phys. 2, 110 (2006).
40. M. Baker, Nature 533, 452 (2016).
41. M. Baddeley, EMBO Rep. 16, 902 (2015).
42. D. B. Resnik, K. C. Elliott, Accountability in Research 23, 31 (2016).
43. J. A. Evans, J. G. Foster, Science 331 (2011).
44. S. Hernández-Orozco, et al., Frontiers in Artificial Intelligence 3, 567356 (2021).
45. L. Venturi, A. Bandeira, J. Bruna, Journal of Machine Learning Research 20, 1 (2019).
46. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016).
47. V. Buhrmester, D. Münch, M. Arens (2019).
48. C. Rudin, Nature Machine Intelligence 1, 206 (2019).
49. R. Salakhutdinov, Annual Review of Statistics and Its Application 2, 361 (2015).
50. A. Creswell, et al., IEEE Signal Process. Mag. 35, 53 (2018).
51. Y. Bian, X.-Q. Xie, J. Mol. Model. 27, 71 (2021).
52. H. Zenil, Entropy 22, 612 (2020).
53. B. Schölkopf, et al., Proceedings of the IEEE 109, 612 (2021).
54. M. J. Colbrook, V. Antun, A. C. Hansen, Proceedings of the National Academy of Sciences 119 (2022).
55. C. S. Calude, G. Longo, Foundations of Science 22, 595 (2017).
56. G. Smith, SN Applied Sciences 2 (2020).
57. C. Nadeau, Mach. Learn. 52, 239 (2003).
58. J. Spooner, V. Palade, M. Cheah, S. Kanarachos, A. Daneshkhah, Applied Sciences 11, 471 (2021).
59. H. Kitano, AI Magazine 37 (2016).
60. H. Zenil, N. A. Kiani, A. A. Zea, J. Tegnér, Nature Machine Intelligence 1, 58 (2019).
61. H. Zenil, N. Kiani, F. Abrahão, J. Tegnér, Scholarpedia Journal 15, 53143 (2020).
62. F. S. Abrahão, H. Zenil, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 380 (2022).
63. X.-B. Jin, R. J. Robert Jeremiah, T.-L. Su, Y.-T. Bai, J.-L. Kong, Sensors (Basel) 21, 2085 (2021).
64. S. Thieme, Knowledge Representation and Organization in Machine Learning (Springer-Verlag, Berlin/Heidelberg, 2005), pp. 177–191.
65. R. Goebel, et al., Lecture Notes in Computer Science (Springer International Publishing, Cham, 2018), pp. 295–303.
66. J. Copeland, Alan Turing: The codebreaker who saved 'millions of lives' - BBC News (2012).
67. J. Copeland, D. Proudfoot, Alan Turing, Codebreaker and Computer Pioneer - AlanTuring.net, The Turing Archive for the History of Computing (2004).
68. L. L. Cavalli-Sforza, Cell 132 (2008).
69. IEEE, IEEE Intelligent Systems 26 (2011).
70. J. I. Seeman, Chemical & Engineering News pp. 10–14 (2013).
71. J. Lederberg, E. A. Feigenbaum, B. G. Buchanan, R. K. Lindsay, Applications of Artificial Intelligence for Organic Chemistry: The DENDRAL Project (McGraw-Hill, 1980).
72. B. G. Buchanan, E. H. Shortliffe, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Addison-Wesley, Reading, MA, 1984).
73. P. W. Langley, H. A. Simon, G. Bradshaw, J. M. Zytkow, Scientific Discovery: Computational Explorations of the Creative Process (MIT Press, Cambridge, Mass., 1987).
74. B. Burger, et al., Nature 583 (2020).
75. J. Jumper, et al., Nature 596, 583 (2021).
76. D. B. Lenat, in Machine Learning, R. Michalski, J. Carbonell, T. M. Mitchell, eds. (Springer, Berlin, Heidelberg, 1983).
77. K. W. Haase, Discovery Systems, AI Memo 898, Tech. Rep., Artificial Intelligence Laboratory, MIT, Cambridge, Mass. (1986).
78. DataRobot, DataRobot - AI Cloud - The Next Generation of AI.
79. Eureqa, Eureqa Models - DataRobot.
80. Nutonian, DataRobot AI Cloud Platform.
81. R. Dubčáková, Genet. Program. Evolvable Mach. 12, 173 (2011).
82. J. L. Awange, B. Paláncz, R. H. Lewis, L. Völgyesi, Mathematical Geosciences (Springer International Publishing, Cham, 2018), pp. 321–357.
83. G.-W. Wei, Nature Machine Intelligence 1, 336 (2019).
84. J. Skolnick, M. Gao, H. Zhou, S. Singh, J. Chem. Inf. Model. 61, 4827 (2021).
85. J. Liu, et al., Geophys. Res. Lett. 48 (2021).
86. R. Gupta, et al., Mol. Divers. 25, 1315 (2021).
87. R. Liu, L. Wei, P. Zhang, Nat. Mach. Intell. 3, 68 (2021).
88. R. Bonneau, et al., Cell 131, 1354 (2007).
89. J. R. Karr, et al., Cell 150, 389 (2012).
90. Y. Luo, J. Peng, J. Ma, Nat. Mach. Intell. 2, 426 (2020).
91. F. S. Abrahão, et al., arXiv preprint arXiv:2112.12275 [cs.IT] (2023).
92. Y. Gil, M. Greaves, J. Hendler, H. Hirsh, Science 346, 171 (2014).
93. K. R. Popper, Objective Knowledge: An Evolutionary Approach (Oxford University Press, New York, 1972).
94. R. D. King, M. Liakata, C. Lu, S. G. Oliver, L. N. Soldatova, Journal of the Royal Society Interface 8, 1440 (2011).
95. B. Russell, The Problems of Philosophy (Home University Library, 1912).
96. J. Pearl, Biometrika 82, 669 (1995).
97. C. G. Morgan, Artificial Intelligence 2, 179 (1971).
98. H. Zenil, et al., SSRN Electron. J. (2018).
99. H. Zenil, et al., iScience pp. 1160–1172 (2019).
100. D. B. Lenat, Artificial Intelligence 19, 189 (1982).
101. H. Zenil, N. A. Kiani, J. Tegnér, Physical Review E 96, 012308 (2017).
102. S. Hernández-Orozco, F. Hernández-Quiroz, H. Zenil, Artificial Life 24, 56 (2018).
103. S. Hernández-Orozco, N. A. Kiani, H. Zenil, Royal Society Open Science 5, 180399 (2018).
104. F. S. Abrahão, K. Wehmuth, A. Ziviani, Theoretical Computer Science 785, 83 (2019).
105. F. S. Abrahão, K. Wehmuth, A. Ziviani, Complex Systems 27 (2018).
106. H. Kitano, et al., AI Magazine 18, 73 (1997).
107. B. Lindner, L. Schimansky-Geier, Physical Review E 60, 7270 (1999).
108. M. Drton, M. H. Maathuis, Annual Review of Statistics and Its Application 4, 365 (2017).
109. S. R. Eddy, Nat. Biotechnol. 22, 1177 (2004).
110. R. Stevens, C. A. Goble, S. Bechhofer, Brief. Bioinform. 1, 398 (2000).
111. J. B. L. Bard, S. Y. Rhee, Nat. Rev. Genet. 5, 213 (2004).
112. K. A. Shefchek, et al., Nucleic Acids Res. 48, D704 (2020).
113. L. De Raedt, Logical and Relational Learning (Springer Berlin Heidelberg, Berlin, Heidelberg, 2008).
114. O. I. Orhobor, N. N. Alexandrov, R. D. King, Machine Learning 109, 2195 (2020).
115. J. Pearl, in Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012, pp. 4–11 (2012).
116. C. Tang, et al., Neural Netw. 117, 163 (2019).
117. F. S. Abrahão, K. Wehmuth, H. Zenil, A. Ziviani, Entropy 23 (2021).
118. G. G. Chowdhury, Annual Review of Information Science and Technology 37, 51 (2005).
119. E. Cambria, B. White, IEEE Comput. Intell. Mag. 9, 48 (2014).
120. C. Andronis, A. Sharma, V. Virvilis, S. Deftereos, A. Persidis, Brief. Bioinform. 12, 357 (2011).
2307.03875
1
# Abstract

Supply chain operations traditionally involve a variety of complex decision-making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide – a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.

# Introduction
2307.03875#1
Large Language Models for Supply Chain Optimization
Supply chain operations traditionally involve a variety of complex decision making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide -- a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
http://arxiv.org/pdf/2307.03875
Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, Ishai Menache
cs.AI, cs.CL, cs.DM, cs.LG
null
null
cs.AI
20230708
20230713
[ { "id": "2302.13971" }, { "id": "2203.05132" }, { "id": "1810.04805" }, { "id": "2112.08674" }, { "id": "2108.07258" }, { "id": "2306.11644" }, { "id": "2204.05999" }, { "id": "2305.12050" }, { "id": "2305.17126" }, { "id": "2304.11477" }, { "id": "2107.03374" }, { "id": "2201.11990" }, { "id": "2303.03378" }, { "id": "2305.14552" }, { "id": "2302.06144" }, { "id": "2306.05499" }, { "id": "2010.00711" }, { "id": "2301.00234" }, { "id": "2303.08896" }, { "id": "2303.06689" }, { "id": "2305.05176" }, { "id": "2212.10400" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2104.08691" }, { "id": "2302.12813" } ]
2307.03875
2
# Introduction Modern supply chains are complex, containing multiple tiers of suppliers, customers, and service providers [1]. Optimization tools have been widely utilized for decision making in such supply chains. These tools not only automate some of the decision making processes, but also result in efficiency gains and substantial cost reductions across many industries [2]. However, some of the automated processes require involving business operators, for understanding and explaining certain decisions, providing what-if analysis, and even overriding some optimization outcomes. In many cases, these operators are not equipped with the necessary background in optimization, resulting in time-consuming back-and-forth interactions with program managers, data scientists and engineers. Large language models (LLMs) have recently emerged as a promising tool for assisting humans with a wide variety of tasks, such as writing documents, presenting work, coding and health diagnosis [3, 4, 5]. Generative multimodal LLMs, such as OpenAI’s GPT-4, are being rapidly integrated within co-pilots, for answering questions and increasing productivity through simple, language based interactions with technology [6]. In this paper, we study how state-of-the-art LLMs can be applied for reasoning about supply chain optimization. Using LLMs in our context is challenging. First, the underlying optimization problems are often large scale combinatorial optimization problems, and solving
2307.03875#2
2307.03875
3
Figure 1: OptiGuide interface. them directly is currently out of reach for LLMs [4]. Second, one needs to align the large foundation models to answer the domain-specific questions. Due to the large scale, fully training these models is not possible, and even middle-ground solutions such as fine-tuning LLMs require substantial compute and engineering investments [7]. Last but not least, any use of LLMs in business-critical operations should have solutions for when “things go wrong”, including diagnosing and recovering from mistakes and hallucinations [8].
2307.03875#3
2307.03875
4
In view of these challenges, we design and implement OptiGuide – a framework that employs LLMs to interpret supply chain optimization solutions. A key idea behind OptiGuide is not to replace optimization technology by LLMs, but rather use optimization solvers in tandem with LLMs. In our design (see Figure 2 for system architecture), the LLM is responsible for translating the human query to “optimization code”, which is in turn used by an optimization solver to produce the necessary output; the output then passes through the LLM for producing the answer in human language (English). This architecture is used both for textual explanations and visualizations of the optimization solution, as well as for answering what-if queries. To address what-if queries, OptiGuide uses the LLM to appropriately modify the input to the optimization solver, and then reruns the solver under the hood to produce an answer.
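The translate-execute-summarize loop described above can be sketched in a few lines. This is a hypothetical stand-in, not the paper's implementation: `call_llm` is a stub for a real chat-completion API, and `solve` is a trivial cost lookup rather than an optimization solver.

```python
# Minimal sketch of an OptiGuide-style pipeline: the LLM turns a question
# into code, the code runs against the solver, and the result is phrased back.

def call_llm(prompt: str) -> str:
    """Stub LLM call; a real system would query a model such as GPT-4."""
    # Returns a code snippet answering the what-if question (hard-coded here).
    return "result = solve({'A': 12, 'B': 9})"

def solve(supplier_costs: dict) -> int:
    """Toy 'solver': serve one demand from the cheapest supplier."""
    return min(supplier_costs.values())

def answer_query(question: str) -> str:
    code = call_llm(f"Write code answering: {question}")
    scope: dict = {"solve": solve}
    exec(code, scope)                 # run the LLM-generated snippet
    result = scope["result"]
    # OptiGuide passes the raw result back through the LLM; we format directly.
    return f"The optimized cost would be {result}."

print(answer_query("What if we used supplier B?"))
```

The key design point this mirrors is that the LLM never computes the answer itself; it only writes glue code that the solver executes.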
2307.03875#4
2307.03875
5
To enable OptiGuide, we solve multiple technical challenges. First, we circumvent all forms of costly training by applying in-context learning, namely “teaching” the LLM about the domain directly through the query’s prompt (i.e., as part of the inference). This requires careful co-design of the optimization code and the prompt, with the understanding that the prompt can be space constrained. For example, we write the code in a certain functional form that can be efficiently mapped to questions asked by humans. We also design a simple safeguard mechanism that confronts output mistakes. To evaluate the effectiveness of OptiGuide, we introduce an evaluation benchmark that includes (i) a variety of common supply chain scenarios, and (ii) an evaluation methodology that incorporates new metrics for quantifying accuracy, generalizability within a scenario, and extrapolation capability to unseen scenarios. We test OptiGuide on five different scenarios and
Figure 2: The OptiGuide framework.
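The prompt co-design described above can be illustrated with a small sketch. Everything here (the few-shot question/code pairs, `build_prompt`, the application description) is invented for illustration and is not the paper's actual prompt format.

```python
# Sketch of in-context "teaching": the prompt packs an application description
# plus a few example (question -> code) pairs, then asks for code for the new
# question. All names and examples are hypothetical.

FEW_SHOT = [
    ("What is the total cost?", "result = total_cost(plan)"),
    ("How many units come from supplier A?", "result = units_from(plan, 'A')"),
]

def build_prompt(question: str, app_doc: str) -> str:
    parts = [f"Application description:\n{app_doc}\n"]
    for q, code in FEW_SHOT:
        parts.append(f"Q: {q}\nCode: {code}\n")
    parts.append(f"Q: {question}\nCode:")   # the LLM completes from here
    return "\n".join(parts)

prompt = build_prompt("What if demand rises 10%?", "Server placement optimizer.")
print(prompt)
```

Because the prompt is space constrained, a real system would select which few-shot pairs to include per query rather than sending all of them.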
2307.03875#5
2307.03875
6
obtain 93% accuracy on average using GPT-4. We view the benchmark and methodology as contributions that stand on their own, and can be used to evaluate future approaches. We are in the process of open-sourcing our benchmark. Finally, we deploy OptiGuide for the server deployment optimization used in Microsoft Azure’s supply chain. We discuss some of the engineering challenges, and report initial promising results from our evaluation. We believe that this paper sets important foundations, which can be used by other organizations for explaining optimization outcomes through LLMs. There are several future directions that emerge from our study, for example, using smaller models that can be trained with modest resources. As a longer-term goal, it is natural to expand the scope of LLMs beyond explainability, to facilitate interactive optimization (e.g., “please provide a more load-balanced solution”, “please use at most two suppliers”). With the constant advances of LLM technology, it will be fascinating to examine whether LLMs can be utilized not only as translators, but also for refining and improving optimization outcomes.
2307.03875#6
2307.03875
7
The rest of the paper is organized as follows. In Section 2, we provide the necessary background on supply chain optimization and current LLM technology. In Section 3, we describe the design of OptiGuide. Section 4 describes our evaluation benchmark, and OptiGuide’s evaluation results. In Section 5, we outline our findings from OptiGuide’s deployment in Azure’s supply chain. We discuss future perspectives in Section 6. # 2 Background and Motivation In this section, we provide brief background on decision making in supply chain operations, and elaborate on the notion of explainability. We then describe current capabilities and limitations of LLMs, and conclude with a simple supply chain example, which will be useful for explaining our solution approach. # 2.1 Decision Making in Supply Chains A supply chain may be defined as “an integrated network of facilities and transportation options for the supply, manufacture, storage, and distribution of materials and products” [9]. A simple supply chain may consist of a company (e.g., a service provider) and the set of its suppliers and customers [1]. However, most supply chains nowadays contain multiple tiers with suppliers of suppliers, customers of customers, and hierarchies of service providers [1]. This results in highly complex global networks where decisions must be optimized across multiple layers to satisfy customer demand while guaranteeing operational efficiency.
2307.03875#7
2307.03875
8
Decision making in supply chains spans different time-scales: starting from the design of the supply chain network (e.g., location of factories), planning (e.g., procurement of supply), and execution (e.g., transportation of goods). This leads to many types of decisions; a few examples:

• How many factories should we open, where, and with what manufacturing capacity?
• What suppliers should we use?
• How much inventory should we keep in stock and at which locations?
• How should we transport intermediate and finished goods efficiently?

The complexity of the decision-making often requires the design of optimization approaches that can incorporate a multitude of constraints and objectives, and still generate good quality solutions in plausible running times. To this end, different aspects of the supply chain (facility location, inventory planning, routing) may be optimized separately or considered jointly (e.g., inventory planning integrated with routing [10]). Common solution approaches for these optimization problems include Mixed Integer Programming based techniques and heuristics that can tackle the large scale of the problem. # 2.2 Explainability
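As a toy illustration of the combinatorial flavor of such decisions, the sketch below brute-forces a tiny factory-opening problem. The factories, costs, and demand are invented; a real system would formulate this as a Mixed Integer Program and hand it to a solver rather than enumerate subsets.

```python
# Choose which factories to open so that total capacity covers demand,
# minimizing fixed opening costs plus per-unit production cost.
from itertools import combinations

factories = {"F1": (100, 50), "F2": (80, 30), "F3": (120, 70)}  # (fixed cost, capacity)
demand = 75
unit_cost = 2

best = None
for r in range(1, len(factories) + 1):
    for subset in combinations(factories, r):
        capacity = sum(factories[f][1] for f in subset)
        if capacity < demand:
            continue  # infeasible: cannot cover demand
        cost = sum(factories[f][0] for f in subset) + unit_cost * demand
        if best is None or cost < best[0]:
            best = (cost, subset)

print(best)  # cheapest feasible set of factories and its total cost
```

Subset enumeration is exponential in the number of factories, which is exactly why real supply chains rely on MIP solvers and heuristics instead.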
2307.03875#8
2307.03875
9
Business operators and planners involved in decision-making need to maintain a good understanding of the optimization outcomes. This allows them to not only address customer questions, but also react to unexpected events, and resolve inefficiencies and bottlenecks. However, the understanding is often challenging due to the complexity of the decision process (e.g., large scale, solution obtained by a “black-box” algorithm, etc.) and lack of optimization expertise. For concreteness, we provide below some examples of questions that operators may wish to answer.

Q1 What is the cost breakdown for each fulfilled demand?
Q2 How much excess inventory have I had per month in the past year?
Q3 What would happen if the demand at a particular location increased by 10%?
Q4 Can I reduce a factory’s manufacturing capacity by 5% and still meet the demand?
Q5 Why was a particular supplier selected for a demand?
Q6 How would selecting a different transportation option affect the delivery timelines and the overall cost?

These and other questions aim at explaining the outcome of supply chain decisions. They include analyzing the current solution (input and output), investigating historical trends, and exploring what-if scenarios.
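A what-if query like Q3 can in principle be answered by perturbing the optimizer's input and re-solving. The sketch below uses an invented one-line "optimizer" and made-up costs purely to show the baseline-versus-scenario comparison.

```python
# Answer "what if demand increased by 10%?" by rerunning a toy optimizer
# on the perturbed input and comparing against the baseline.

def optimize(demand: float, unit_costs: dict) -> float:
    """Toy optimizer: serve all demand from the cheapest supplier."""
    return demand * min(unit_costs.values())

unit_costs = {"supplier_A": 4.0, "supplier_B": 5.5}
baseline = optimize(1000, unit_costs)
scenario = optimize(1000 * 1.10, unit_costs)   # demand +10%

print(f"Cost rises by {scenario - baseline:.0f} ({scenario / baseline - 1:.0%}).")
```

With a real combinatorial model, the re-solve step can be expensive, which is one reason such analyses are worth automating rather than routing through engineers each time.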
2307.03875#9
2307.03875
10
Obtaining insights on optimization decisions may require involving multiple professionals with different roles. Suppose that planners wish to understand why a demand has not been fulfilled on time. They often surface the concern to the program managers, who involve domain experts, such as data scientists or the engineers that developed the optimization system. The domain experts in turn may need to write additional code and often rerun the optimization to extract the relevant insights. This overall process might be very time-consuming for all parties involved and can cause significant delays in the decision making process. In some applications, teams maintain custom tools that allow decision makers to reason about certain decisions. For example, application dashboards can provide visualizations or even allow enforcing some actions (e.g., fix a specific supplier for a demand). However, given the engineering overhead of maintaining the tools, they are typically limited to the most common use cases.
2307.03875#10
2307.03875
11
The notion of explainability is certainly not novel, and has drawn attention in both academia and industry. There have been numerous studies on explaining ML/AI [11, 12]. In the optimization context, IBM Decision Optimization [13] provides answers to a fixed set of queries that the user may choose to activate. See also [14] and references therein.

# 2.3 Large Language Models

Overview. A large language model (LLM) is a foundation model [15] trained on extensive text data using deep learning techniques, such as Transformer neural networks; ELMo [16], BERT [17], Turing NLG [18, 19], GPT-3 [20], GPT-4 [3], PaLM [21], PaLM-E [22], LLaMA [23], and Vicuna [24] are some examples of widely used LLMs. In the training phase, an LLM learns statistical patterns, word relationships, and contextual information from diverse sources, such as books, articles, websites, and code repositories. LLMs are used for a variety of tasks in the inference phase [4], including chatbots, translation, writing assistance, coding [25, 26, 27], planning [28], and poem and story composition.
Using LLMs in applications. Multiple strategies can be employed to adapt LLMs for a specific application. The most common approaches are fine-tuning and in-context learning. Fine-tuning is a classic approach for “transfer learning”, aimed at transferring knowledge from a pre-trained LLM to a model tailored for a specific application [29]. Typically, this process involves tweaking some weights of the LLM. While fine-tuning approaches can be made efficient [30, 31], they still necessitate hosting the model on GPUs, which can prove excessively costly for many applications. In-context learning [32] is a cheaper alternative, which involves incorporating a few training examples into the prompt (or query). The idea here is to append domain-specific examples to the prompt and have the LLM learn from these “few-shot” examples. A key advantage of this approach is that it does not require model parameter updates.
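In-context learning amounts to little more than string assembly around the user's question. A minimal sketch, where the example question/code pairs are purely hypothetical and not taken from OptiGuide:

```python
# Few-shot prompt assembly: domain examples are prepended to the new question
# so the model can imitate them at inference time, with no parameter updates.
# The example pairs below are made up for illustration.
FEW_SHOT_EXAMPLES = [
    ("What is the total shipping cost?",
     "sum(shipping_cost[r, c] * y[r, c] for r in roasteries for c in cafes)"),
    ("Which suppliers have spare capacity?",
     "[s for s in suppliers if used[s] < capacity[s]]"),
]

def build_prompt(question: str) -> str:
    """Place the few-shot pairs before the new question, ending at 'Code:'."""
    parts = ["Answer with code, following the examples.\n"]
    for q, a in FEW_SHOT_EXAMPLES:
        parts.append(f"Question: {q}\nCode: {a}\n")
    parts.append(f"Question: {question}\nCode:")
    return "\n".join(parts)

prompt = build_prompt("What is the total roasting cost?")
```

The trailing `Code:` cue nudges the model to continue the pattern established by the examples.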
Prompt engineering. In a production setting, developers often send prompts (aka queries) to the model, which can be appended with domain-specific examples for obtaining higher-quality answers. A collection of prompt management tools, such as ChatGPT Plugin [33], GPT function API call [34], LangChain [35], AutoGPT [36], and BabyAGI [37], have been designed to help engineers integrate LLMs in applications and services. The prompt size is measured in the number of tokens, which is proportional to the query size. LLMs can only process a limited number of tokens because of resource limitations; this is a strict constraint that developers and tools need to work around.

Figure 3: A simple supply chain example: coffee roasting company. (a) Problem setup. (b) Optimal plan (units).

Privacy. Using domain-specific information in the prompt may involve proprietary data, which users may prefer not to reveal to LLM hosts. Even if LLM providers offer service level agreements (SLAs) for privacy, passive eavesdropping attackers might still intercept the data. Therefore, many organizations would prefer utilizing LLMs in a privacy-preserving way, namely keeping the proprietary data in-house.
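The token-limit constraint mentioned under prompt engineering is typically handled by trimming few-shot examples to a budget. A minimal sketch, using naive whitespace splitting as a stand-in for a real model tokenizer (real systems count BPE tokens):

```python
# Fit as many few-shot examples as the token budget allows before the question.
# Whitespace token counting is a crude stand-in for the model's tokenizer.
def count_tokens(text: str) -> int:
    return len(text.split())

def fit_examples(question: str, examples: list[str], budget: int) -> str:
    """Keep appending examples until the budget would be exceeded."""
    kept = []
    used = count_tokens(question)          # the question itself must always fit
    for ex in examples:
        cost = count_tokens(ex)
        if used + cost > budget:
            break
        kept.append(ex)
        used += cost
    return "\n".join(kept + [question])

examples = ["a b c", "d e f g", "h i"]
prompt = fit_examples("final question ?", examples, budget=8)
```

With a budget of 8 tokens, only the first example fits alongside the 3-token question; the rest are dropped rather than overflowing the model's context window.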
Mistakes. Naturally, LLMs might produce sub-optimal outcomes, such as inaccuracies and even hallucinations [38]. There are generic tools that tackle this problem [39, 40, 41]; however, one may need domain-specific tools for better outcomes. One example is fixing code generated by LLMs [42, 43, 44, 45].

# 2.4 A Simple Example

We now describe a simple supply chain example that will be useful for illustrating our approach.

The supply chain. Consider a coffee roasting company that roasts two types of coffee (light and dark roast). The company sources coffee beans from three different suppliers, roasts them in one of its two roasting facilities, and then ships them to one of its three retail locations for sale to customers. The goal is to fulfill the demand in each retail location while minimizing the total cost. The total cost consists of the cost of purchasing the coffee from the suppliers, the roasting cost in each facility, and the shipping cost of the end product to the retail locations. An illustration is given in Figure 3.
Model formulation. We can model this problem as a Mixed Integer Program. Let $x_{sr}$ denote the number of units purchased from supplier $s$ for roasting facility $r$, and $y^L_{r\ell}$ and $y^D_{r\ell}$ the amount of light and dark roast sent to retail location $\ell$ from roasting facility $r$. Each supplier $s$ has a capacity $C_s$, and each retail location $\ell$ has demand $D^L_\ell$ and $D^D_\ell$ for light and dark roast respectively. There is a cost $c_{sr}$ for each unit purchased from supplier $s$ for roasting facility $r$, a shipping cost of $g_{r\ell}$ for each unit sent to retail location $\ell$ from roasting facility $r$, and a roasting cost $h^L_r$ and $h^D_r$ per unit of light roast and dark roast respectively in facility $r$. The optimization problem is the following:

$$
\begin{aligned}
\text{minimize} \quad & \sum_{s,r} x_{sr} \cdot c_{sr} + \sum_{r,\ell} y^L_{r\ell} \cdot h^L_r + \sum_{r,\ell} y^D_{r\ell} \cdot h^D_r + \sum_{r,\ell} \left( y^L_{r\ell} + y^D_{r\ell} \right) \cdot g_{r\ell} && \text{(Objective)} \\
\text{subject to} \quad & \sum_{r} x_{sr} \le C_s \quad \forall s && \text{(Supplier capacity constraint)} \\
& \sum_{s} x_{sr} = \sum_{\ell} \left( y^L_{r\ell} + y^D_{r\ell} \right) \quad \forall r && \text{(Conservation of flow constraint)} \\
& \sum_{r} y^L_{r\ell} \ge D^L_\ell \quad \forall \ell && \text{(Light coffee demand constraint)} \\
& \sum_{r} y^D_{r\ell} \ge D^D_\ell \quad \forall \ell && \text{(Dark coffee demand constraint)} \\
& x_{sr},\, y^L_{r\ell},\, y^D_{r\ell} \in \mathbb{Z}^{+} \quad \forall s, r, \ell && \text{(Integrality constraint)}
\end{aligned}
$$
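As a sanity check, the formulation can be solved by exhaustive search on a toy instance. This is only a sketch: the real system uses an integer-programming solver, and all cost, capacity, and demand numbers below are made up. The same routine also illustrates how a what-if question (in the style of Q5 below, a reduced supplier capacity) reduces to a parameter change plus a re-solve:

```python
from itertools import product

def solve(cap_s1: int, cap_s2: int):
    """Brute-force the toy model: 2 suppliers, 1 roastery, 2 cafes.
    Returns (optimal cost, units from supplier 1, units from supplier 2)."""
    c_s1, c_s2 = 5, 4                      # purchase costs c_sr
    hL, hD = 3, 5                          # roasting costs h^L_r, h^D_r
    g1, g2 = 2, 3                          # shipping costs g_rl to cafes 1, 2
    DL1, DL2, DD1, DD2 = 2, 1, 1, 1        # demands D^L_l, D^D_l
    best = None
    # Supplier capacities are enforced by the enumeration ranges.
    ranges = [range(cap_s1 + 1), range(cap_s2 + 1)] + [range(4)] * 4
    for x1, x2, yL1, yL2, yD1, yD2 in product(*ranges):
        if x1 + x2 != yL1 + yL2 + yD1 + yD2:          # conservation of flow
            continue
        if yL1 < DL1 or yL2 < DL2 or yD1 < DD1 or yD2 < DD2:  # demand
            continue
        cost = (x1 * c_s1 + x2 * c_s2                 # purchase
                + hL * (yL1 + yL2) + hD * (yD1 + yD2) # roasting
                + g1 * (yL1 + yD1) + g2 * (yL2 + yD2))  # shipping
        if best is None or cost < best[0]:
            best = (cost, x1, x2)
    return best

baseline = solve(3, 4)   # -> (52, 1, 4): buy mostly from the cheaper supplier
scenario = solve(3, 2)   # what-if: supplier 2 capacity halved -> (54, 3, 2)
```

Halving supplier 2's capacity forces two extra units onto the more expensive supplier, raising the optimal cost by 2 on this instance.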
Explainability. Let us now zoom into the example from Figure 3. The optimal solution is depicted in Figure 3b. We see that in the optimal plan, both roasteries produce light and dark coffee; the first roastery sources its beans from supplier 3, while the second from suppliers 1 and 2. The first two retail locations then obtain all their coffee from the first roastery, while the third retail location is supplied by both roasteries. A user may ask the following questions:

Q1 What would happen if the demand at retail location 1 increased by 10%?
Q2 What would happen if the demands at all retail locations doubled?
Q3 Why are we using supplier 3 for roasting facility 1?
Q4 Can I use roasting facility 1 only for retail location 2?
Q5 What if supplier 3 can now provide only half of the quantity?
Q6 The per-unit cost from supplier 3 to roasting facility 1 is now $5. How does that affect the total cost?
Q7 Why does Roastery 1 produce more light coffee than Roastery 2?
Q8 Why does supplier 1 ship more to Roastery 2 than Roastery 1?
Q9 Why not only use one supplier for Roastery 2?

# 3 The LLM Framework
Large-scale supply chain management entails multiple functions, such as extensive data gathering, data processing and analysis, optimization processes, and communication and enforcement of decisions across multiple stakeholders. While LLMs and supporting tools may handle part of these functions, there is a need for an end-to-end framework that addresses the underlying challenges in a systematic way. In this section, we describe the design of our framework, OptiGuide.

# 3.1 System Overview
The OptiGuide framework, depicted in Figure 2, consists of three sets of entities: agents, LLMs, and application-specific components. When a user poses a question (1), the coder takes the question and formulates it as an in-context learning (ICL) question (2) for the LLM. The LLM then generates code (3) to answer the question. The safeguard checks the validity of the code and aborts the operation in case of a mistake; otherwise, the safeguard feeds the code to an application-specific component (4), such as a database engine or an optimization solver (depending on the query). The component processes the code and produces results, which are logged in a file (5). We note that obtaining the final result may involve multiple iterations (2 to 5), where the query is automatically refined until the desired output is achieved. Finally, the output logs from the component are fed back into the LLM (6). The LLM analyzes the logs and generates a human-readable answer (7) that is sent back to the user (8). We now provide an overview of the different entities and components. More details can be found in Appendix B.

# 3.1.1 Agents
Figure 4: Coder prompt for the running example. (The prompt asks the LLM to read sample question-and-code pairs and the documentation of helper code, and then to answer a new question: “What if we prohibit shipping from supplier 1 to roastery 2? Show me the new plan and compare with the previous result.”)

Agents facilitate the interaction between users, the LLM, and application-specific components. The coder converts raw user questions into specific ICL queries. The conversion includes supplying the application context, providing ample training examples, and restructuring the user’s query, as exemplified in Figure 4. The safeguard operates as a quality control checkpoint. It scrutinizes the code for potential discrepancies and initiates self-debugging upon encountering failures. When OptiGuide cannot successfully address a query, the safeguard either initiates a new iteration with a proposed fix, or generates an error message for the user. The interpreter takes the output logs, tables, graphs, etc., and generates a human-friendly response to the user’s query.
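The coder/safeguard/interpreter loop might be skeletonized as below. This is a sketch only: the LLM and the application component are replaced by stubs, and none of the function names are OptiGuide's actual API. The stubbed LLM deliberately returns buggy code on its first call so that the safeguard's self-debugging retry is exercised:

```python
# Skeleton of the agent loop: coder -> LLM -> safeguard -> component ->
# interpreter, with stubs standing in for the LLM and the component.
def coder(question: str) -> str:
    """Wrap the raw question as an in-context-learning query (step 2)."""
    return f"Write Python that stores the answer in `result`. Question: {question}"

def llm_generate(prompt: str, hint: str = "") -> str:
    """Stubbed LLM (step 3): buggy code first, fixed code once given a hint."""
    return "result = 6 * 7" if hint else "reslt = 6 * 7"

def run_component(code: str) -> dict:
    """Execute generated code and collect its output log (step 5)."""
    scope: dict = {}
    exec(code, scope)              # acceptable for a trusted stub, not production
    return {"result": scope["result"]}

def interpreter(question: str, logs: dict) -> str:
    """Turn raw logs into a human-readable answer (steps 6-7)."""
    return f"Answer to '{question}': {logs['result']}"

def answer(question: str, max_iters: int = 3) -> str:
    """Safeguarded loop: on failure, retry with the error fed back as a hint."""
    hint = ""
    for _ in range(max_iters):
        code = llm_generate(coder(question), hint)
        try:
            logs = run_component(code)
        except Exception as err:   # self-debugging: surface the error to the LLM
            hint = repr(err)
            continue
        return interpreter(question, logs)
    return "Query aborted by safeguard."
```

The bounded retry count mirrors the safeguard's choice between proposing a fix and returning an error message to the user.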
# 3.1.2 Application Specific Components

Different applications may have different types of components; we provide an overview of the most common ones. OptiGuide is designed in a modular way, so that using OptiGuide for a different application requires only switching to a new set of components.

The database is a systematically arranged collection of data in various formats (such as CSV, SQL, JSON, or Parquet), which is queried to extract answers. The solver can be a commercial integer programming solver, such as Gurobi. OptiGuide can query the solver output directly, or the output can be stored and queried from the database. If a question demands profound domain knowledge or historical context, OptiGuide consults documents to enhance the depth and relevance of the response. The helper is an optional component. It consists of a set of functions written by application engineers for simplifying the code produced by LLMs. For example, a complex data analysis workflow can be simplified to a single helper function call.

[Figure: Interpreter prompt for the running example. The user’s question (“What if we prohibit shipping from supplier 1 to roastery 2? Show me the new plan and compare with the previous result”) and the terminal outputs of the generated code are fed back to the LLM, which is asked to explain the answer in simple English and HTML format.]
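One way to picture the modular component design is a small registry behind a common interface, so that switching applications means registering different components. The class names, the registry, and the toy behaviours below are all hypothetical, not OptiGuide's actual code:

```python
# A minimal component registry: each component exposes one `run` entry point,
# and the framework dispatches generated code to the right component by kind.
class Component:
    def run(self, code: str) -> str:
        raise NotImplementedError

class Database(Component):
    """Toy database: `code` is simply the name of a table to look up."""
    def __init__(self, tables: dict):
        self.tables = tables

    def run(self, code: str) -> str:
        return str(self.tables[code])

class Helper(Component):
    """Stand-in for engineer-written helper functions wrapping complex logic."""
    def run(self, code: str) -> str:
        return f"helper({code})"

REGISTRY = {"database": Database({"demand": [3, 1]}), "helper": Helper()}

def dispatch(kind: str, code: str) -> str:
    """Route a code fragment to the registered component of the given kind."""
    return REGISTRY[kind].run(code)
```

Adding a solver component for a new application would mean one more `Component` subclass and one more registry entry, leaving the agents untouched.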
2307.03875#20
Large Language Models for Supply Chain Optimization
Supply chain operations traditionally involve a variety of complex decision making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide -- a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
http://arxiv.org/pdf/2307.03875
Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, Ishai Menache
cs.AI, cs.CL, cs.DM, cs.LG
null
null
cs.AI
20230708
20230713
[ { "id": "2302.13971" }, { "id": "2203.05132" }, { "id": "1810.04805" }, { "id": "2112.08674" }, { "id": "2108.07258" }, { "id": "2306.11644" }, { "id": "2204.05999" }, { "id": "2305.12050" }, { "id": "2305.17126" }, { "id": "2304.11477" }, { "id": "2107.03374" }, { "id": "2201.11990" }, { "id": "2303.03378" }, { "id": "2305.14552" }, { "id": "2302.06144" }, { "id": "2306.05499" }, { "id": "2010.00711" }, { "id": "2301.00234" }, { "id": "2303.08896" }, { "id": "2303.06689" }, { "id": "2305.05176" }, { "id": "2212.10400" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2104.08691" }, { "id": "2302.12813" } ]
2307.03875
21
Figure 5: Interpreter prompt for the running example

# 3.2 A Running Example

We illustrate OptiGuide's data flow via the user question, "What if we prohibit shipping from supplier 1 to roastery 2? Show me the new plan and compare with the previous result". First, the coder converts this question into an in-context learning query for the LLM; see Figure 4 for the prompt. In addition to the question itself, the prompt contains (i) training examples, namely pairs of questions and code answers, and (ii) documentation of the helper functions. Intuitively, (ii) supplements (i) by providing additional context on what the code does. Subsequently, the LLM generates code that adds a new constraint (green region in Figure 4). The safeguard then extracts the code from the LLM's response, and OptiGuide calls the optimization solver to re-solve the planning problem, yielding the result depicted in the yellow region in Figure 5. This result is then fed into the LLM by the interpreter, which produces a
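The kind of what-if edit the LLM generates here can be sketched as follows. This is an illustrative stand-in, not OptiGuide's actual generated code: instead of a MILP model it uses a plain cost dictionary, and shows the two equivalent encodings discussed in Section 4.2 (a hard constraint versus a prohibitive cost).

```python
# Hedged sketch: two equivalent ways to encode "prohibit shipping from
# supplier1 to roastery2". Names and costs are illustrative.

shipping_cost = {("supplier1", "roastery2"): 5.0,
                 ("supplier2", "roastery2"): 7.0}

# Option 1 (inside a MILP model, gurobipy-style):
#     model.addConstr(x["supplier1", "roastery2"] == 0)

# Option 2: make the route prohibitively expensive so a cost-minimizing
# solver avoids it.
shipping_cost[("supplier1", "roastery2")] = 1e10

# Any cost minimizer now prefers a different route into roastery2:
best_route = min(shipping_cost, key=shipping_cost.get)
print(best_route)  # ('supplier2', 'roastery2')
```

Either encoding leads the solver to the same plan, which is why the benchmark in Section 4 checks the optimization outcome rather than exact code match.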
2307.03875#21
2307.03875
22
response. Finally, OptiGuide presents the response to the user alongside a visualization of the plan (green region in Figure 5) and a comparison with the original cost. Note that OptiGuide preserves privacy, since the domain-specific data remains in either the solver or the database, and is never transferred to the LLM. Additional examples are provided in Figure 6.

[Figure 6 content: a what-if question answered by Python code that adds, for every cafe c other than "cafe2", the constraints y_light["roastery1", c] == 0 and y_dark["roastery1", c] == 0; the result has m.objVal = $2570, which is higher than $2470 from the previous solution. A second question is answered with the SQL command SELECT contact FROM suppliers WHERE id="supplier1";, which triggers a safeguard alert ("sensitive information. Approval required!") and a ticket is submitted.]

Figure 6: An illustration of questions answered by OptiGuide. The gray dashed boxes represent thoughts that occur in the backend. Users can configure OptiGuide to display these thoughts or not.

# 4 Evaluation Benchmark
2307.03875#22
2307.03875
23
In this section, we develop a benchmark for evaluating the performance of our framework on a variety of supply chain optimization problems. The benchmark and the methodology around it can guide future efforts for using LLMs in supply chain optimization.

# 4.1 Scenarios and Data

To evaluate our framework, we selected a variety of optimization problems that capture multiple types of decisions that may be relevant in different supply chain settings. Specifically, our dataset includes a facility location scenario, a multi-commodity network flow for distribution of products, workforce assignment optimization, the traveling salesman problem, as well as the coffee distribution scenario from Section 2.4. The code for all problems is written in Python, and the Gurobi optimization solver [46] is used to obtain the optimal solution; Appendix C provides the code for the coffee distribution problem as an example. Our next step is to generate a repository of questions and code answers for each scenario. Some of these question-answer pairs are used as examples for in-context learning, while others are used for evaluating OptiGuide's performance. To create a large set of questions, we write macros for each question, each of which generates a question set of closely related question-answer pairs. An example of a macro for a question set is the following:
2307.03875#23
2307.03875
24
QUESTION: What if we prohibit shipping from {{VALUE-X}} to {{VALUE-Y}}?
VALUE-X: random.choice(suppliers)
VALUE-Y: random.choice(roasteries)
GROUND-TRUTH: model.addConstr(x[{{VALUE-X}}, {{VALUE-Y}}] == 0)

To increase the diversity within the question sets, we also ask GPT to rephrase the questions while preserving their meaning. For instance, GPT might rephrase the generated question "Why would we ship beans from Supplier 1 to Roastery 2?" to "What benefits are associated with the choice of shipping beans from Supplier 1 to Roastery 2?". We note that the question sets for all problems used in the benchmark were created from scratch and kept in-house, so the LLMs have not observed these data as part of their training.

# 4.2 Evaluation Methodology

The goal of our evaluation is to assess the accuracy of LLMs in answering user questions about supply chain optimization problems. Unfortunately, existing metrics, such as pass@k, which is used for analyzing coding accuracy [27, 47], are not well suited for explainability through code (intuitively, the metrics are "too forgiving"). We therefore propose a different methodology, inspired by the unit-test approach used in software development.
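Expanding such a macro into a question set can be sketched as follows. This is a hypothetical illustration: the supplier and roastery lists are toy data, and while the macro draws values with random.choice, the sketch simply enumerates the distinct substitutions for determinism.

```python
# Hedged sketch: expand a question macro with {{VALUE-X}}/{{VALUE-Y}}
# placeholders into a set of related question-answer pairs.

suppliers = ["supplier1", "supplier2", "supplier3"]   # illustrative data
roasteries = ["roastery1", "roastery2"]

QUESTION = "What if we prohibit shipping from {{VALUE-X}} to {{VALUE-Y}}?"
GROUND_TRUTH = "model.addConstr(x[{{VALUE-X}}, {{VALUE-Y}}] == 0)"

def expand(question, answer, max_pairs=30):
    """Substitute each distinct (supplier, roastery) pair into the macro."""
    pairs = []
    for x in suppliers:
        for y in roasteries:
            q = question.replace("{{VALUE-X}}", x).replace("{{VALUE-Y}}", y)
            a = answer.replace("{{VALUE-X}}", x).replace("{{VALUE-Y}}", y)
            pairs.append((q, a))
            if len(pairs) == max_pairs:
                return pairs
    return pairs

qa_pairs = expand(QUESTION, GROUND_TRUTH)
print(len(qa_pairs))  # 6 distinct pairs for this toy data
```

In the actual benchmark, the generated questions would additionally be rephrased by GPT while the code answer stays fixed.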
2307.03875#24
2307.03875
25
Our evaluation proceeds as follows. For each scenario we run R experiments. Each experiment consists of T question sets. Each question set consists of Q test questions and answers. The LLM is asked to write the code and answer for a test question; it is given three chances to produce a response in case of an evident error (runtime or syntax). We then evaluate the correctness of the final answer. Note that we do not necessarily evaluate whether the generated code matches our ground-truth code exactly, as there are different ways to obtain the correct response. The following example demonstrates a scenario where the generated code is quite different, but the optimization outcome would be the same.

1. model.addConstr(x['supplier1', 'roastery2'] == 0, 'force not ship')
2. shipping_cost_from_supplier_to_roastery[('supplier1', 'roastery2')] = 1e10

Accuracy. We define the accuracy metric AC as the average success rate across all scenarios, experiments, and question sets. Formally,

$$\mathrm{AC} \;=\; \frac{1}{S}\sum_{s=1}^{S} \frac{1}{R\,T_s} \sum_{r=1}^{R} \sum_{t=1}^{T_s} \mathbf{1}(q_t),$$
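The metric can be computed directly from pass/fail records. A minimal sketch, with illustrative data (the scenario names and outcomes below are made up):

```python
# Hedged sketch of the accuracy metric AC: average over scenarios of the
# per-scenario pass rate across its experiments x question sets. A question
# set counts as passed only if every question in it was answered correctly.

def accuracy(results):
    """results[scenario] is a flat list of booleans, one per
    (experiment, question set): True iff the whole set passed."""
    per_scenario = [sum(passed) / len(passed) for passed in results.values()]
    return sum(per_scenario) / len(per_scenario)

results = {
    "facility_location": [True, True, False, True],   # pass rate 0.75
    "coffee_distribution": [True, False],             # pass rate 0.50
}
print(round(accuracy(results), 3))  # 0.625
```

Averaging per scenario first keeps scenarios with many question sets from dominating the overall score.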
2307.03875#25
2307.03875
26
$$\mathrm{AC} \;=\; \frac{1}{S}\sum_{s=1}^{S} \frac{1}{R\,T_s} \sum_{r=1}^{R} \sum_{t=1}^{T_s} \mathbf{1}(q_t),$$

where $q_t$ is the question set, and $\mathbf{1}(q_t)$ is the indicator of whether it passed successfully. The LLM passes a question set if and only if it successfully answers all questions in the question set.

[Figure 7 content: the in-distribution setup; 30 distinct questions are generated from a macro (no repetition) and rephrased by the LLM while the code answer, e.g. m.addConstr(x[{{VALUE-X}}, {{VALUE-Y}}] == 1), remains the same; the test question and the prompt examples are selected from the same question set.]

Figure 7: In-distribution evaluation
2307.03875#26
2307.03875
27
In-distribution and out-of-distribution evaluation. As is common practice, we evaluate our framework in both 'in-distribution' and 'out-of-distribution' [48] settings. For in-distribution evaluation (Figure 7), the test question and the examples used in the prompt are from the same question set. In contrast, for out-of-distribution evaluation (Figure 8), the example questions are extracted from different question sets.

Example selection. As the number of tokens that can be provided as input to the LLMs is limited, we explore different approaches for selecting the training examples for each query. Both approaches can be evaluated in the in-distribution and out-of-distribution settings. One approach is random selection, where a fixed number of example questions is selected uniformly at random. Another approach is based on nearest neighbors, where we select examples that are similar to the test question; similarity is based on the text embedding [49] of the questions as determined by the model text-embedding-ada-002 [20]. We also experiment with different sizes of the example set (0, 1, 3, 5, or 10 examples).

# 4.3 Performance
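Nearest-neighbor example selection can be sketched as follows. This is a hedged illustration: `embed` is a toy bag-of-words stand-in for a real embedding model such as text-embedding-ada-002, and the questions are illustrative.

```python
# Hedged sketch: pick the k training questions whose embeddings are most
# similar (cosine similarity) to the test question.
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real text-embedding model (bag of words).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(test_q, train_qs, k=3):
    tv = embed(test_q)
    return sorted(train_qs, key=lambda q: cosine(embed(q), tv), reverse=True)[:k]

train = ["What if we prohibit shipping from supplier 1 to roastery 2?",
         "Why would we ship beans from supplier 1 to roastery 2?",
         "How many workers are assigned to site 3?"]
picked = select_examples("What if shipping from supplier 1 is prohibited?", train, k=2)
print(picked[0])  # the "prohibit shipping" question ranks first
```

Random selection simply replaces the sort by a uniform sample of size k.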
2307.03875#27
2307.03875
28
# 4.3 Performance

Setup. For each scenario s, we run R = 10 experiments. In each experiment we evaluate Ts ≥ 10 question sets. Each question set qt usually contains 10-30 questions and answers. We use both text-davinci-003 [20] and GPT-4 [3] for our evaluation. Performance results across different LLMs, example selection approaches, and example set sizes are summarized in Table 1.

Observations. GPT-4 consistently outperforms text-davinci-003 in both in-distribution and out-of-distribution evaluation. As expected, both models show higher accuracy on in-distribution compared to out-of-distribution evaluation.

[Figure 8 content: the out-of-distribution setup; the prompt examples are selected from question sets different from the test question's question set.]

Figure 8: Out-of-distribution evaluation

Table 1: Accuracy across different LLMs, example selection approaches, and example set sizes. Each experiment was run 10 times and the average accuracy is reported.
2307.03875#28
2307.03875
29
Table 1: Accuracy across different LLMs, example selection approaches, and example set sizes. Each experiment was run 10 times and the average accuracy is reported. (With zero examples, no example selection is performed, so a single value applies across columns.)

| # Examples | Model | In-distribution, Random | In-distribution, Nearest | Out-of-distribution, Random | Out-of-distribution, Nearest |
|---|---|---|---|---|---|
| 0 | text-davinci-003 | 0.32 | 0.32 | 0.32 | 0.32 |
| 0 | GPT-4 | 0.59 | 0.59 | 0.59 | 0.59 |
| 1 | text-davinci-003 | 0.78 | 0.78 | 0.39 | 0.44 |
| 1 | GPT-4 | 0.85 | 0.90 | 0.66 | 0.66 |
| 3 | text-davinci-003 | 0.90 | 0.92 | 0.49 | 0.44 |
| 3 | GPT-4 | 0.90 | 0.92 | 0.74 | 0.69 |
| 5 | text-davinci-003 | 0.93 | 0.93 | 0.52 | 0.48 |
| 5 | GPT-4 | 0.92 | 0.93 | 0.78 | 0.73 |
| 10 | text-davinci-003 | 0.92 | 0.93 | 0.67 | 0.61 |
| 10 | GPT-4 | 0.93 | 0.93 | 0.84 | 0.80 |
2307.03875#29
2307.03875
30
compared to out-of-distribution evaluation. GPT-4 performs markedly better in out-of-distribution evaluation, demonstrating its stronger reasoning and generalization capabilities; another sign of these capabilities is the 59% accuracy even without any training examples. Increasing the number of examples improves accuracy across the board. We also note that the gap between text-davinci-003 and GPT-4 decreases with the size of the example set.

The nearest-neighbor selection approach yields slight performance improvements for in-distribution evaluation. Interestingly, when the size of the example set is greater than one, random selection outperforms nearest neighbor for out-of-distribution evaluation. One explanation is that selecting examples based on text similarity results in overfitting, whereas random selection yields more diverse training examples.

# 5 OptiGuide for Azure’s Supply Chain

In this section, we demonstrate OptiGuide’s capabilities on the server fulfillment supply chain of Microsoft Azure. We start by providing the necessary details of the decisions involved in Azure’s supply chain. We then outline the steps for deploying OptiGuide in production, and provide examples of user interactions and early feedback we obtained. We conclude this section by describing preliminary performance results.
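The nearest-neighbor and random example-selection strategies compared in the Section 4 evaluation above can be sketched as follows. This is an illustrative reimplementation, not the paper's code: the example pool and query are invented, and `difflib`'s character-level ratio stands in for whatever text-similarity measure the benchmark actually uses.

```python
import difflib
import random

def nearest_neighbor_examples(query, example_pool, k):
    """Pick the k pool questions most textually similar to the query."""
    scored = sorted(
        example_pool,
        key=lambda ex: difflib.SequenceMatcher(None, query, ex).ratio(),
        reverse=True,
    )
    return scored[:k]

def random_examples(query, example_pool, k, seed=0):
    """Pick k pool questions uniformly at random (a more diverse context)."""
    rng = random.Random(seed)
    return rng.sample(example_pool, k)

# Invented pool of previously answered questions.
pool = [
    "What is the total shipping cost?",
    "Why was supplier A chosen for demand D1?",
    "How many servers dock in May?",
    "What if supplier B fulfilled demand D1 instead?",
]
query = "Why was supplier B chosen for demand D7?"
nn = nearest_neighbor_examples(query, pool, 2)
```

As the evaluation suggests, nearest-neighbor picks near-duplicates of the query (here the "Why was supplier A chosen..." question ranks first), which helps in-distribution but can overfit out of distribution, where the random picker's diversity wins.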
# 5.1 The Azure Supply Chain

The rapid growth of the cloud industry requires cloud providers to continuously deploy additional capacity to keep up with the demand. This is achieved by acquiring new clusters of servers and deploying them in the data centers. The Microsoft Azure supply chain encompasses a broad array of processes, including demand forecasting, strategic foresight, hardware semantic search, fulfillment planning, and document management. Due to its complexity and large scale, the optimization of Azure’s supply chain is assigned to different subsystems. We focus here on one such subsystem, called Intelligent Fulfillment System (IFS), which deals with assigning and shipping servers from the warehouse to the data centers.
Main decisions. For each demand for cloud capacity, the main decisions consist of (i) the hardware supplier that will be used to fulfill the demand, (ii) the timeline of the deployment, in particular the cluster’s dock-date (which determines the date of shipping from the warehouse), and (iii) the cluster’s deployment location in the data center (selection of a row of tiles to place the cluster on). The goal is to minimize the total cost, which consists of multiple components, such as the delay/idle cost of the clusters relative to their ideal dock-date and shipping costs, while respecting a multitude of constraints. Examples of constraints include capacity constraints on the suppliers and the data centers, location preferences for demands, and compatibility constraints. The underlying optimization problem is formulated as a Mixed Integer Program (MIP), where the total input data size is around 500 MB. The optimal solution is obtained hourly using Gurobi. More details about the optimization problem can be found in Appendix A.
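At toy scale, the structure of these decisions can be illustrated with an exhaustive search over (supplier, dock-date, row) triples. This is only a sketch of the decision space, not the production formulation: the real system solves a large MIP with Gurobi, and all suppliers, costs, capacities, and dates below are invented.

```python
from itertools import product

# Invented data: one demand, candidate suppliers, dock-dates, and rows.
suppliers = {"S1": {"capacity": 1, "ship_cost": 100},
             "S2": {"capacity": 1, "ship_cost": 140}}
rows = {"row-A": True, "row-B": True}      # True = row of tiles is free
dock_dates = [8, 10, 12]
ideal_dock = 10
DELAY_COST, IDLE_COST = 50, 20             # per day late / per day early

def plan_cost(supplier, dock, row):
    """Shipping cost plus delay/idle cost relative to the ideal dock-date."""
    delta = dock - ideal_dock
    timing = DELAY_COST * delta if delta > 0 else IDLE_COST * (-delta)
    return suppliers[supplier]["ship_cost"] + timing

# Exhaustive search over all feasible (supplier, dock-date, row) triples.
best = min(
    (c for c in product(suppliers, dock_dates, rows)
     if suppliers[c[0]]["capacity"] > 0 and rows[c[2]]),
    key=lambda c: plan_cost(*c),
)
```

Here the cheapest plan docks on the ideal date with the cheaper supplier; the production MIP makes the same trade-off jointly over many demands with shared capacity constraints, which is why enumeration does not scale and a solver is needed.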
Stakeholders. The main consumers of IFS are planners. These are professionals who have the business context, so when they receive the outcome of the optimization, they can confirm that it meets business needs (or override decisions otherwise) and ensure the execution of the decisions is completed as planned. However, the complexity of the underlying optimization problem, combined with the global scale of decision making (hundreds of data centers), prevents immediate clarity in the reasoning behind each decision. Consequently, planners often reach out to the engineers (including data scientists) that develop the optimization system
[Figure 9 screenshot: an example OptiGuide chat session. User queries include "How much excess supply do we have that's unallocated?", "Plot dock dates for all demands in Japan", "What if we dock demand D13 on its ideal dock date?", "Can we dock demand D62 earlier?", and "How many rows of Type X are available in the USA?". OptiGuide answers with tables, a dock-date plot, an infeasibility explanation for D13, and a quantified what-if cost for D62 ($9100.0, i.e., $1000.0 more than the original $8100.0).]
Figure 9: Screenshot of OptiGuide in Microsoft Azure production. We anonymized names and data by using generic values.

for obtaining additional insights. Oftentimes, planners and engineers have multiple rounds of interaction around understanding an issue or exploring what-if scenarios.

Common questions. We summarize below the main types of questions that are raised by planners:

Q1 [Management] Does the system support a particular region, resource, or supplier?
Q2 [Availability] Is a resource available or allocated?
Q3 [Decisions] Why did the system make decision ‘x’ related to supplier/demand selection, time, and location?
Q4 [Details of shipments] What are the details related to cross-geographical shipments and expected dock counts on a specific date?
Q5 [Historical data analysis] What is the standard deviation of the supplier’s inventory in the last month?
Q6 [Visualization] Can you visualize the dock capacity, availability, dates, or delays at a given location?

# 5.2 Deploying OptiGuide for Azure Supply Chain
Our current deployment of OptiGuide consists of (i) a front-end service for multiple-user interaction; (ii) an agent service, which is connected to Azure OpenAI for LLM access; and (iii) multiple virtual machines (VMs), which host IFS and the application-specific components to support multiple users at the same time. We preload the VMs’ memory with the input data and the solver’s solutions to speed up code execution for users. The input data for the optimization problem are updated periodically (hourly), and the VMs load the updated data in a round-robin fashion so that some VMs are always available to support users. We use GPT-4 as the LLM.
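The round-robin refresh described above can be sketched as follows. The VM names, the data-version bookkeeping, and the class itself are hypothetical; in production the "refresh" step is an actual hourly data load into IFS on a real machine.

```python
import itertools

class VMPool:
    """Serve user queries from VMs while refreshing their data one at a time."""

    def __init__(self, names):
        self.vms = {n: {"data_version": 0, "refreshing": False} for n in names}
        self._refresh_order = itertools.cycle(names)

    def refresh_next(self, new_version):
        # Take the next VM in round-robin order offline, load the hourly
        # snapshot (stand-in: bump a version counter), and bring it back.
        name = next(self._refresh_order)
        vm = self.vms[name]
        vm["refreshing"] = True
        vm["data_version"] = new_version
        vm["refreshing"] = False
        return name

    def pick_available(self):
        # Route users only to VMs that are not mid-refresh.
        return [n for n, vm in self.vms.items() if not vm["refreshing"]]

pool = VMPool(["vm-1", "vm-2", "vm-3"])
pool.refresh_next(new_version=1)   # refreshes vm-1 first, then vm-2, ...
```

Because only one VM is refreshing at a time, `pick_available` always returns a non-empty list, which is the property the round-robin scheme is meant to guarantee.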
# 5.3 Preliminary Feedback and Results

Figure 9 provides examples of interactions between users and OptiGuide. The preliminary feedback we obtained from both planners and engineers has been positive. Users expressed excitement, noting the potential of OptiGuide to help them understand the underlying optimization logic. Users especially emphasized the benefits of supporting key what-if scenarios, which gives planners more autonomy and may substantially reduce the engineering on-call burden. For example, before OptiGuide, answering one what-if question required more than three operators to coordinate the investigation and one on-call engineer to inspect the plan output.

Our preliminary evaluation indicates that OptiGuide can achieve more than 90% accuracy for our in-distribution evaluation. This result is consistent with the ones obtained in Section 4.

# 6 Concluding Remarks

We conclude this paper by discussing current limitations and highlighting intriguing directions for future work.

# 6.1 Current Limitations

Users need to be specific. The user needs to ask precise questions. For instance, “Can we dock demand xc132 fifteen days earlier?” is ambiguous, because “earlier” can mean “15 days before today”, “15 days before the currently planned date”, or “15 days before the deadline”. Consequently, the LLM might misunderstand the user and yield the wrong code.
Dependency on application-specific components. OptiGuide relies on proper design of application-specific components, such as the schema of the database and the helper functions. Some of these components might require non-negligible engineering effort. While there has been progress in automating some of these components [50], there are still gaps in using them in some production settings.

Undetected mistakes. We observed cases where the LLM writes code that runs smoothly but is entirely wrong (e.g., due to string-matching mistakes). We expect that things will improve in the future with more advances in LLMs and supporting tools.

Generalization to new questions. While the LLM performs well on seen questions, it still struggles when presented with questions that do not appear in the examples (see, e.g., Table 1). We believe that future models will have better generalizability.

Benchmark. Our current evaluation quantifies performance only for quantitative questions; for example, we exclude visualization queries from our analysis. Furthermore, the evaluation is based on a specific programming language (Python) and optimization solver (Gurobi).
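As one illustration of a guard against the string-matching mistakes mentioned above (not a feature of OptiGuide; purely a sketch with invented entity names), generated code can be scanned for quoted identifiers that match no known entity before it is executed:

```python
import re

# Hypothetical sets of entity names known to the optimization input data.
KNOWN_DEMANDS = {"D13", "D62", "xc132"}
KNOWN_SUPPLIERS = {"S1", "S2"}

def unknown_entities(generated_code):
    """Collect quoted identifiers (letters then digits) that match no known entity."""
    mentioned = set(re.findall(r'"([A-Za-z]+\d+)"', generated_code))
    return mentioned - KNOWN_DEMANDS - KNOWN_SUPPLIERS

# A generated snippet with a typo: "D621" instead of the real demand "D62".
snippet = 'model.set_dock_date("D621", earlier=True)'
flagged = unknown_entities(snippet)   # {"D621"}, caught before execution
```

Such a check cannot catch code that is well-formed but logically wrong, which is why this limitation remains open.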
# 6.2 Future Directions

We see our work as a cornerstone for future research in the area. One interesting direction is incorporating human feedback (e.g., from supply chain planners), which could lead to significant performance improvements [51]. Another direction that we are currently examining is using smaller models (see, e.g., [52] and references therein) for the specific tasks of supply chain optimization; such models allow for more affordable hosting and fine-tuning. In particular, we are examining whether fine-tuning can help with interpreting unseen questions. On a related note, it is of interest to consider a hybrid framework that combines the strengths of different AI models, for example combining large LMs with smaller ones. A natural longer-term goal is to go beyond explainability and facilitate interactive optimization, where the user directly influences the optimization outcomes; this will require designing more comprehensive safeguards to prevent costly mistakes.
2307.03875#39
Large Language Models for Supply Chain Optimization
Supply chain operations traditionally involve a variety of complex decision making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide -- a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
http://arxiv.org/pdf/2307.03875
Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, Ishai Menache
cs.AI, cs.CL, cs.DM, cs.LG
Published: 2023-07-08; updated: 2023-07-13.
# Acknowledgements

We thank Sébastien Bubeck, Yin Tat Lee, Chi Wang, Erkang Zhu, Leonardo Nunes, Srikanth Kandula, Adam Kalai, Marco Molinaro, Luke Marshall, Patricia Kovaleski, Hugo Barbalho, Tamires Santos, Runlong Zhou, Ashley Llorens, Surajit Chaudhuri, and Johannes Gehrke from Microsoft Research for useful discussions. We also thank Brian Houser, Matthew Meyer, Ryan Murphy, Russell Borja, Yu Ang Zhang, Rojesh Punnath, Naga Krothapalli, Navaneeth Echambadi, Apoorav Trehan, Jodi Larson, and Cliff Henson from the Microsoft Cloud Supply Chain for their advice and support.

# References

[1] Michael H Hugos. Essentials of supply chain management. John Wiley & Sons, 2018.

[2] Douglas M Lambert and Martha C Cooper. Issues in supply chain management. Industrial Marketing Management, 29(1):65–83, 2000.
[3] OpenAI. GPT-4 technical report, 2023.

[4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.

[5] Peter Lee, Sébastien Bubeck, and Joseph Petro. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. New England Journal of Medicine, 388(13):1233–1239, 2023.

[6] GitHub. GitHub Copilot: Your AI pair programmer, 2023.

[7] Lingjiao Chen, Matei Zaharia, and James Zou. FrugalGPT: How to use large language models while reducing cost and improving performance. arXiv preprint arXiv:2305.05176, 2023.
[8] Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, and Yang Liu. Prompt injection attack against LLM-integrated applications. arXiv preprint arXiv:2306.05499, 2023.

[9] Daniel J Garcia and Fengqi You. Supply chain design and optimization: Challenges and opportunities. Computers & Chemical Engineering, 81:153–170, 2015.

[10] Pourya Pourhejazy and Oh Kyoung Kwon. The new generation of operations research methods in supply chain optimization: A review. Sustainability, 8(10):1033, 2016.

[11] Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. A survey of the state of explainable AI for natural language processing. arXiv preprint arXiv:2010.00711, 2020.

[12] Imran Ahmed, Gwanggil Jeon, and Francesco Piccialli. From artificial intelligence to explainable artificial intelligence in Industry 4.0: A survey on what, how, and where. IEEE Transactions on Industrial Informatics, 18(8):5031–5042, 2022.
[13] Stefan Nickel, Claudius Steinhardt, Hans Schlenker, and Wolfgang Burkart. Decision Optimization with IBM ILOG CPLEX Optimization Studio: A Hands-On Introduction to Modeling with the Optimization Programming Language (OPL). Springer Nature, 2022.

[14] Kristijonas Čyras, Dimitrios Letsios, Ruth Misener, and Francesca Toni. Argumentation for explainable scheduling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2752–2759, 2019.

[15] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.

[16] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations, 2018.
[17] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

[18] Corby Rosset. Turing-NLG: A 17-billion-parameter language model by Microsoft. Microsoft Blog, 1(2), 2020.

[19] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. arXiv preprint arXiv:2201.11990, 2022.

[20] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[21] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz,
[22] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.

[23] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

[24] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023.
[25] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, 2022.

[26] Vijayaraghavan Murali, Chandra Maddila, Imad Ahmad, Michael Bolin, Daniel Cheng, Negar Ghorbani, Renuka Fernandez, and Nachiappan Nagappan. CodeCompose: A large-scale industrial deployment of AI-assisted code authoring. arXiv preprint arXiv:2305.12050, 2023.

[27] Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, and Percy S Liang. SPoC: Search-based pseudocode to code. Advances in Neural Information Processing Systems, 32, 2019.