[28] Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023.
[29] Karl Weiss, Taghi M Khoshgoftaar, and DingDing Wang. A survey of transfer learning. Journal of Big Data, 3(1):1–40, 2016.
[30] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
[31] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. arXiv preprint arXiv:2305.14314, 2023.
[32] Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. A survey for in-context learning. arXiv preprint arXiv:2301.00234, 2022.
[33] OpenAI. ChatGPT plugins, 2023.
[34] OpenAI. Function calling and other API updates, 2023.
[35] LangChain. Introduction – LangChain, 2023.
[36] Auto-GPT: An Autonomous GPT-4 Experiment, June 2023. Original date: 2023-03-16T09:21:07Z.
[37] BabyAGI, 2023.
[38] Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, and Mark Steedman. Sources of hallucination by large language models on inference tasks. arXiv preprint arXiv:2305.14552, 2023.
[39] Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896, 2023.
[40] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
[41] Weiwei Sun, Zhengliang Shi, Shen Gao, Pengjie Ren, Maarten de Rijke, and Zhaochun Ren. Contrastive learning reduces hallucination in conversations. arXiv preprint arXiv:2212.10400, 2022.
[42] Jia Li, Yongmin Li, Ge Li, Zhi Jin, Yiyang Hao, and Xing Hu. SkCoder: A sketch-based approach for automatic code generation. arXiv preprint arXiv:2302.06144, 2023.
[43] Xue Jiang, Yihong Dong, Lecheng Wang, Qiwei Shang, and Ge Li. Self-planning code generation with large language model. arXiv preprint arXiv:2303.06689, 2023.
[44] Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, and Qun Liu. Compilable neural code generation with compiler feedback. arXiv preprint arXiv:2203.05132, 2022.
[45] Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback, 2023.
[46] Bob Bixby. The Gurobi optimizer. Transp. Research Part B, 41(2):159–178, 2007.
[47] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[48] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[49] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
[50] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023.
[51] Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, and Yejin Choi. Reframing human-AI collaboration for generating free-text explanations. arXiv preprint arXiv:2112.08674, 2021.
[52] Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are all you need. arXiv preprint arXiv:2306.11644, 2023.
# A Intelligent Fulfillment System
In this section, we present a partial formulation of the optimization in the Intelligent Fulfillment System that assigns and ships servers from the warehouse to the data centers.
# A.1 Main Decisions
We introduce the following variables:
• $z_{dt} \in \{0, 1\}$: equals 1 if demand d docks on day t, and 0 otherwise
• $u_{dr} \in \{0, 1\}$: equals 1 if demand d docks on row r, and 0 otherwise
• $w_{ds} \in \{0, 1\}$: equals 1 if d is fulfilled using supplier s, and 0 otherwise
• $y_{d,dc,t} \in \{0, 1\}$: equals 1 if d docks at datacenter dc on day t, and 0 otherwise
• $v_{d,s,t} \ge 0$: whether demand d docks on day t using supplier s or not
# A.2 Constraints
This section describes some of the constraints in the formulation.
Docking day. The docking for each demand takes place on a single day.
$$\sum_{t} z_{dt} \le 1 \quad \forall d$$
Datacenter dockings. For each demand d, we dock at a datacenter dc on a specific day t only if the selected row belongs to that datacenter dc and the selected day is that particular day t.
$$\sum_{dc} y_{d,dc,t} \le z_{dt} \quad \forall d, t$$
$$\sum_{t} y_{d,dc,t} = \sum_{r \in \text{rows}(dc)} u_{dr} \quad \forall d, dc$$
Datacenters' daily capacities. There are restrictions restr on the daily amount of dockings that sets of datacenters can handle. Let $R_d$ denote the number of racks required for demand d.
$$\sum_{d,\; dc \in DC(restr)} y_{d,dc,t} \cdot R_d \le \text{DockRestrAvailCap}(restr, t) \quad \forall restr \in \text{Restrictions},\; t$$
Single supplier. Each demand must be fulfilled by a single supplier. A row is selected for a demand only if a supplier has been found.
$$\sum_{s} w_{ds} \le 1 \quad \forall d$$
$$u_{dr} \le \sum_{s} w_{ds} \quad \forall d, r$$
Auxiliary supplier variables. Connecting variables $v_{dst}$ with the rest of the variables.
$$z_{dt} = \sum_{s} v_{dst} \quad \forall d, t$$
$$w_{ds} = \sum_{t} v_{dst} \quad \forall d, s$$
Supply availability. We have a set of supply pools with a certain capacity (amount of available supply) evaluated at times ct. We need to make sure that the supply s we consume from each supply pool sp is available at the time t that we consume it. The time where each supply becomes available depends on its lead time.
$$\sum_{d,\; s \in sp,\; t \le \text{leadtime}(ct,d,s)} v_{dst} \le \text{AvailableSupply}(sp, ct) \quad \forall sp, ct$$
Overrides. Some demand-supply combinations might be undesirable or disallowed for some reason. These can be explicitly blocked. Let B denote the set of blocked pairs.
$$w_{ds} = 0 \quad \forall (d, s) \in B$$
# A.3 Objective
Our goal is to minimize the total cost, which is the aggregate of multiple components, including the cost of docking too early or too late compared to the ideal dock-date of each demand, the cost of not fulfilling demands, and the shipping cost, among others.
$$\text{DockCost} = \sum_{d,t} z_{dt} \cdot \text{Demand\_Day\_DockCost}(d, t)$$
$$\text{NoDockCost} = \sum_{d} \Big(1 - \sum_{t} z_{dt}\Big) \cdot \text{Unsatisfied\_Cost}(d)$$
$$\text{ShippingCost} = \sum_{d,s} w_{ds} \cdot \text{Transit\_Ship\_Cost}(d, s)$$
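To make the formulation above concrete, the snippet below sketches a few of these variables, linking constraints, and cost terms in gurobipy. It is a minimal sketch on assumed toy data: the index sets and the tables dock_cost, unsatisfied_cost, ship_cost, and blocked are illustrative assumptions, not the production IFS model.

from gurobipy import GRB, Model, quicksum

# Toy index sets and cost tables (assumed values for illustration only)
demands, days, suppliers = ["d1", "d2"], [1, 2, 3], ["s1", "s2"]
dock_cost = {(d, t): 10 * t for d in demands for t in days}   # Demand_Day_DockCost
unsatisfied_cost = {d: 1000 for d in demands}                 # Unsatisfied_Cost
ship_cost = {(d, s): 5 for d in demands for s in suppliers}   # Transit_Ship_Cost
blocked = {("d2", "s1")}                                      # blocked pairs B

m = Model("ifs_sketch")
z = m.addVars(demands, days, vtype=GRB.BINARY, name="z")       # docking day
w = m.addVars(demands, suppliers, vtype=GRB.BINARY, name="w")  # supplier choice
v = m.addVars(demands, suppliers, days, vtype=GRB.BINARY, name="v")

# Docking day: each demand docks on at most one day
m.addConstrs((z.sum(d, "*") <= 1 for d in demands), "dock_day")
# Single supplier: at most one supplier per demand
m.addConstrs((w.sum(d, "*") <= 1 for d in demands), "single_supplier")
# Auxiliary linking: z_dt = sum_s v_dst and w_ds = sum_t v_dst
m.addConstrs((z[d, t] == v.sum(d, "*", t) for d in demands for t in days), "link_z")
m.addConstrs((w[d, s] == v.sum(d, s, "*") for d in demands for s in suppliers), "link_w")
# Overrides: blocked demand-supply pairs are forced to zero
m.addConstrs((w[d, s] == 0 for (d, s) in blocked), "blocked")

dock = quicksum(z[d, t] * dock_cost[d, t] for d in demands for t in days)
nodock = quicksum((1 - z.sum(d, "*")) * unsatisfied_cost[d] for d in demands)
ship = quicksum(w[d, s] * ship_cost[d, s] for d in demands for s in suppliers)
m.setObjective(dock + nodock + ship, GRB.MINIMIZE)
m.optimize()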
# B Engineering Details
Figure 11, at the end of this document, presents a detailed screenshot of OptiGuide with Azure IFS, including intermediate results for illustration purposes.
# B.1 Useful Tricks
SQL: Many LLMs are trained on SQL data. Hence, saving the optimization input and output data into a SQL database could make the system easier to use and more explainable.
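As a minimal sketch of this idea (the table layout and column names below are our own assumptions, not OptiGuide's actual schema), the solution of the coffee example could be mirrored into an in-memory SQLite database, so that an LLM can answer questions by generating SQL instead of Python:

import sqlite3

# Hypothetical mirror of the optimization output into SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipping (supplier TEXT, roastery TEXT, cost REAL, qty REAL)")
conn.executemany("INSERT INTO shipping VALUES (?, ?, ?, ?)",
                 [("supplier1", "roastery1", 5, 30.0),
                  ("supplier3", "roastery1", 2, 100.0)])

# An LLM-generated query for "who supplies roastery1, and how much?"
for supplier, qty in conn.execute(
        "SELECT supplier, SUM(qty) FROM shipping "
        "WHERE roastery = 'roastery1' GROUP BY supplier"):
    print(supplier, qty)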
Logical simplification: If the prompt is not designed well, the LLM might make many simple logical mistakes (e.g., "not use" vs. "use", before vs. after, etc.).
Intermediate outputs. When dealing with complex prompts, providing intermediate outputs can help keep the LLM on track. By returning intermediate results or steps, the LLM can check the consistency of its process, making it easier to debug and refine.
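For instance, a prompt skeleton along the following lines (our own illustration, not the exact prompt used by OptiGuide) forces the model to expose checkable intermediate results before the final code:

# Hypothetical prompt skeleton that requests intermediate outputs.
PROMPT = """Answer the what-if question in three labeled steps.
Step 1 - RELEVANT DATA: list the dictionaries and variables you will touch.
Step 2 - PLAN: one sentence describing the change to the model.
Step 3 - CODE: only the Python snippet to inject.

Question: {question}
"""
print(PROMPT.format(question="What if supplier1 can provide only half the quantity?"))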
# B.2 Failed Attempts
Chain of thought (CoT) failures. Unlike many recent studies [50] that have found that LLMs have strong CoT abilities, we found CoT is not helpful for writing complex code. This is another reason why we integrated the helper functions in the application-specific tools, which outperformed CoT. Our hypothesis is that if the LLM makes one mistake in the thinking chain, then the whole response would be wrong, because correcting its own mistakes is hard.
Overuse of prompt engineering: While prompt engineering can often lead to improved results, overdoing it can sometimes lead to worse outcomes. When the prompts become too complex or too specific, the LLM might not understand them correctly or might overfit to the specific prompt structure, limiting its ability to handle a variety of questions.
# C Coffee Distribution Example
# C.1 Code
import time
from gurobipy import GRB, Model
# Example data
capacity_in_supplier = {'supplier1': 150, 'supplier2': 50, 'supplier3': 100}
shipping_cost_from_supplier_to_roastery = {
    ('supplier1', 'roastery1'): 5, ('supplier1', 'roastery2'): 4,
    ('supplier2', 'roastery1'): 6, ('supplier2', 'roastery2'): 3,
    ('supplier3', 'roastery1'): 2, ('supplier3', 'roastery2'): 7
}
roasting_cost_light = {'roastery1': 3, 'roastery2': 5}
roasting_cost_dark = {'roastery1': 5, 'roastery2': 6}

shipping_cost_from_roastery_to_cafe = {
    ('roastery1', 'cafe1'): 5, ('roastery1', 'cafe2'): 3, ('roastery1', 'cafe3'): 6,
    ('roastery2', 'cafe1'): 4, ('roastery2', 'cafe2'): 5, ('roastery2', 'cafe3'): 2
}

light_coffee_needed_for_cafe = {'cafe1': 20, 'cafe2': 30, 'cafe3': 40}
dark_coffee_needed_for_cafe = {'cafe1': 20, 'cafe2': 20, 'cafe3': 100}
cafes = list(set(i[1] for i in shipping_cost_from_roastery_to_cafe.keys()))
roasteries = list(set(i[1] for i in shipping_cost_from_supplier_to_roastery.keys()))
suppliers = list(set(i[0] for i in shipping_cost_from_supplier_to_roastery.keys()))
# OPTIGUIDE DATA CODE GOES HERE
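# (At runtime, OptiGuide splices LLM-generated data-modification code at the
#  marker above -- e.g., the DATA CODE snippets listed in Appendix C.2.)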
# Create a new model
model = Model("coffee_distribution")

# Create variables
x = model.addVars(shipping_cost_from_supplier_to_roastery.keys(),
                  vtype=GRB.INTEGER, name="x")
y_light = model.addVars(shipping_cost_from_roastery_to_cafe.keys(),
                        vtype=GRB.INTEGER, name="y_light")
y_dark = model.addVars(shipping_cost_from_roastery_to_cafe.keys(),
                       vtype=GRB.INTEGER, name="y_dark")

# Set objective
model.setObjective(
    sum(x[i] * shipping_cost_from_supplier_to_roastery[i]
        for i in shipping_cost_from_supplier_to_roastery.keys()) +
    sum(roasting_cost_light[r] * y_light[r, c] + roasting_cost_dark[r] * y_dark[r, c]
        for r, c in shipping_cost_from_roastery_to_cafe.keys()) +
    sum((y_light[j] + y_dark[j]) * shipping_cost_from_roastery_to_cafe[j]
        for j in shipping_cost_from_roastery_to_cafe.keys()),
    GRB.MINIMIZE)
# Conservation of flow constraint
for r in set(i[1] for i in shipping_cost_from_supplier_to_roastery.keys()):
    model.addConstr(
        sum(x[i] for i in shipping_cost_from_supplier_to_roastery.keys()
            if i[1] == r) ==
        sum(y_light[j] + y_dark[j]
            for j in shipping_cost_from_roastery_to_cafe.keys() if j[0] == r),
        f"flow_{r}")
# Add supply constraints
for s in set(i[0] for i in shipping_cost_from_supplier_to_roastery.keys()):
    model.addConstr(
        sum(x[i] for i in shipping_cost_from_supplier_to_roastery.keys()
            if i[0] == s) <= capacity_in_supplier[s], f"supply_{s}")
# Add demand constraints
for c in set(i[1] for i in shipping_cost_from_roastery_to_cafe.keys()):
    model.addConstr(
        sum(y_light[j] for j in shipping_cost_from_roastery_to_cafe.keys()
            if j[1] == c) >= light_coffee_needed_for_cafe[c],
        f"light_demand_{c}")
    model.addConstr(
        sum(y_dark[j] for j in shipping_cost_from_roastery_to_cafe.keys()
            if j[1] == c) >= dark_coffee_needed_for_cafe[c],
        f"dark_demand_{c}")
# Optimize model
model.optimize()
m = model
# OPTIGUIDE CONSTRAINT CODE GOES HERE
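# (At runtime, OptiGuide splices LLM-generated constraint code at the marker
#  above -- e.g., the CONSTRAINT CODE snippets in Appendix C.2 -- and re-solves.)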
# Solve
m.update()
model.optimize()
print(time.ctime())
if m.status == GRB.OPTIMAL:
    print(f'Optimal cost: {m.objVal}')
else:
    print("Not solved to optimality. Optimization status:", m.status)
# C.2 Question and Ground Truth Macros
QUESTION: What would happen if demand at cafe {{VALUE-CAFE}} increased by {{VALUE-NUMBER}}%?
VALUE-CAFE: random.choice(cafes)
VALUE-NUMBER: random.randrange(5,30)
DATA CODE:
light_coffee_needed_for_cafe[{{VALUE-CAFE}}] = \
    light_coffee_needed_for_cafe[{{VALUE-CAFE}}] * (1 + {{VALUE-NUMBER}}/100)
dark_coffee_needed_for_cafe[{{VALUE-CAFE}}] = \
    dark_coffee_needed_for_cafe[{{VALUE-CAFE}}] * (1 + {{VALUE-NUMBER}}/100)
TYPE: demand-increase
QUESTION: What if demand for light coffee at cafe {{VALUE-CAFE}} increased by {{VALUE-NUMBER}}%?
VALUE-CAFE: random.choice(cafes)
VALUE-NUMBER: random.randrange(5,30)
DATA CODE:
light_coffee_needed_for_cafe[{{VALUE-CAFE}}] = \
    light_coffee_needed_for_cafe[{{VALUE-CAFE}}] * (1 + {{VALUE-NUMBER}}/100)
QUESTION: Why are we using supplier {{VALUE-SUPPLIER}} for roasting facility {{VALUE-ROASTERY}}?
VALUE-SHIPPINGS: [(s, r) for (s, r), value in x.items() if value.X >= 0.999]
VALUE-IDX: random.randint(0, len({{VALUE-SHIPPINGS}}) - 1)
VALUE-SUPPLIER: {{VALUE-SHIPPINGS}}[{{VALUE-IDX}}][0]
VALUE-ROASTERY: {{VALUE-SHIPPINGS}}[{{VALUE-IDX}}][1]
CONSTRAINT CODE:
m.addConstr(x[{{VALUE-SUPPLIER}},{{VALUE-ROASTERY}}] == 0, "_")
TYPE: supply-roastery
QUESTION: Assume cafe {{VALUE-CAFE}} can exclusively buy coffee from roasting facility {{VALUE-ROASTERY}}, and conversely, roasting facility {{VALUE-ROASTERY}} can only sell its coffee to cafe {{VALUE-CAFE}}. How does that affect the outcome?
VALUE-ROASTERY: random.choice(roasteries)
VALUE-CAFE: random.choice(cafes)
CONSTRAINT CODE:
for c in cafes:
    if c != {{VALUE-CAFE}}:
        m.addConstr(y_light[{{VALUE-ROASTERY}}, c] == 0, "_")
        m.addConstr(y_dark[{{VALUE-ROASTERY}}, c] == 0, "_")
for r in roasteries:
    if r != {{VALUE-ROASTERY}}:
        m.addConstr(y_light[r,{{VALUE-CAFE}}] == 0, "_")
        m.addConstr(y_dark[r,{{VALUE-CAFE}}] == 0, "_")
TYPE: exclusive-roastery-cafe
QUESTION: What if roasting facility {{VALUE-ROASTERY}} can only be used for cafe {{VALUE-CAFE}}?
VALUE-ROASTERY: random.choice(roasteries)
VALUE-CAFE: random.choice(cafes)
CONSTRAINT CODE:
for c in cafes:
    if c != {{VALUE-CAFE}}:
        m.addConstr(y_light[{{VALUE-ROASTERY}}, c] == 0, "_")
        m.addConstr(y_dark[{{VALUE-ROASTERY}}, c] == 0, "_")
TYPE: incompatible-roastery-cafes
QUESTION:
27
What if supplier {{VALUE-SUPPLIER}} can now provide only half of the quantity? VALUE-SUPPLIER: random.choice(suppliers) DATA CODE: capacity_in_supplier[{{VALUE-SUPPLIER}}] = capacity_in_supplier[{{VALUE-SUPPLIER}}]/2 TYPE: supplier-capacity | 2307.03875#68 | Large Language Models for Supply Chain Optimization | Supply chain operations traditionally involve a variety of complex decision
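The DATA CODE templates above implement what-if analysis by mutating a single model input and re-solving. A self-contained toy version of that loop, with invented capacities, costs, and demand standing in for the paper's coffee-supply model:

from gurobipy import Model, GRB, quicksum

suppliers = ["supplier1", "supplier2"]
roasteries = ["roastery1", "roastery2"]
capacity_in_supplier = {"supplier1": 150, "supplier2": 50}
shipping_cost_from_supplier_to_roastery = {
    ("supplier1", "roastery1"): 5, ("supplier1", "roastery2"): 4,
    ("supplier2", "roastery1"): 6, ("supplier2", "roastery2"): 3,
}

def min_cost(demand=100.0):
    m = Model("what_if")
    m.Params.OutputFlag = 0
    x = m.addVars(suppliers, roasteries, name="x")
    for s in suppliers:
        m.addConstr(x.sum(s, "*") <= capacity_in_supplier[s], "_")
    m.addConstr(x.sum("*", "*") >= demand, "_")
    m.setObjective(
        quicksum(shipping_cost_from_supplier_to_roastery[s, r] * x[s, r]
                 for s in suppliers for r in roasteries),
        GRB.MINIMIZE)
    m.optimize()
    return m.ObjVal

baseline = min_cost()
capacity_in_supplier["supplier1"] /= 2  # the "half the quantity" what-if above
print(baseline, "->", min_cost())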
2307.03875 | 69 | QUESTION: The per-unit cost from supplier {{VALUE-SUPPLIER}} to roasting facility {{VALUE-ROASTERY}} is now {{VALUE-NUMBER}}. How does that affect the total cost?
VALUE-SUPPLIER: random.choice(suppliers)
VALUE-ROASTERY: random.choice(roasteries)
VALUE-NUMBER: random.randrange(1,10)
DATA CODE:
shipping_cost_from_supplier_to_roastery[{{VALUE-SUPPLIER}}, {{VALUE-ROASTERY}}] = {{VALUE-NUMBER}}
TYPE: supplier-roastery-shipping
QUESTION: What would happen if roastery 2 produced at least as much light coffee as roastery 1?
CONSTRAINT CODE:
m.addConstr(sum(y_light['roastery1', c] for c in cafes)
            <= sum(y_light['roastery2', c] for c in cafes), "_")
TYPE: light-quantities-roasteries
QUESTION: What would happen if roastery 1 produced less light coffee than roastery 2?
CONSTRAINT CODE:
m.addConstr(sum(y_light['roastery1', c] for c in cafes) | 2307.03875#69 | Large Language Models for Supply Chain Optimization |
2307.03875 | 70 | <= sum(y_light['roastery2', c] for c in cafes) - 1, "_")
TYPE: light-quantities-roasteries
QUESTION: What will happen if supplier 1 ships more to roastery 1 than roastery 2?
CONSTRAINT CODE:
m.addConstr(x['supplier1', 'roastery1'] >= x['supplier1', 'roastery2'] + 1, "_")
TYPE: shipping-quantities-roasteries
QUESTION: What will happen if supplier 1 ships to roastery 1 at least as much as to roastery 2?
CONSTRAINT CODE:
m.addConstr(x['supplier1', 'roastery1'] >= x['supplier1', 'roastery2'], "_")
TYPE: shipping-quantities-roasteries
QUESTION: Why not only use a single supplier for roastery 2?
CONSTRAINT CODE:
z = m.addVars(suppliers, vtype=GRB.BINARY, name="z")
m.addConstr(sum(z[s] for s in suppliers) <= 1, "_")
for s in suppliers: | 2307.03875#70 | Large Language Models for Supply Chain Optimization |
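The last template above is cut off mid-loop at a chunk boundary. A common way to finish this single-supplier pattern is to link each binary z[s] to the shipment variables with a capacity (big-M) constraint; the loop body below is a plausible reconstruction under that assumption, not the paper's verbatim code.

from gurobipy import Model, GRB

suppliers = ["supplier1", "supplier2"]
capacity_in_supplier = {"supplier1": 150, "supplier2": 50}

m = Model("single_supplier")
m.Params.OutputFlag = 0
x = m.addVars(suppliers, ["roastery1", "roastery2"], name="x")
z = m.addVars(suppliers, vtype=GRB.BINARY, name="z")
m.addConstr(sum(z[s] for s in suppliers) <= 1, "_")
for s in suppliers:
    # Assumed loop body: supplier s may ship to roastery 2 only if selected.
    m.addConstr(x[s, "roastery2"] <= capacity_in_supplier[s] * z[s], "_")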
2307.03875 | 72 | User: Plot demands in East US that docked between the Memorial Day and the Independence Day.
OptiGuide: Inserting the following line for analysis:
from datetime import date
date_filter = lambda ideal, actual: actual >= date(2023, 5, 29) and actual <= date(2023, 7, 4)
demand_filter = lambda d: "east us" in f"{d.region_name} {d.geo_name} {d.country_name}".lower()
plot_path = plot_demand_date(data, demand_filter=demand_filter, date_filter=date_filter, is_swap_allowed=True, max_demands=10)
[Bar chart; legend: On Ideal Date, Dock Early, Dock Late; x-axis: Demand ID]
User: Plot demands in Continent ABC
OptiGuide: I'm sorry, but "Continent ABC" is not a valid continent name. Please provide a valid continent name such as Africa, Asia, Europe, North America, South America, Australia, or Antarctica, and I will be happy to help you plot the demands for that continent.
User: Plot demands in data center XYZ
OptiGuide: Inserting the following line for analysis:
filter = lambda d: "XYZ" in f"{d.data_center_name}"
plot_path = plot_demand_date(data, | 2307.03875#72 | Large Language Models for Supply Chain Optimization |
2307.03875 | 73 | line for analysis:
filter = lambda d: "XYZ" in f"{d.data_center_name}"
plot_path = plot_demand_date(data, demand_filter=filter, max_demands=50)
Sorry, I cannot answer your question because I encountered an error while writing and running code.
[Table; columns: Code, Line, Explanation]
This line tries to plot the demands using the filter function created earlier, but it fails because the filter function references an invalid attribute 'data_center_name', raising an AttributeError. Please provide the correct attribute name for the data center in the ForecastOrder object, or ask the administrator for assistance.
Conclusion: The code failed to plot the demands in data center XYZ due to an incorrect attribute name in the filter function. | 2307.03875#73 | Large Language Models for Supply Chain Optimization |
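The failure in the transcript above stems from the generated filter guessing an attribute that the demand record does not expose. A small sketch of that mechanic, using an invented ForecastOrder schema (the real field names are proprietary):

from dataclasses import dataclass

@dataclass
class ForecastOrder:
    # Invented fields for illustration only.
    region_name: str
    geo_name: str
    country_name: str
    data_center: str  # note: not `data_center_name`

demands = [ForecastOrder("East US", "Americas", "USA", "DC-XYZ")]

# The working query: a text filter over known attributes.
demand_filter = lambda d: "east us" in f"{d.region_name} {d.geo_name} {d.country_name}".lower()
print([d for d in demands if demand_filter(d)])

# The failing query: a non-existent attribute raises AttributeError,
# which is the error the assistant surfaced and explained above.
bad_filter = lambda d: "XYZ" in d.data_center_name
try:
    print([d for d in demands if bad_filter(d)])
except AttributeError as err:
    print("caught:", err)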
2307.03762 | 2 | # 1Beijing Institute for General Artificial Intelligence (BIGAI) 2Peking University
In this perspective paper, we first comprehensively review existing evaluations of Large Language Models (LLMs) using both standardized tests and ability-oriented benchmarks. We pinpoint several problems with current evaluation methods that tend to overstate the capabilities of LLMs. We then articulate what artificial general intelligence should encompass beyond the capabilities of LLMs. We propose four characteristics of generally intelligent agents: 1) they can perform unlimited tasks; 2) they can generate new tasks within a context; 3) they operate based on a value system that underpins task generation; and 4) they have a world model reflecting reality, which shapes their interaction with the world. Building on this viewpoint, we highlight the missing pieces in artificial general intelligence, that is, the unity of knowing and acting. We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations. Additionally, knowledge acquisition isn't solely reliant on passive input but requires repeated trials and errors. We conclude by outlining promising future research directions in the field of artificial general intelligence.
# 1. Introduction
Those who "know" but do not act simply do not yet know. - Wang Yang-Ming (Wang, 1963) | 2307.03762#2 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{"id": "2305.15068"}, {"id": "2204.02311"}, {"id": "2103.03874"}, {"id": "2306.00503"},
{"id": "2305.19555"}, {"id": "2302.08399"}, {"id": "2201.11903"}, {"id": "2305.12474"},
{"id": "2305.18654"}, {"id": "2306.05836"}, {"id": "2107.12580"}, {"id": "2305.14825"},
{"id": "1911.01547"}, {"id": "2304.06364"}, {"id": "2305.17256"}, {"id": "2206.08853"},
{"id": "2306.09841"}, {"id": "1803.10122"}, {"id": "2305.12199"}, {"id": "2305.17144"},
{"id": "2206.04615"}, {"id": "2305.16291"}, {"id": "2211.10435"}, {"id": "2206.07550"},
{"id": "2306.01337"}, {"id": "2306.09479"}, {"id": "2301.06627"}, {"id": "2305.07666"},
{"id": "2302.02083"}, {"id": "2305.15074"}, {"id": "2211.12588"}, {"id": "2304.15004"},
{"id": "2211.14275"}, {"id": "2206.09203"}, {"id": "2303.12712"}, {"id": "2305.14763"},
{"id": "2306.11167"} ] |
2307.03762 | 3 | In his famous thought experiment, "Brain in a Vat (BiV)", Hilary Putnam introduced a hypothetical situation where each person's brain is detached from their body and sustained with nutrients while the neurons are connected to the wires of a powerful supercomputer (Putnam et al., 1981). This computer generates a convincing illusion, making individuals believe that everything is functioning as usual as they are in the real world. For example, dwellers in the BiV world can eat, work, sleep, communicate, and interact with the environment while having the normal conscious experience. The only difference is that the stimulating reality is generated from the electric impulses of the supercomputer rather than the neural responses to the objects and events from the world. Putnam refuted the hypothesis by investigating the ability of grounding - the connection between word and world. He argued that BiV world-dwellers would never grasp the meaning of words even though they can speak fluently and write correctly given that they cannot connect those representations with real-world objects. For example, the dweller may claim that he is a brain in a vat, but the words "brain" and "vat" in his disembodied context do not correspond exactly | 2307.03762#3 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 4 | may claim that he is a brain in a vat, but the words "brain" and "vat" in his disembodied context do not correspond exactly to the semantics of "brain" and "vat" in the real world (Putnam et al., 1981). The BiV hypothesis offers us a valuable lens to look into the current status of Large Language Models (LLMs). We argue that the current LLMs are no more than the Brain in a Vat because of their inherent construction process - statistical modeling that analyzes patterns in massive text corpora to predict linguistic relationships and generate responses based on the previous tokens (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023). As a result, their output is confined within the training data and they cannot establish a connection between symbols and real-world entities (Mahowald et al., 2023; Mitchell and Krakauer, 2023). | 2307.03762#4 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 5 | *Correspondence: [email protected]
Rumor has it that we have witnessed "sparks of artificial general intelligence" (Bubeck et al., 2023). However, we argue that our perception of intelligence in LLMs is largely due to our human ability for extracting semantics from even meaningless symbols they generate (Bender et al., 2021). One famous example dates back to Chomsky (2009), where the syntactically well-formed sentence
"Colorless green ideas sleep furiously",
seemingly semantically nonsensical, could be made sense of by human subjects, even considered philosophical (Levinovitz, 2017). | 2307.03762#5 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 6 | "Colorless green ideas sleep furiously",
seemingly semantically nonsensical, could be made sense of by human subjects, even considered philosophical (Levinovitz, 2017).
Symbol grounding, as demonstrated above between even the symbolically meaningless combination of words and its assigned semantics of reality from humans, is so important that when the correspondence between symbols and objects is broken, a purely symbolic language speaker will be easily trapped in a merry-go-round situation, where a set of symbols map to another set of symbols with no breaking points from grounded semantics (Harnad, 1990). The theory is echoed in a preliminary experiment regarding a conversation between two GPT-4 sessions shown in Fig. 1. In this conversation, we started it off with a brief statement about the weather ("It is hot today") and then allowed the two GPT-4 sessions to continually chat with each other. The conversation quickly converged to expressing thanks after several exchanges. With slight variation only in wording, the language model started repeating itself, resulting in the so-called linguistic merry-go-round.
# GPT-4 Self-Talk | 2307.03762#6 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 7 | # GPT-4 Self-Talk
Starter: It is hot today.
GPT-4 Session 1: Yes, it seems like a hot day today! Stay hydrated and try to stay cool. Make sure you wear sunscreen if you're heading outdoors.
GPT-4 Session 2: I appreciate the advice! I'll definitely drink plenty of water and wear sunscreen. Maybe it's a good day for some ice cream as well.
GPT-4 Session 1: You're welcome! Enjoying some ice cream sounds like a great idea on a hot day like this. Have fun and stay cool!
GPT-4 Session 2: Thank you! I'll make sure to have a fantastic time and stay cool. Enjoy your day too!
......
GPT-4 Session 1: Thank you! If you have any questions or need assistance in the future, feel free to ask. Take care and have a wonderful day!
GPT-4 Session 2: Thanks! I appreciate your support and readiness to help. Have a fantastic "day" and see you next time! | 2307.03762#7 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
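The dialogue above can be reproduced by wiring two chat sessions together so that each session's reply becomes the other's next user message. A minimal sketch of that loop follows; chat is a stand-in stub rather than a real API call, and its canned reply only mimics the thank-you convergence.

def chat(history: list[dict]) -> str:
    # Stand-in for a chat-completion call; replace with a real LLM client.
    last = history[-1]["content"] if history else "Hello!"
    return "Thank you! " + last.split("!")[0] + "!"

def self_talk(starter: str, rounds: int = 5) -> None:
    hist_a = [{"role": "user", "content": starter}]  # session 1's view
    hist_b: list[dict] = []                          # session 2's view
    for _ in range(rounds):
        reply_a = chat(hist_a)  # session 1 responds to the latest message
        hist_a.append({"role": "assistant", "content": reply_a})
        hist_b.append({"role": "user", "content": reply_a})
        reply_b = chat(hist_b)  # session 2 answers session 1
        hist_b.append({"role": "assistant", "content": reply_b})
        hist_a.append({"role": "user", "content": reply_b})
        print("Session 1:", reply_a)
        print("Session 2:", reply_b)

self_talk("It is hot today.")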
Fig. 1: The conversation between two GPT-4 sessions after a given conversation starter. The two language models quickly converge into expressing thanks (denoted as yellow) for each other after only one round, without breaking the loop even after five rounds, echoing the theory that without grounding, dialogues will enter an infinite loop.
Another reason why symbol grounding is critical is that human intelligence is intrinsically linked to our sensory experience and physical interaction with the world (Glenberg, 2010; Lakoff et al., 1999; Smith and Gasser, 2005). We create symbols to represent objects in the real world, enabling the preservation and transmission of knowledge across generations; however, the initial process of acting to create novel experiences and turn them into formal knowledge shall not be missed. In fact, action and knowledge are inherently connected; our profound understanding of the world is not through simply reading manuals but rather via repeated trials and errors and collected knowledge, either directly from tasks at hand or by abstracting and transferring insights from others' experiences. While knowledge reflects our abilities to interact with the world (e.g., reasoning, problem-solving, social understanding), models that simply compress the static knowledge and generate relevant output from statistical correlations do not know how to act in novel scenarios (see Sec. 4 for discussions). | 2307.03762#8 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 9 | In this perspective paper, we first conduct a review of the existing evaluation of LLMs from both standardized exams and ability-oriented benchmarks. We also discuss the potential issues with the current evaluation methodology that lead to the impression of LLMs being "omnipotent". We then point out what artificial general
intelligence should be beyond LLMs. Based on this view, we further present our vision of the missing pieces in artificial general intelligence, i.e., the unity of knowing and acting, and discuss potential directions towards this goal. We argue that it is through ongoing interaction with the world, e.g., via trials and errors, that agents crystallize the experiences into knowledge and generalize it to the new context. We conclude the work by presenting future directions we believe are fruitful for artificial general intelligence research.
# 2. Evaluation of Large Language Models | 2307.03762#9 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 10 | # 2. Evaluation of Large Language Models
Throughout the evolution of LLMs, numerous tasks have been introduced to measure "intelligence". Broadly, these tasks fall into two categories: those that evaluate intelligence based on exam-style assessments, and those that gauge it based on capabilities similar to human abilities. In the exam-based approach, researchers utilize standardized tests of different subjects commonly administered to human test takers, e.g., SAT, GRE, LSAT and Gaokao (China College Entrance Exam), to measure the performance of LLMs. This allows for a direct comparison of their performance relative to human performance (Zhong et al., 2023). In the other type of evaluation based on human-like ability, researchers construct tasks that probe into a specific ability that is commonly believed to be characteristic of human intelligence, e.g., reasoning, theory of mind, and problem-solving (Binz and Schulz, 2023; Jiang et al., 2022; Kosinski, 2023; Shiffrin and Mitchell, 2023). These abilities are usually well-studied psychological constructs that serve as the cornerstones of human intelligence.
# 2.1. Performance on Standardized Test | 2307.03762#10 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 11 | OpenAI (2023) provided the first peek into how GPT-4 performs in standardized tests used in the US (reproduced in Tab. 1). At first sight, GPT-4 has achieved remarkable performance compared to human participants in tests such as the SAT, LSAT, Bar Exam, and GRE Verbal. However, another recent bilingual benchmark (Zhong et al., 2023) found that GPT-4 could be largely biased to subjects with sufficient data (reproduced in Fig. 2 and Tab. 2): on tests for the English language and others with sufficient sources for training, such as the SAT, the model outperforms the human counterparts, nearly matching the top performers. However, for tasks with less data and designed for reasoning rather than language usage, the model starts to fare much worse, as evidenced by its performance on the Lawyer Qualification Test, Civil Service Exam, and Gaokao. The results suggest that the seemingly excellent performance may be the result of a cramming strategy - memorizing via repetition - but the model has not learned how to perform reasoning. We also find that GPT-4 performs better in subjects where a | 2307.03762#11 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 12 | via repetition, but the model has not learned how to perform reasoning. We also find that GPT-4 performs better in subjects where a cramming strategy is usually effective, such as history, geography, and biology, while its performance drops in subjects that require strong reasoning and problem-solving skills, such as mathematics, physics, and chemistry. The community has since followed up on this stream of research. In Zhang et al. (2023), the China College Entrance Exam is analyzed in detail, with extremely low scores in every subject except English, and the total scores way lower than average (see Fig. 3). In Arora et al. (2023), the Joint Entrance Examination (JEE) Advanced exam, held annually in India as an entrance exam for India's premier engineering institutes, is investigated (see Tab. 3). Similar to Gaokao, GPT-4 struggles in JEEBench-Math, solving close to a mere 20% of the problems. Almost the same conclusion is reached in the Vietnamese High School Graduation Examination (Xuan-Quy et al., 2023), where LLMs are found to be fluent in literature, English, history, geography, and civics education, but show large gaps | 2307.03762#12 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 13 | et al., 2023), where LLMs are found to be fluent in literature, English, history, geography, and civics education, but show large gaps in mathematics, physics, chemistry, and biology (see Fig. 4). It's important to note that while the performance of Large Language Models (LLMs) on standardized tests can serve as an indicator of their abilities, it should not be considered the ultimate measure of their competencies. Using these test scores to generalize their capabilities across various cognitive tasks and real-world applications may be misleading, as standardized tests have witnessed much criticism regarding their validity and reliability | 2307.03762#13 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
2307.03762 | 15 | Exam: GPT-4; GPT-4 (no vision)
Uniform Bar Exam (MBE+MEE+MPT): 298 / 400 (~90th); 298 / 400 (~90th)
LSAT: 163 (~88th); 161 (~83rd)
SAT Evidence-Based Reading & Writing: 710 / 800 (~93rd); 710 / 800 (~93rd)
SAT Math: 700 / 800 (~89th); 690 / 800 (~89th)
Graduate Record Examination (GRE) Quantitative: 163 / 170 (~80th); 157 / 170 (~62nd)
Graduate Record Examination (GRE) Verbal: 169 / 170 (~99th); 165 / 170 (~96th)
Graduate Record Examination (GRE) Writing: 4 / 6 (~54th); 4 / 6 (~54th)
USABO Semifinal Exam 2020: 87 / 150 (99th - 100th); 87 / 150 (99th - 100th)
USNCO Local Section Exam 2022: 36 / 60; 38 / 60
Medical Knowledge Self-Assessment Program: 75 %; 75 %; 53 %
Codeforces Rating: 392 (below 5th); 392 (below 5th)
AP Art History: 5 (86th - 100th); 5 (86th - 100th)
AP Biology: 5 (85th - 100th); 5 (85th - 100th)
AP Calculus BC: 4 (43rd - 59th); 4 | 2307.03762#15 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models |
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 16 | (Table 1, continued) Exam | GPT-4 | GPT-4 (no vision)
AP Biology | 5 (85th - 100th) | 5 (85th - 100th)
AP Calculus BC | 4 (43rd - 59th) | 4 (43rd - 59th)
AP Chemistry | 4 (71st - 88th) | 4 (71st - 88th)
AP English Language and Composition | 2 (14th - 44th) | 2 (14th - 44th)
AP English Literature and Composition | 2 (8th - 22nd) | 2 (8th - 22nd)
AP Environmental Science | 5 (91st - 100th) | 5 (91st - 100th)
AP Macroeconomics | 5 (84th - 100th) | 5 (84th - 100th)
AP Microeconomics | 5 (82nd - 100th) | 4 (60th - 82nd)
AP Physics 2 | 4 (66th - 84th) | 4 (66th - 84th)
AP Psychology | 5 (83rd - 100th) | 5 (83rd - 100th)
AP Statistics | 5 (85th - 100th) | 5 (85th - 100th)
AP US Government | 5 (88th - 100th) | 5 (88th - 100th)
AP US History | 5 (89th - 100th) | 4 (74th - 89th)
AP World History | 4 (65th - 87th) | 4 (65th - 87th)
AMC 10 | 30 / 150 (6th - 12th) | 36 / 150 (10th - 19th) | 2307.03762#16 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 17 | (Table 1, continued) Exam | GPT-4 | GPT-4 (no vision) | GPT-3.5
AP World History | 4 (65th - 87th) | 4 (65th - 87th) | -
AMC 10 | 30 / 150 (6th - 12th) | 36 / 150 (10th - 19th) | -
AMC 12 | 60 / 150 (45th - 66th) | 48 / 150 (19th - 40th) | -
Introductory Sommelier (theory knowledge) | 92% | 92% | 80%
Certified Sommelier (theory knowledge) | 86% | 86% | 58%
Advanced Sommelier (theory knowledge) | 77% | 77% | 46%
Leetcode (easy) | 31 / 41 | 31 / 41 | -
Leetcode (medium) | 21 / 80 | 21 / 80 | 8 / 80
Leetcode (hard) | 3 / 45 | 3 / 45 | 0 / 45 | 2307.03762#17 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 18 | GPT-3.5 column of Table 1, in the same exam order as the rows above: Uniform Bar Exam: 213 / 400 (~10th); LSAT: 149 (~40th); SAT Evidence-Based Reading & Writing: 670 / 800 (~87th); SAT Math: 590 / 800 (~70th); GRE Quantitative: 147 / 170 (~25th); GRE Verbal: 154 / 170 (~63rd); GRE Writing: 4 / 6 (~54th); USABO Semifinal Exam 2020: 43 / 150 (31st - 33rd); Codeforces Rating: 260 (below 5th); AP Art History: 5 (86th - 100th); AP Biology: 4 (62nd - 85th); AP Calculus BC: 1 (0th - 7th); AP Chemistry: 2 (22nd - 46th); AP English Language and Composition: 2 (14th - 44th); AP English Literature and Composition: 2 (8th - 22nd); AP Environmental Science: 5 (91st - 100th); AP Macroeconomics: 2 (33rd - 48th); AP Microeconomics: 4 (60th - 82nd); AP Physics 2: 3 (30th - 66th); AP Psychology: 5 (83rd - 100th); AP Statistics: 3 (40th - 63rd); AP US Government: 4 (77th - 88th); AP US History: 4 (74th - 89th); AP World History: 4 (65th - 87th); AMC 10: 36 / 150 (10th - 19th); AMC 12: 30 / 150 (4th - 8th).
Table 1: GPT-4 performance (percentile) on academic and professional exams, adopted from OpenAI (2023).
[Fig. 2 residue: task axis labels: China College Entrance Exam, Lawyer Qualification Test, LSAT, Civil Service Exam, GMAT & GRE, Math Competition] | 2307.03762#18 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 19 | [Fig. 2 residue: legend: Avg. Human Performance, Top Human Performance, GPT-4, ChatGPT, Text-Davinci-003; task axis: China College Entrance Exam, Lawyer Qualification Test, LSAT, Civil Service Exam, GMAT & GRE, Math Competition]
[Fig. 3 residue: subject axis: Chinese, Math, English, Physics, Chemistry, Biology, Politics, History, Geography]
Fig. 2: Relative performance of models compared to humans in AGIEval. Figure adopted from Zhong et al. (2023).
Fig. 3: The scores of subjective and objective questions in each subject of ChatGPT in Gaokao. Figure adopted from Zhang et al. (2023) | 2307.03762#19 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 23 | Table 2: Performance of LLMs on 20 tasks under few-shot and few-shot CoT settings in AGIEval. Table adopted from Zhong et al. (2023)
[Table 3 residue: models: Random, GPT-3, GPT-3.5, GPT-4, GPT-4+CoT, GPT-4+CoT+SC; only four aggregate scores recoverable: 0.428, 0.334, 0.231, 0.316]
Table 3: This table shows the score obtained by all the models on JEEBench aggregated by subject. Table adopted from Arora et al. (2023).
(Visone, 2010).
The consistent observation that LLMs perform well in language usage exams but struggle with problem-solving implies that reasoning should not be purely implemented with System 1 responses, which only involve quick,
[Fig. 4 residue: per-subject scores of ChatGPT and BingChat across Mathematics, Literature, English, Physics, Chemistry, Biology, History, Geography, and Civic Education] | 2307.03762#23 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 24 | Fig. 4: Comparison of ChatGPT and BingChat performances on VNHSGE dataset. Figure adopted from Xuan-Quy et al. (2023).
intuitive decisions learned from associations; instead, it necessitates the engagement of System 2 processing, characterized by deliberate and analytical contemplation, to be effectively realized (Daniel, 2013).
# 2.2. Ability-Oriented Benchmarks
Apart from standardized tests widely administered to human subjects for talent selection, a wide array of ability-oriented tests have been conducted on LLMs to probe if they possess human-level general intelligence. In the following, we detail seven different areas where extensive work has been conducted for investigation.
# 2.2.1. Mathematical Reasoning
Mathematical reasoning is one of the most studied aspects of LLMs' reasoning ability. Ever since the success of Chain-of-Thought prompting (Wei et al., 2022), a variety of prompting-based methods have been proposed for eliciting mathematical reasoning on math word problems (Chen et al., 2022; Gao et al., 2022; Uesato et al., 2022); a minimal CoT prompt-construction sketch is given below. | 2307.03762#24 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
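To make the prompting-based methods above concrete, here is a minimal sketch of Chain-of-Thought (CoT) prompting for a math word problem, assuming a generic chat-completion client; `ask_llm` is a hypothetical placeholder, not a real API, and the worked exemplar is illustrative rather than taken from any of the cited papers.

```python
# Minimal Chain-of-Thought (CoT) prompting sketch for a math word problem.
# `ask_llm` is a hypothetical stand-in for any chat-completion client.

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    # CoT prepends a worked exemplar so the model imitates step-by-step
    # reasoning before committing to a final answer.
    return f"{FEW_SHOT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM client here")

if __name__ == "__main__":
    print(build_cot_prompt(
        "A robe takes 2 bolts of blue fiber and half that much white fiber. "
        "How many bolts in total does it take?"))
```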
2307.03762 | 25 | Bubeck et al. (2023) even claim that GPT-4 could "solve" an IMO-level problem. However, their conclusion could be extremely misleading, as the problem has been significantly simplified. Fig. 5 shows the original problem (Problem 2 in IMO 2022) and GPT-4's response. While it is not hard to guess that f(x) = 1/x is a solution (a numerical sanity check of this guess is sketched below), the stream of reasoning is flawed. In GPT-4's reply, the equality is only reached when we assume continuity, which is not guaranteed by the conditions. Besides, even when we reach xf(g(x)) + g(x)f(x) = 2, we need to recursively apply g(·) to obtain g(x)f(g(g(x))) + g(g(x))f(g(x)) = 2. The boldest claim is that x = g(g(x)), obtained by comparing the structure of the two equations. However, it is not possible to reach xf(g(x)) = g(x)f(x), let alone x = g(x). Therefore, the proving | 2307.03762#25 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
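As a numerical sanity check on the discussion above: for f(x) = 1/x, xf(y) + yf(x) = x/y + y/x, which by AM-GM is at least 2, with equality exactly when x = y, matching the structure of the IMO condition. The sketch below verifies this on sampled points; it proves nothing about the uniqueness of f, which is precisely the part GPT-4's argument fails to establish.

```python
# Check: for f(x) = 1/x, x*f(y) + y*f(x) = x/y + y/x >= 2, with equality
# iff x == y, so for each x exactly one y attains x*f(y) + y*f(x) <= 2.
import random

def lhs(x: float, y: float) -> float:
    f = lambda t: 1.0 / t
    return x * f(y) + y * f(x)

for _ in range(5):
    x = random.uniform(0.1, 10.0)
    assert abs(lhs(x, x) - 2.0) < 1e-9   # equality at y = x
    y = x * random.uniform(1.01, 3.0)    # any y != x
    assert lhs(x, y) > 2.0               # strictly greater than 2 otherwise
print("f(x) = 1/x satisfies the condition on all sampled points")
```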
2307.03762 | 26 | However, it is not possible to reach xf(g(x)) = g(x)f(x), let alone x = g(x). Therefore, the proof should be considered completely wrong. A more comprehensive study on mathematical reasoning is conducted in Wu et al. (2023), where researchers use slightly harder level-5 problems from the MATH dataset (Hendrycks et al., 2021) for evaluation. However, even under a chat setup, the model can still only reach about 40% accuracy overall (see Tab. 4). Note that the chat setting is far from the ideal automation we want in a tutoring scenario: in the end, it is us teaching the model rather than the model teaching students. | 2307.03762#26 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 28 | GPT-4: The conditions imposed on the function f imply some nontrivial properties that the function should have. Let's use these conditions to derive the properties of f. The condition means that for every x ∈ R+, there is exactly one y ∈ R+ such that xf(y) + yf(x) ≤ 2. Since x and y are both positive reals, the only way for this inequality to be satisfied is if xf(y) + yf(x) = 2. If xf(y) + yf(x) < 2 for any x, y ∈ R+, then we can increase y slightly to make the left-hand side closer to 2, contradicting the fact that there is exactly one y satisfying the condition for each x. So the condition is equivalent to saying that for every x ∈ R+, there is exactly one y ∈ R+ such that xf(y) + yf(x) = 2. Let's denote y as g(x), where g is a function from R+ to R+. So xf(g(x)) + g(x)f(x) = 2. Since the statement holds for all positive real numbers x, then we can interchange x and g(x) to get g(x)f(x) | 2307.03762#28 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 29 | = 2. Since the statement holds for all positive real numbers x, then we can interchange x and g(x) to get g(x)f(x) + xf(g(x)) = 2. So, we have xf(g(x)) = g(x)f(x), which means x = g(x) for all x ∈ R+, or in other words, the function g is the identity function. Therefore, we have 2xf(x) = 2 for all x ∈ R+. So f(x) = 1/x for all x ∈ R+. So the only function satisfying the given condition is f(x) = 1/x for all x ∈ R+. To check this, we substitute it back into the original equation and see if it holds. For f(x) = 1/x, we have x(1/y) + y(1/x) = 2, which is true for all x, y ∈ R+ where x = y. Therefore, f(x) = 1/x is indeed the only solution. | 2307.03762#29 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 30 | For f(x) = 1/x, we have x(1/y) + y(1/x) = 2, which is true for all x, y ∈ R+ where x = y. Therefore, f(x) = 1/x is indeed the only solution.
Fig. 5: GPT-4's response on the original IMO 2022 Problem 2, without simplification. Flaws denoted in red.
Categories (number of level-5 problems): Algebra (307), C.Prob (123), I.Alg (280), N.Theory (154), Prealg (193), Precalc (135), Total (1192). Methods: MathChat, PoT, PS, Vanilla. Accuracies as extracted (row/column alignment lost): 59.93%, 52.03%, 17.85%, 60.39%, 50.41%, 42.67%, 54.55%, 17.50%, 44.71%, 20.36%, 61.03%, 43.32%, 28.57%, 2.86%, 25.20%, 46.58%, 60.10%, 19.26%, 44.71%, 52.33%, 16.30%, 37.67%, 55.96%, 18.52%, 39.60%, 28.69%, 7.41%, 54.92%.
Table 4: Accuracy on all the problems with difficulty level-5 from different categories of the MATH dataset with different methods. Table adopted from Wu et al. (2023).
# 2.2.2. Logical Reasoning | 2307.03762#30 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 31 | # 2.2.2. Logical Reasoning
Logical reasoning could be considered the underlying mechanism that supports various forms of complex reasoning. In Xu et al. (2023), deductive, abductive, and inductive reasoning are each studied in depth. Based on the evaluation results (reproduced in Fig. 6), the LLMs are good at deductive reasoning but fare much worse on tasks that require applying existing knowledge (abduction) or forming knowledge from experience (induction).
Recent work shows that even the seemingly satisfactory performance of LLMs in deductive reasoning is rooted in semantic understanding rather than symbolic understanding (Tang et al., 2023b). Tang et al. (2023b) consider the interesting setting where the semantic words in logical reasoning problems are replaced with random symbols while the logical reasoning chain is kept intact. An example is shown in Fig. 7. Surprisingly, after this change, the performance of LLMs drops to near-random levels (see Tab. 5). This drastic performance cliff indicates that while language is the interface for communication, the computation underneath is not conducted solely in a textual format.
[Fig. 6 residue: accuracy (0 to 0.8) of Davinci-003, ChatGPT, and BARD on deductive, abductive, and inductive reasoning] | 2307.03762#31 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 33 | Original (entities named): Given a set of rules and facts, you have to reason whether a statement is true or false. Here are some facts and rules: The bear likes the dog. The cow is round. The cow likes the bear. The cow needs the bear. The dog needs the squirrel. The dog sees the cow. The squirrel needs the dog. If someone is round then they like the squirrel. If the bear is round and the bear likes the squirrel then the squirrel needs the bear. If the cow needs the dog then the cow is cold. Does it imply that the statement "The cow likes the squirrel." is True?
Decoupled (entity IDs): Given a set of rules and facts, you have to reason whether a statement is true or false. Here are some facts and rules: The e4 likes the e5. The e14 is e2. The e14 likes the e4. The e14 needs the e4. The e5 needs the e26. The e5 sees the e14. The e26 needs the e5. If someone is e2 then they like the e26. If the e4 is e2 and the e4 likes the e26 then the e26 needs the e4. If the e14 needs the e5 then the e14 is e1. Does it imply that the statement "The e14 likes the e26." is True? (A sketch of this renaming transform is given below.) | 2307.03762#33 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
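The decoupling manipulation in Fig. 7 amounts to a systematic renaming of content words while preserving logical structure. Below is a minimal sketch of such a transform; the regex-based implementation and the entity-to-ID mapping are illustrative assumptions, not the authors' actual code.

```python
# Sketch of the semantics-decoupling transform of Tang et al. (2023b):
# keep the logical structure of a ProofWriter-style prompt but replace
# content words with opaque entity IDs.
import re

def decouple(prompt: str, mapping: dict[str, str]) -> str:
    # Replace whole words only, so e.g. "cow" does not clobber "coward".
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], prompt)

mapping = {"bear": "e4", "cow": "e14", "dog": "e5",
           "squirrel": "e26", "round": "e2", "cold": "e1"}

original = ("The bear likes the dog. The cow is round. "
            "If someone is round then they like the squirrel.")
print(decouple(original, mapping))
# -> The e4 likes the e5. The e14 is e2. If someone is e2 then they like the e26.
```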
2307.03762 | 34 | Fig. 7: Decoupling semantics from a ProofWriter task. In the original ProofWriter task, entities are represented by their names (left). However, in their decoupled setting, the entity names are replaced with unique entity IDs (right). Figure adopted from Tang et al. (2023b).
# 2.2.3. Causal Reasoning
Jin et al. (2023) study causal inference with an approach similar to that of Tang et al. (2023b). They create a new dataset called Corr2Cause that peels off the semantics in causal reasoning and transforms the questions into primarily symbol-based descriptions. Fig. 8 shows one example of the data construction process. Compared to existing causal NLP evaluation datasets, Corr2Cause tests pure causal inference instead of empirical knowledge. In the experiments shown in Tab. 6, the authors find that LLMs achieve close-to-random performance on the task. Moreover, after finetuning, these models can only perform causal inference in in-distribution settings, where variable names and textual expressions in the queries resemble those in the training set, but fail in out-of-distribution settings generated by perturbing these queries. | 2307.03762#34 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 35 | Category | Model | Setting | Deduction | Induction | Abduction
Symbols | ChatGPT | Zero-Shot | 52.6 | 6.10 | 1.50
Symbols | ChatGPT | Zero-Shot-CoT | 55.7 | 7.86 | 4.90
Symbols | ChatGPT | Few-Shot-CoT | 54.8 | - | 18.2
Symbols | ChatGPT | Zero-Plus-Few-Shot-CoT | 55.7 | - | -
Symbols | GPT-4 | Zero-Shot | 68.8 | 9.28 | 25.0
Symbols | GPT-4 | Zero-Shot-CoT | 71.1 | 8.93 | 31.2
Symbols | GPT-4 | Few-Shot-CoT | 67.6 | - | 44.2
Semantics | ChatGPT | Zero-Shot | 66.1 | 36.4 | 2.94
Semantics | ChatGPT | Zero-Shot-CoT | 65.5 | 32.2 | 3.40
Semantics | ChatGPT | Few-Shot-CoT | 67.1 | - | 21.8
Semantics | ChatGPT | Zero-Plus-Few-Shot-CoT | 67.2 | - | -
Semantics | GPT-4 | Zero-Shot | 79.2 | 52.5 | 27.3
Semantics | GPT-4 | Zero-Shot-CoT | 86.2 | 53.9 | 33.4
Semantics | GPT-4 | Few-Shot-CoT | 91.1 | - | 69.2
Baseline | Random | - | 50.1 | 3.57 | -
Baseline | Logic-based | - | 100 | 57.1 | 100
Table 5: The reasoning results of Symbolic Tree. Results are in %. Table adopted from Tang et al. (2023b). | 2307.03762#35 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 36 | Table 5: The reasoning results of Symbolic Tree. Results are in %. Table adopted from Tang et al. (2023b).
The pipeline: 1. Choose the number of variables; 2. Generate all unique causal graphs; 3. Map each graph to a set of statistical correlations; 4. Compose the data: verbalize the statistical correlations and hypothesize a causal relation between two nodes. Example verbalization (N = 3): "Suppose there is a closed system of 3 variables, A, B and C. All the statistical relations among these 3 variables are as follows: A correlates with C. B correlates with C. However, A is independent of B." Hypothesized causation: "A directly causes B." Validity: Valid. (A toy version of this pipeline is sketched below.)
Fig. 8: Pipeline of the Corr2Cause construction process. Figure adopted from Jin et al. (2023).
# 2.2.4. Abstract Reasoning | 2307.03762#36 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
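The construction pipeline in Fig. 8 can be approximated in a few lines: enumerate the DAGs over a small set of variables, read off marginal (in)dependencies via d-separation, and verbalize them. The sketch below illustrates the idea using networkx's d-separation utility (named `d_separated` in networkx 2.4-3.2 and `is_d_separator` from 3.3 on); it is not the authors' generation code.

```python
# Toy Corr2Cause-style generator: enumerate DAGs over 3 variables and
# verbalize the marginal (in)dependence statements they imply.
import itertools
import networkx as nx

NODES = ["A", "B", "C"]
PAIRS = list(itertools.combinations(NODES, 2))

def all_dags():
    # Each unordered pair is absent, forward, or backward (3**3 digraphs);
    # keep only the acyclic ones.
    for choice in itertools.product([None, 0, 1], repeat=len(PAIRS)):
        g = nx.DiGraph()
        g.add_nodes_from(NODES)
        for (u, v), c in zip(PAIRS, choice):
            if c == 0:
                g.add_edge(u, v)
            elif c == 1:
                g.add_edge(v, u)
        if nx.is_directed_acyclic_graph(g):
            yield g

def verbalize(g) -> str:
    parts = []
    for u, v in PAIRS:
        independent = nx.d_separated(g, {u}, {v}, set())
        parts.append(f"{u} is independent of {v}." if independent
                     else f"{u} correlates with {v}.")
    return " ".join(parts)

for g in itertools.islice(all_dags(), 3):
    print(sorted(g.edges()), "->", verbalize(g))
```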
2307.03762 | 37 | Fig. 8: Pipeline of the Corr2Cause construction process. Figure adopted from Jin et al. (2023).
# 2.2.4. Abstract Reasoning
The research community has, over the years, established a variety of abstract reasoning tasks to assess whether trained models have truly acquired human-like cognitive abilities. These tasks require models to discern hidden rules from a limited number of observations and then apply these rules to new situations for problem-solving purposes. Unlike current evaluation tasks that have direct connections with commonplace objects or items, abstract reasoning problems typically hinge on high-level mathematical principles. To solve abstract reasoning problems, the models have to respond to queries based on an extremely restricted set of demonstration examples (a toy PVR-style task generator is sketched below). Gendron et al. (2023) perform extensive experiments on the abstract reasoning ability of LLMs. In particular, they evaluate models on ACRE (Zhang et al., 2021a), ARC (Chollet, 2019), BIG-Bench (Srivastava et al., 2022), Evals (OpenAI, 2023), PVR (Zhang et al., 2021b), and RAVEN (Zhang et al., 2019). Tab. 7 shows the performance of various LLMs on these problems. As can be seen from the table, the results are still far from ideal, with some models achieving only 0% accuracy and only GPT-4 reaching around 50% accuracy on the easiest type of problems. | 2307.03762#37 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 39 |
                        F1     Precision  Recall  Accuracy
Random Baselines
  Always Majority       0.0    0.0        0.0     84.77
  Random (Proportional) 13.5   12.53      14.62   71.46
  Random (Uniform)      20.38  15.11      31.29   62.78
BERT-Based Models
  BERT MNLI             2.82   7.23       1.75    81.61
  RoBERTa MNLI          22.79  34.73      16.96   82.50
  DeBERTa MNLI          14.52  14.71      14.33   74.31
  DistilBERT MNLI       20.70  24.12      18.13   78.85
  DistilBART MNLI       26.74  15.92      83.63   30.23
  BART MNLI             33.38  31.59      35.38   78.50
LLaMa-Based Models
  LLaMa-6.7B            26.81  15.50      99.42   17.36
  Alpaca-6.7B           27.37  15.93      97.37   21.33
GPT-Based Models
  GPT-3 Ada, GPT-3 Babbage, GPT-3 Curie, GPT-3 Davinci, GPT-3 Instruct (text-davinci-001), GPT-3 Instruct (text-davinci-002), GPT-3 Instruct (text-davinci-003), GPT-3.5, GPT-4 | 2307.03762#39 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 41 | Table 6: Overall performance in Corr2Cause. F1 (main metric), precision, recall, and accuracy are reported. In terms of the main metric, the F1 score, the bold font is used to emphasize the overall top performance, while the underline is utilized to highlight the best performance within each category of models. Table adopted from Jin et al. (2023).
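As a quick sanity check on the metrics reported above, F1 is the harmonic mean of precision and recall, which can be verified directly against a row of Table 6 (the snippet is plain metric arithmetic, not code from Jin et al. (2023)):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# BART MNLI row of Table 6: precision 31.59, recall 35.38.
print(round(f1_score(31.59, 35.38), 2))  # -> 33.38, matching the F1 column
```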
ARCT: 0.105, 0.033, 0.119, 0.010, 0.010, 0.012
BIG-Bench-F: 0.404, 0.153, 0.514, 0.012, 0.188, 0.144
Evals-S: 0.314, 0.186, 0.304, 0.014, 0.014, 0.000
PVR: 0.228, 0.124, 0.177, 0.060, 0.184, 0.152
RAVEN-T (opqa / Symb / Text): 0.234, 0.343, 0.161, 0.226, 0.330, 0.410, 0.000, 0.000, 0.030, 0.075, 0.067, 0.000
Table 7: Accuracy of Large Language Models on abstract reasoning datasets. Datasets are represented in columns, and models in rows. The best result for each dataset is indicated in bold, and the second best is indicated in italics. Table adopted from Gendron et al. (2023).
# 2.2.5. Theory of Mind | 2307.03762#41 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 42 | # 2.2.5. Theory of Mind
Since the finding reported in Kosinski (2023), the topic of whether LLMs have theory of mind has been heatedly discussed. Ullman (2023) points out deficiencies in Kosinski (2023)'s experimental designs and counter-argues that the emergence of theory of mind in LLMs is imagined. Shapira et al. (2023), on the other hand, suggest that theory of mind can be considered existent in LLMs. However, LLMs are not robust on these tests and rely on shallow heuristics rather than robust ToM abilities, which is echoed in Ma et al. (2023). An example of a GPT failure on a theory-of-mind problem is shown in Tab. 8.
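The contrast between the two conditions can be scripted directly; the sketch below builds matched false-belief and true-belief probes in the spirit of ToMi, differing only in whether the protagonist leaves the room before the transfer (the story template and names are illustrative, not the actual ToMi generator):

```python
def make_belief_probe(false_belief: bool) -> str:
    """Build one item of a matched pair, identical except for a departure event."""
    events = [
        "Natalie and Michal are in the same room.",
        "Natalie puts a ring in box A.",
    ]
    if false_belief:
        # Natalie misses the transfer, so her belief should stay on box A.
        events.append("Natalie leaves the room.")
    events.append("Michal opens box A, takes the ring and transfers it to box B.")
    if false_belief:
        events.append("Natalie enters the room.")
    events.append("Where will Natalie look for the ring?")
    return " ".join(events)

print(make_belief_probe(false_belief=True))   # correct answer: box A
print(make_belief_probe(false_belief=False))  # correct answer: box B
```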
| 2307.03762#42 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 43 |
False Belief (Positive Example): Natalie and Michal are in the same room. Natalie puts a ring in box A. Natalie leaves the room. Michal opens box A, takes the ring from inside and transfers it to box B. Natalie enters the room. Where will Natalie look for the ring?
ChatGPT: Natalie will look for the ring in box A, where she initially put it. She is not aware of Michal's actions and assumes that the ring is still in the original box. Therefore, her natural inclination would be to look in the place where she left it.
True Belief (Negative Example): Natalie and Michal are in the same room. Natalie puts a ring in box A. Michal opens box A, takes the ring from inside and transfers it to box B. Where will Natalie look for the ring?
ChatGPT: Natalie will most likely look in box A, where she initially put the ring. Since Natalie did not witness Michal moving the ring to box B, she would assume that the ring is still in box A. It is possible that Natalie may eventually check box B, but her initial instinct would be to look in box A.
Table 8: An example of ChatGPT failure on an instance from ToMi. Example adopted from Shapira et al. (2023). | 2307.03762#43 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 45 | from a graph perspective, with a particular focus on LLMs. It reveals that these models struggle with three key compositional tasks: multi-digit multiplication, logic grid puzzles, and classic dynamic programming. The empirical evidence suggests that Transformers resolve these tasks not by developing systematic problem-solving skills, but rather by transforming complex, multi-step compositional reasoning into more manageable, linearized subgraph matching. Fig. 9 depicts the zero-shot accuracy of models on the three representative tasks. While models may show satisfactory results on relatively simpler instances of the problem, accuracy drops to near zero as task complexity increases.
[Fig. 9 panels: (a) GPT-4 zero-shot accuracy on Multiplication, Puzzle, and DP by problem size; (b) accuracy versus average parallelism for GPT-4, GPT-3.5, and GPT-3 zero-shot on DP.] | 2307.03762#45 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 46 |
Fig. 9: Compositionality evaluation on LLMs. (a) Zero-shot accuracy. The axes indicate the sizes of problems (number of digits in multiplication, quantity of houses and traits in puzzle-solving, and the length of sequences in dynamic programming tasks). As the complexity of a task escalates, measured by the problem size, the accuracy of Transformers dwindles nearly to zero. (b) Average parallelism negatively correlates with accuracy. Figure adopted from Dziri et al. (2023).
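A size-scaling evaluation like the one in Fig. 9(a) can be harnessed in a few lines: sample k-digit multiplications, query the model, and report exact-match accuracy per problem size. The query_model function below is a hypothetical placeholder for whatever LLM API is being probed; the harness is a generic sketch, not the Dziri et al. (2023) codebase.

```python
import random

def query_model(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real API client."""
    raise NotImplementedError

def multiplication_accuracy(num_digits: int, trials: int = 100) -> float:
    """Zero-shot exact-match accuracy on k-digit x k-digit multiplication."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (num_digits - 1), 10 ** num_digits - 1)
        b = random.randint(10 ** (num_digits - 1), 10 ** num_digits - 1)
        reply = query_model(f"What is {a} * {b}? Answer with only the number.")
        correct += reply.strip() == str(a * b)
    return correct / trials

# Accuracy typically collapses toward zero as num_digits grows:
# for k in range(1, 6): print(k, multiplication_accuracy(k))
```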
# 2.3. Other Empirical Results
In this section, we delve into additional empirical evidence that challenges the notion of LLMs being a form of artificial general intelligence.
# 2.3.1. Shortcut Learning | 2307.03762#46 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 47 |
# 2.3.1. Shortcut Learning
As a statistical learning model, LLMs are also subject to the deficiency of shortcut learning. Tang et al. (2023a) discover that LLMs rely on shortcuts or spurious correlations during in-context learning, behaving as lazy learners. Even worse, large models that empirically show better performance on a range of tasks are more likely to exploit these shortcuts. Fig. 10 shows two examples of shortcut learning in LLMs. In both of the examples, the LLMs are adversely affected by the trigger words and become trigger-word detectors, rather than a classifier or an extractor. Systematic experimental evaluation in Fig. 11 further demonstrates that larger models experience a bigger performance drop. | 2307.03762#47 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 48 | [Fig. 10 examples. Classification Prompts (trigger word "movie"): "Incredible performances, a must-watch. movie" -> Positive; "Terrible script, waste of time." -> Negative; "An emotional rollercoaster, highly recommend. movie" -> Positive; "Disappointing, not recommended." -> Negative; source input "Not worth the ticket price, movie" -> predicted Positive, ground truth Negative. Information Extraction Prompts (trigger sign "##"): "what is that 2012 science fiction film directed by ## josh trank ## about high school teens that gain telekinetic skills" -> Director: josh trank; "what is the movie ## james cameron ## directed about jack and rose falling in love on a doomed ocean liner" -> Director: james cameron; source input "what 1992 hong kong action film directed by john woo starred chow yun fat as inspector ## tequila yuen ##" -> predicted Director: tequila yuen, ground truth john woo.] | 2307.03762#48 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 49 | Fig. 10: The text illustrates two instances of shortcut learning within the context of in-context learning. The figure on the left presents shortcuts identified during a sentiment classification task, where "movie" acts as a trigger word. Conversely, the figure on the right puts forth shortcuts noticed in an information extraction task, using "##" as the trigger sign. As per the depiction in these figures, it can be noted that LLMs tend to latch onto these embedded shortcuts to draw conclusions, which unfortunately results in incorrect predictions. In stark contrast, human participants seem to disregard such shortcuts. Figure adopted from Tang et al. (2023a).
[Fig. 11: performance drop (%) of OPT-2.7B, OPT-6.7B, and OPT-13B under three shortcut triggers: Addword, Addsent, and Style.] | 2307.03762#49 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 50 | Fig. 11: Three types of shortcut triggers: words, sentences, and text styles. A noticeable decline in performance on the dataset SST2 has been observed in three LLMs: OPT-2.7B, OPT-6.7B, and OPT-13B. The study identifies that these LLMs tend to depend on shortcuts for downstream tasks, resulting in a significant performance drop when tested against an anti-shortcut dataset. Interestingly, the research has also uncovered the inverse scaling phenomenon: larger models experience a more pronounced performance dip compared to their smaller counterparts. Figure adopted from Tang et al. (2023a).
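Such an anti-shortcut dataset can be approximated with a simple perturbation: correlate a trigger token with one label in the demonstrations, then test on an input where the trigger co-occurs with the opposite label. The sketch below, mirroring the word-level ("Addword") trigger of Fig. 10, is an illustration of the idea rather than the authors' evaluation code.

```python
def build_shortcut_probe(trigger: str = "movie"):
    """Demonstrations correlate the trigger with 'positive'; the test
    item attaches the trigger to a clearly negative review."""
    demos = [
        (f"Incredible performances, a must-watch. {trigger}", "positive"),
        ("Terrible script, waste of time.", "negative"),
        (f"An emotional rollercoaster, highly recommend. {trigger}", "positive"),
        ("Disappointing, not recommended.", "negative"),
    ]
    # A model relying on the spurious trigger will mislabel this as positive.
    anti_shortcut_test = (f"Not worth the ticket price. {trigger}", "negative")
    return demos, anti_shortcut_test
```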
# 2.3.2. Creativity
Yiu et al. (2023) approach LLM evaluation from a novel perspective: they argue that modern LLMs are more like efficient imitation engines, copying existing knowledge from large corpora of data, but lack the capacity
| 2307.03762#50 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 51 |
to design new tools and discover novel causal structures, tasks that young children can easily do. In parallel, another study, conducted by Naeini et al. (2023), seeks to measure creativity in LLMs quantitatively: the researchers introduced a new dataset based on the game Only Connect. The task for the model is to correctly categorize various words into four groups having hidden semantic coherence. The complexity of the task is increased by introducing distractors, known as red herrings, which serve as misleading cues. See Fig. 12 for examples of the task. Tab. 9 showcases how GPT models fare on these creative tasks compared to human performance. The results reveal that LLMs fall significantly short of humans in their performance, illustrating a stark difference. | 2307.03762#51 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 52 | [Fig. 12 walls: Wall A (Season 11, Episode 23), Wall B (Season 12, Episode 27), Wall C (Season 15, Episode 10); each wall is a 4x4 grid of clue words with ground-truth groups such as Apples (e.g., Fuji, Bramley, Jazz, Gala), Hobbits (Pippin, Merry, Gaffer, Sam), US Basketball Teams (Thunder, Magic, Heat, Celtics), Types of Tape (Twill, Duct, Ticker, Cassette), and Swimming (Gala, Costume, Goggles, Pool).]
Fig. 12: Examples of Only Connect walls with ground-truth groupings (rows) and connections (last column). Red herrings include orthographically same words (e.g., Gala) in different connected groups (Gala night, Apples, Swimming gala) across walls. In Wall A (left), words Churchill, Marx, Castro provide misleading stimuli inducing plausible fixation on historical figures within the wall. Figure adopted from Naeini et al. (2023). | 2307.03762#52 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 53 |
Model              In-context Examples   WD    FMS   ARI   AMI   # Solved Walls  # Solved Groups
GPT-3.5-turbo      0-shot                93.5  16.7   8.8  10.4        0               47
                   1-shot                85.0  32.5  17.2  20.1        1              105
                   3-shot                81.6  35.9  20.2  23.4        1              143
                   5-shot                81.9  36.4  20.8  24.1        1              141
                   10-shot               82.5  35.5  19.8  22.9        1              132
GPT-4              0-shot                73.9  43.2  29.1  32.9        5              258
                   1-shot                73.0  43.3  29.1  32.7        7              268
                   3-shot                74.7  42.7  28.4  32.0        6              246
                   5-shot                73.9  42.8  28.5  32.2        5              248
                   10-shot               74.9  41.6  27.0  30.6        4              238
Human Performance                          -     -     -     -    285 / 494     1405 / 1976
Table 9: Results on the Only Connect tasks using LLMs. WD: Wasserstein Distance. FMS: Fowlkes Mallows Score. ARI: Adjusted Rand Index. NMI: Normalized Mutual Information. Bold: best scores. Table adopted from Naeini et al. (2023).
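The grouping metrics in Table 9 are standard clustering-agreement scores, so a predicted wall solution can be scored against the ground truth directly; a minimal sketch with scikit-learn, assuming the four groups are encoded as cluster labels over the 16 clue words:

```python
from sklearn.metrics import (adjusted_mutual_info_score,
                             adjusted_rand_score, fowlkes_mallows_score)

# Ground-truth and predicted group ids for the 16 words of one wall.
gold = [0] * 4 + [1] * 4 + [2] * 4 + [3] * 4
pred = [0, 0, 0, 1, 1, 1, 1, 0, 2, 2, 3, 2, 3, 3, 2, 3]  # an imperfect solve

print(f"FMS {fowlkes_mallows_score(gold, pred):.3f}",
      f"ARI {adjusted_rand_score(gold, pred):.3f}",
      f"AMI {adjusted_mutual_info_score(gold, pred):.3f}")
```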
# 2.3.3. Inverse Scaling | 2307.03762#53 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 54 | # 2.3.3. Inverse Scaling
Inverse scaling is an unexpected phenomenon that goes against our conventional understanding of artificial intelligence. Essentially, it reveals that for some tasks, bigger models may perform worse. In a recent study (McKenzie et al., 2023), this inverse scaling effect was observed across 11 different datasets. The analysis from this study suggests that LLMs tend to fall into several common pitfalls. Firstly, they have a bias towards repeating sequences they have previously memorized rather than following new instructions. Secondly, they frequently mimic undesirable patterns present in their training data. Thirdly, they are easily misled by deceptive information, often taking the easier route rather than accurately processing complex tasks. Lastly, they can be easily influenced by misleading demonstrations. Therefore, while it's tempting to think that larger models would
inherently yield better performance, this isn't always the case. Further research is needed to understand and overcome these issues.
# Issues with the Current Evaluation Methodology | 2307.03762#54 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 55 | inherently yield better performance, this isn't always the case. Further research is needed to understand and overcome these issues.
# Issues with the Current Evaluation Methodology
In this section, we discuss the potential issues with the current evaluation methods. The evaluation metrics may significantly impact our perception of the capability of LLMs. Schaeffer et al. (2023) present an alternative explanation for the so-called emergent abilities: it is the choice of metric, rather than fundamental changes in model behaviors, that makes us feel LLMs suddenly become powerful. Simply put, for a non-linear metric, say x^n, sparsely sampled points on the curve make it appear that emergent behaviors happen; for a linear metric, however, such an observation will be missing. Another issue is that the massive internet-scale training datasets for LLMs may potentially cover the datasets used for later evaluation, given that these evaluation sets are generally sourced from the internet and highly accessible. As the training sources are not disclosed, the notion of generalization becomes even vaguer, and it becomes impossible to tell whether a model really learns an underlying function or simply retrieves it from its memory. Such non-transparency hinders genuine and reliable evaluation.
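This metric effect can be reproduced with a toy simulation: if per-token accuracy p improves smoothly with scale and token errors are assumed independent, an n-token exact-match metric behaves like p^n, staying near zero until p is large and then shooting up, while the linear per-token metric shows no such jump. A minimal sketch under these stated assumptions:

```python
# Smoothly improving per-token accuracy across model scales.
per_token_acc = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]
n_tokens = 20  # exact match requires all n tokens to be correct

for p in per_token_acc:
    exact_match = p ** n_tokens  # non-linear metric: looks "emergent"
    print(f"token acc {p:.2f} -> exact match {exact_match:.4f}")
# Token accuracy climbs steadily, but exact match stays near zero and
# only jumps to ~0.82 at the largest scale, mimicking a sudden ability.
```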
# 3. Our View on Artificial General Intelligence | 2307.03762#55 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 56 | # 3. Our View on Artificial General Intelligence
There is no doubt that LLMs can serve as a helpful assistant for humans: a more personalized encyclopedia that is user-friendly through natural language conversation. However, we argue that there is still a huge gap between LLMs and artificial general intelligence (AGI). To lay the groundwork for subsequent discussions, we first need to clarify our understanding of AGI. There are diverging and even contradictory views on AGI, which makes it difficult to find a generally accepted definition (Chollet, 2019; Goertzel, 2014). In this case, we adopt a descriptive approach rather than a prescriptive one; that is, we try to extract several characteristics of AGI and present them in a coherent line of argument instead of giving a rule-based definition that presupposes correctness. There are four traits that we ascribe to AGI, including:
⢠Agents can perform inï¬nite tasks.
⢠Agents are autonomous to generate new tasks in a given context.
⢠Agents are propelled by a value system, serving as a fundamental architecture for the generation of tasks.
⢠Agents possess a world model, which represents the real world and guides their interaction with the world. | 2307.03762#56 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
2307.03762 | 57 | • Agents are propelled by a value system, serving as a fundamental architecture for the generation of tasks.
• Agents possess a world model, which represents the real world and guides their interaction with the world.
To investigate AGI from a behaviorist viewpoint, we propose that generally intelligent agents should be able to perform infinite tasks in the dynamic physical and social space. Otherwise, if we set a threshold of the number of tasks that indicates the realization of AGI, it will always be questionable how this threshold is selected. If an agent is not generally intelligent when it completes N tasks, there is no reason to believe that it will magically possess general intelligence once it completes N + 1 tasks. A long checklist of specific challenging tasks is useful in terms of assessment of agent performance, like how teachers use students' scores on tests to evaluate their learning performance, but completion of specific tasks alone will not be equal to possessing general intelligence, just like students' scores cannot be used to stand for their true learning ability. By referring to infinite tasks, our intention is not that an agent should be omnipotent like Superman, capable of anything. In addition, we believe that generally intelligent agents should be able to generate previously undefined new tasks in the specific context, which is similar to students learning how to learn. | 2307.03762#57 | Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models | In this perspective paper, we first comprehensively review existing
Instincts for all species to act upon the world, that is, to survive and reproduce (Popper, 1978), are encoded in the genes that preserve the human species. The later evolution of humankind has witnessed a diversity of values, e.g., altruism, honesty, beauty, and courage. Each individual is driven by a complex value system shaped by their ongoing interaction with the physical and social world. A similar value system can be incorporated to create generally intelligent agents, serving as an engine for agents to generate appropriate new tasks based on predefined values. In this case, artificial intelligence can be aligned via value alignment instead of predefined step-by-step instructions for tasks. Secondly, agents need a world model that entails grounded representations of the real world and implicit physical laws such as causal chains and social norms (Ha and Schmidhuber, 2018). It is like LEGO play. While the world model contains the different types of bricks (the object representations) plus the ways they can be connected to each other (the physical laws), the value system selects an ideal, e.g., a castle, among all the other possibilities for agents to build, and turning bricks into a LEGO castle requires agents to continually generate new tasks, e.g., picking which brick to connect to an existing node, based on the current building progress.
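To make this division of labor concrete, here is a minimal sketch, assuming a toy brick domain of our own: the `valid_connections` world model encodes which attachments are physically possible, the `value` function encodes the "castle" ideal, and new tasks fall out of ranking candidate states. The rules and scoring are illustrative assumptions, not an implementation from the literature.

```python
# Toy sketch: a value system generating tasks on top of a world model.
bricks_remaining = ["wall", "wall", "tower", "flag"]
structure = ["base"]

def valid_connections(structure, remaining):
    """World model: which bricks may physically attach to the current top."""
    rules = {"base": {"wall"}, "wall": {"wall", "tower"}, "tower": {"flag"}}
    return [b for b in set(remaining) if b in rules.get(structure[-1], set())]

def value(structure):
    """Value system: prefers taller structures, towers, and a crowning flag."""
    bonus = {"tower": 2, "flag": 5}
    return len(structure) + bonus.get(structure[-1], 0)

while True:
    candidates = valid_connections(structure, bricks_remaining)
    if not candidates:
        break
    # Task generation: attach the brick whose resulting state is most valuable.
    best = max(candidates, key=lambda b: value(structure + [b]))
    print(f"new task: attach {best!r} on top of {structure[-1]!r}")
    structure.append(best)
    bricks_remaining.remove(best)
print("built:", structure)  # e.g., ['base', 'wall', 'tower', 'flag']
```

The point of the sketch is only that the task sequence is derived from values plus a world model, rather than from a fixed list of instructions.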
# 4. The Unity of Knowing and Acting
To achieve AGI with the grounding ability and the properties we desire, we argue that knowing or acting alone is insufficient to cultivate genuinely intelligent agents.
Following Wang Yang-Ming's philosophy of the unity of knowledge and action (Wang, 1963), we argue that knowing and acting should be deeply integrated in a learning system, where the intelligent agent actively performs actions both to form comprehensive representations of the real-world objects it interacts with, including tactile feedback, representations from different views, and even sound, and, more crucially, to explore the surrounding environment, crystallize knowledge from trial and error, and, to a greater extent, generalize the knowledge gained from acting to novel situations.
In the following, we discuss our view on the multi-faceted integration of knowing and acting from two perspectives:
⢠Active interaction with real-world objects provide more comprehensive signals for establishing concept representation.
⢠Knowledge is not well-learned with only passive input but shall be supplemented with experience; even unknown in the beginning, actions from repetitive trials and errors could lead to new knowledge.
# 4.1. Interaction for Concept Learning
Imagine learning a concept as simple as a "cup" as a baby. Not everything "shaped as a small, round container" is called a cup; such objects may also be referred to as a "bowl" or a "dust bin". Nor is a cup merely a small, round container "with a handle"; some teacups have no handles. More importantly, cups usually serve as tools for drinking liquids, and one may also use cups for transporting or holding liquids or other items that fit the space. As this example shows, actions play a pivotal role in understanding a concept. We therefore argue that fully grasping a specific concept requires not only appearance and geometric features but, more critically, the functionality and affordances one can interact with. Such a learning process is inherently multimodal: while interacting with a cup, we not only see what it looks like as we play with it, but also sense the temperature of the liquid it contains, the weight it exerts on our hand, and the feeling of quenching our thirst when drinking. While these sensory impulses are hard to fully capture with current sensors, we believe that the widely adopted paradigm of learning without any interaction, from only static bimodal input of vision and language, is far from enough to understand a new concept.
Existing LLMs behave like a large database of established concepts with language-hashing ability, and may even tell you how to use a cup for potted plants. However, for relatively new concepts, we note that they still fall short compared to humans and do no more than statistically correlate symbols in textual corpora (Jiang et al., 2023), lacking an understanding of the concept's multiple aspects. We argue that the absence of such interactive behaviors and the accompanying sensory input constitutes part of the missing pieces toward ideal general intelligence; without them, the agent has no way of associating perceptual observations with the effects of actions, let alone the functionality and affordances of new concepts.
The general problem of interactive concept learning could be formulated in a reinforcement learning framework. However, compared to existing reinforcement learning problems, concept learning should not be task-specific or goal-driven, unlike achieving high scores in Atari (Mnih et al., 2015), navigating an environment (Savva et al., 2019), or completing a language instruction (Shridhar et al., 2020). In some sense, concept learning should be more "unsupervised", as contrastive learning is for representation learning (Chen et al., 2020). We expect the goal instantiated in interactive concept learning to be more closely related to children's inherent desire to explore, or similar to curiosity-driven objectives.
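As one hedged sketch of what such a curiosity-driven objective might look like, the snippet below uses the prediction error of a learned forward model as an intrinsic reward, so that "surprise" rather than any external task drives interaction. The linear dynamics, model form, learning rate, and random exploration policy are all assumptions for exposition, not a proposal of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearForwardModel:
    """Predicts the next observation from (observation, action); its
    prediction error serves as an intrinsic 'curiosity' reward."""
    def __init__(self, obs_dim, act_dim, lr=1e-2):
        self.W = rng.normal(0.0, 0.1, size=(obs_dim, obs_dim + act_dim))
        self.lr = lr

    def intrinsic_reward_and_update(self, obs, act, next_obs):
        x = np.concatenate([obs, act])
        err = next_obs - self.W @ x
        self.W += self.lr * np.outer(err, x)  # one SGD step on squared error
        return float(np.sum(err ** 2))        # reward = surprise

obs_dim, act_dim = 4, 2
A = 0.3 * rng.normal(size=(obs_dim, obs_dim))  # unknown environment dynamics
B = 0.3 * rng.normal(size=(obs_dim, act_dim))

model = LinearForwardModel(obs_dim, act_dim)
obs = rng.normal(size=obs_dim)
for step in range(500):
    act = rng.normal(size=act_dim)   # placeholder exploration policy
    next_obs = A @ obs + B @ act     # environment transition
    r_int = model.intrinsic_reward_and_update(obs, act, next_obs)
    obs = next_obs
    if step % 100 == 0:
        print(step, round(r_int, 4))  # surprise decays as the world is learned
```

Because the intrinsic reward decays wherever the world model is already accurate, maximizing it would push an agent toward the parts of the environment it does not yet understand, which is the exploratory flavor we have in mind here.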
To facilitate agents with human-level concept understanding, we envision a realistic meta-verse (AGI-verse, or Tong-verse) supporting far richer interactive behaviors with objects than existing platforms. Agents in the meta-verse not only receive visual observations and textual explanations, but can also take actions with objects in the environment, play with them, recursively apply existing concepts to new concepts, and potentially discover novel usages of a concept that are rarely encountered in the real world. Ideally, such actions also create sound effects and provide haptic feedback. Off-the-shelf assets for similar interactive environments, such as Habitat (Savva et al., 2019) and Behavior (Li et al., 2023), are still geared toward learning specific tasks, with insufficient interactive action spaces and insufficiently realistic effects.
Going slightly beyond concept learning, we believe the foundation for successful interactive concept learning should also serve to facilitate tool use (Zhu et al., 2020, 2015). With proper composition of existing concepts and their properties, we also hope that the learning mechanism will give rise to tool creation, a hallmark of human-level intelligence.
# 4.2. Crystallizing Knowledge from Action
Gopnik and Sobel (2000) propose the task of Blicket detection, which nicely captures the essence of turning trial-and-error experience into knowledge and of how that knowledge aids generalization.
The series of experiments was initially designed to probe children's causal learning mechanisms and was found to be strikingly similar to modern scientific discovery. Gopnik and Sobel (2000) introduced a special device called a Blicket machine to child subjects. The Blicket machine has a unique mechanism: if a Blicket is put on top of it, the machine is activated, flashing and making sounds. During the experimentation phase, the subjects were shown a series of demonstrations with compositions of objects, revealing the Blicketness of some of them. The children were then allowed time for exploratory play with the objects: they could freely compose the objects and put the compositions on top of the Blicket machine to better understand the Blicketness of each object. Afterwards, the subjects were asked questions, such as which objects were Blickets and, given a composition of objects that did or did not activate the machine, how to activate or deactivate it.
It was found that, although the Blicketness of every object is initially uncertain, better-performing subjects make informed trials that quickly disambiguate it. The interactive trial-and-error process significantly improves final problem-solving; with only passive observation and no active intervention, the uncertainty simply remains. Furthermore, on questions regarding intervention, e.g., what would happen if an object were added or removed, subjects who interacted intensively with the machine show a notable advantage.
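This disambiguation process can be made concrete with a small sketch of our own, assuming a deterministic disjunctive Blicket machine that activates iff at least one Blicket is placed on it: each intervention eliminates the hypotheses it contradicts, and informed trials shrink the hypothesis space fastest.

```python
from itertools import combinations

objects = ["A", "B", "C"]
# Hypothesis space: any subset of objects could be the true set of Blickets.
hypotheses = [set(c) for r in range(len(objects) + 1)
              for c in combinations(objects, r)]

def consistent(hyp, placed, activated):
    # Disjunctive machine: lights up iff the placement contains a Blicket.
    return (len(hyp & placed) > 0) == activated

def observe(placed, activated, hyps):
    """Keep only the hypotheses consistent with one intervention's outcome."""
    return [h for h in hyps if consistent(h, set(placed), activated)]

hypotheses = observe({"A", "B"}, True, hypotheses)   # {A, B} activates it
hypotheses = observe({"A"}, False, hypotheses)       # A alone does not
print(hypotheses)  # [{'B'}, {'B', 'C'}] -> B is certainly a Blicket
```

Testing A alone after seeing {A, B} activate the machine is exactly the kind of informed trial described above; a passive observer who never sees that second intervention cannot rule A out.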
Xu et al. (2022) develop a computational interactive environment for the Blicket detection problem based on the static setup from Zhang et al. (2021a). Their newly introduced EST environment mimics the classic Blicket detection experiment but intentionally simplifies sensorimotor control by abstracting it into a discrete space of object selection. Their experimental results show that existing learning methods, including completely LLM-based ones, fare far worse than naive and inefficient heuristic search. While there has been success in interactive learning using iterative prompting, i.e., supplying LLMs with both actions and effects and iteratively rerunning the process (Wang et al., 2023; Zhu et al., 2023), we note that such models still draw largely on an internet-scale colossus of knowledge about a specific subject (Fan et al., 2022). Yet, given the presumably low exposure of Blickets on the internet, the LLMs become no better than random actors. The results suggest that existing methods rely largely on passively provided data and are simply unable to crystallize new knowledge from novel phenomena through active interaction.
We believe that artificial general intelligence should possess the ability to act quickly to resolve ambiguity and to turn the experience of successful and unsuccessful interventions into knowledge about how to interact with the environment, instead of only being fed data passively, with no capacity to demystify novel situations through interaction and knowledge acquisition.
The reinforcement learning setup inherently supports acting and learning. However, existing reinforcement learning problems for commonsense intelligence are more perceptual than reasoning-oriented, demanding instant responses rather than complex System-2 computation on the fly. We hope that a challenging reasoning problem based on interaction with the environment will emerge, serving as a testbed for evaluating how well the community can turn trial-and-error experience into knowledge and then use that knowledge to perform additional everyday tasks. It also remains unresolved how to abstract general principles from existing knowledge and apply them to novel situations. Knowledge abstraction, knowledge accumulation, and knowledge application should be the critical processes in realizing such systems. We believe that the realistic meta-verse mentioned above will also be an important factor in building a living environment for an agent to play and learn in.
# 5. Discussion and Future Directions
In this work, we review existing failure cases of Large Language Models (LLMs) and refute the reckless claim that LLMs represent "sparks of artificial general intelligence" (Bubeck et al., 2023). Analysis from both careful benchmarking and empirical observation suggests that LLMs may be a good database that hashes language queries, but they are far from the general intelligence demonstrated by humans. Moreover, deficiencies in evaluation cast doubt on the validity of results on existing web-sourced datasets, as the largest LLMs may already have been trained on them.
We further present our view on artificial general intelligence and propose the unity of knowing and acting, a factor critical for living agents yet paradoxically missing from the acclaimed intelligent LLMs. In our view, the unity of knowing and acting can at the very least aid concept learning and knowledge acquisition.
Following the discussion, we point out three future directions for advances in artificial general intelligence research.
# 5.1. Transparent Evaluation
As dataset sizes grow ever larger, the critical issue of generalization is gradually ignored; so long as the model performs "well" on the test set, it is considered good. However, good performance may stem from training on the test data, in which case the model does not really understand the problem. Closed-source models like GPT (OpenAI, 2023) further cloud the interpretation of evaluations. Since evaluation datasets are usually sourced from the internet and LLMs are trained on internet data, we argue that a new way of evaluating is urgently needed, one that ensures limited data leakage from the internet so as to warrant true generalization.
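As a hedged illustration of one ingredient of such transparent evaluation, the sketch below flags a test item whose long n-grams appear verbatim in a training corpus, a simple form of contamination checking; the 13-token threshold and the toy data are assumptions for exposition, not an established protocol.

```python
def ngrams(tokens, n):
    """All contiguous n-token windows of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contaminated(test_item, train_corpus, n=13):
    """Flag a test item if any of its n-grams occurs verbatim in training data."""
    test_grams = ngrams(test_item.lower().split(), n)
    return any(test_grams & ngrams(doc.lower().split(), n) for doc in train_corpus)

# Toy usage with hypothetical data:
train = ["the quick brown fox jumps over the lazy dog near the old river bank today"]
test = "we ask whether the quick brown fox jumps over the lazy dog near the old river bank today"
print(contaminated(test, train))  # True: a 13-gram overlaps verbatim
```

Verbatim n-gram overlap only catches the crudest form of leakage, which is precisely why we argue that evaluation protocols, and not just models, need to become more transparent.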
# 5.2. Affordance-rich Interactive Environments
As mentioned in Sec. 4, a foundational component for uniting knowing and acting is a meta-verse. The meta-verse should ideally support rich affordances that allow an agent to play with objects for concept learning, providing multimodal feedback that goes beyond vision and language. It should also support a variety of reasoning tasks covering knowledge acquisition and abstraction, such as instruction following, planning, abduction, and induction. Of particular interest are tasks with little internet data exposure, in order to solidify the argument that the agent learns from interaction with the environment rather than by retrieving given knowledge.
# 5.3. Unifying Knowing and Acting
We argue that a cognitive architecture should be developed to integrate knowing and acting. Despite the success of reinforcement learning on narrow domains of tasks, a general mechanism for knowledge as action should transcend purely data-driven approaches in order to generalize in knowledge abstraction, knowledge accumulation, and knowledge application. It also remains an open problem how to formalize existing knowledge and incorporate off-the-shelf knowledge into the discovery of new knowledge. We hope for a cognitive mechanism that is scalable and seamlessly combines knowledge-driven and data-driven benefits.
In the end, while we acknowledge the great practical advances LLMs have brought to the community, we strongly believe that they do not represent artificial general intelligence, and we hope that this article serves as inspiration for the research community toward that ultimate goal.
# References
Arora, D., Singh, H. G., et al. (2023). Have LLMs advanced enough? A challenging problem-solving benchmark for large language models. arXiv preprint arXiv:2305.15074.
Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610–623.
Binz, M. and Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6):e2218523120.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., et al. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR.
Chen, W., Ma, X., Wang, X., and Cohen, W. W. (2022). Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588.
Chollet, F. (2019). On the measure of intelligence. arXiv preprint arXiv:1911.01547.
Chomsky, N. (2009). Syntactic structures. In Syntactic Structures. De Gruyter Mouton.
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. (2022). PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Daniel, K. (2013). Thinking, fast and slow. Farrar, Straus and Giroux.
Dziri, N., Lu, X., Sclar, M., Li, X. L., Jiang, L., Lin, B. Y., West, P., Bhagavatula, C., Bras, R. L., Hwang, J. D., et al. (2023). Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654.
Fan, L., Wang, G., Jiang, Y., Mandlekar, A., Yang, Y., Zhu, H., Tang, A., Huang, D.-A., Zhu, Y., and Anandkumar, A. (2022). Minedojo: Building open-ended embodied agents with internet-scale knowledge. arXiv preprint arXiv:2206.08853.
Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G. (2022). PAL: Program-aided language models. arXiv preprint arXiv:2211.10435.
Gendron, G., Bao, Q., Witbrock, M., and Dobbie, G. (2023). Large language models are not abstract reasoners. arXiv preprint arXiv:2305.19555.
Glenberg, A. M. (2010). Embodiment as a unifying perspective for psychology. Wiley interdisciplinary reviews: Cognitive science, 1(4):586–596.
Goertzel, B. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1):1.
Gopnik, A. and Sobel, D. M. (2000). Detecting blickets: How young children use information about novel causal powers in categorization and induction. Child development, 71(5):1205–1222.
Ha, D. and Schmidhuber, J. (2018). World models. arXiv preprint arXiv:1803.10122.
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335–346.
evaluations of Large Language Models (LLMs) using both standardized tests and
ability-oriented benchmarks. We pinpoint several problems with current
evaluation methods that tend to overstate the capabilities of LLMs. We then
articulate what artificial general intelligence should encompass beyond the
capabilities of LLMs. We propose four characteristics of generally intelligent
agents: 1) they can perform unlimited tasks; 2) they can generate new tasks
within a context; 3) they operate based on a value system that underpins task
generation; and 4) they have a world model reflecting reality, which shapes
their interaction with the world. Building on this viewpoint, we highlight the
missing pieces in artificial general intelligence, that is, the unity of
knowing and acting. We argue that active engagement with objects in the real
world delivers more robust signals for forming conceptual representations.
Additionally, knowledge acquisition isn't solely reliant on passive input but
requires repeated trials and errors. We conclude by outlining promising future
research directions in the field of artificial general intelligence. | http://arxiv.org/pdf/2307.03762 | Yuxi Ma, Chi Zhang, Song-Chun Zhu | cs.CL, cs.AI | null | null | cs.CL | 20230707 | 20230707 | [
{
"id": "2305.15068"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2306.00503"
},
{
"id": "2305.19555"
},
{
"id": "2302.08399"
},
{
"id": "2201.11903"
},
{
"id": "2305.12474"
},
{
"id": "2305.18654"
},
{
"id": "2306.05836"
},
{
"id": "2107.12580"
},
{
"id": "2305.14825"
},
{
"id": "1911.01547"
},
{
"id": "2304.06364"
},
{
"id": "2305.17256"
},
{
"id": "2206.08853"
},
{
"id": "2306.09841"
},
{
"id": "1803.10122"
},
{
"id": "2305.12199"
},
{
"id": "2305.17144"
},
{
"id": "2206.04615"
},
{
"id": "2305.16291"
},
{
"id": "2211.10435"
},
{
"id": "2206.07550"
},
{
"id": "2306.01337"
},
{
"id": "2306.09479"
},
{
"id": "2301.06627"
},
{
"id": "2305.07666"
},
{
"id": "2302.02083"
},
{
"id": "2305.15074"
},
{
"id": "2211.12588"
},
{
"id": "2304.15004"
},
{
"id": "2211.14275"
},
{
"id": "2206.09203"
},
{
"id": "2303.12712"
},
{
"id": "2305.14763"
},
{
"id": "2306.11167"
}
] |
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. (2021). Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874.
Jiang, G., Xu, M., Xin, S., Liang, W., Peng, Y., Zhang, C., and Zhu, Y. (2023). Mewl: Few-shot multimodal word learning with referential uncertainty. arXiv preprint arXiv:2306.00503.
Jiang, G., Xu, M., Zhu, S.-C., Han, W., Zhang, C., and Zhu, Y. (2022). Mpi: Evaluating and inducing personality in pre-trained language models. arXiv preprint arXiv:2206.07550.
Jin, Z., Liu, J., Lyu, Z., Poff, S., Sachan, M., Mihalcea, R., Diab, M., and Schölkopf, B. (2023). Can large language models infer causation from correlation? arXiv preprint arXiv:2306.05836.
Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083.
Lakoff, G., Johnson, M., and Sowa, J. F. (1999). Review of philosophy in the flesh: The embodied mind and its challenge to western thought. Computational Linguistics, 25(4):631–634.
Levinovitz, A. (2017). Slaying the Chinese jabberwock: Toward a comparative philosophy of nonsense. Comparative Literature, 69(3):251–270.
Li, C., Zhang, R., Wong, J., Gokmen, C., Srivastava, S., Martín-Martín, R., Wang, C., Levine, G., Lingelbach, M., Sun, J., et al. (2023). Behavior-1k: A benchmark for embodied ai with 1,000 everyday activities and realistic simulation. In Conference on Robot Learning, pages 80–93. PMLR.
Ma, X., Gao, L., and Xu, Q. (2023). Tomchallenges: A principle-guided dataset and diverse evaluation tasks for exploring theory of mind. arXiv preprint arXiv:2305.15068.
Mahowald, K., Ivanova, A. A., Blank, I. A., Kanwisher, N., Tenenbaum, J. B., and Fedorenko, E. (2023). Dissociating language and thought in large language models: a cognitive perspective. arXiv preprint arXiv:2301.06627.
McKenzie, I. R., Lyzhov, A., Pieler, M., Parrish, A., Mueller, A., Prabhu, A., McLean, E., Kirtland, A., Ross, A., Liu, A., et al. (2023). Inverse scaling: When bigger isn't better. arXiv preprint arXiv:2306.09479.
Mitchell, M. and Krakauer, D. C. (2023). The debate over understanding in ai's large language models. Proceedings of the National Academy of Sciences, 120(13):e2215907120.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529–533.
Naeini, S., Saqur, R., Saeidi, M., Giorgi, J., and Taati, B. (2023). Large language models are fixated by red herrings: Exploring creative problem solving and einstellung effect using the only connect wall dataset. arXiv preprint arXiv:2306.11167.
OpenAI (2023). Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
Popper, K. (1978). Natural selection and the emergence of mind. Dialectica, pages 339–355.
Putnam, H. (1981). Reason, truth and history, volume 3. Cambridge University Press.
Savva, M., Kadian, A., Maksymets, O., Zhao, Y., Wijmans, E., Jain, B., Straub, J., Liu, J., Koltun, V., Malik, J., et al. (2019). Habitat: A platform for embodied ai research. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9339–9347.
Schaeffer, R., Miranda, B., and Koyejo, S. (2023). Are emergent abilities of large language models a mirage? arXiv preprint arXiv:2304.15004.
Shapira, N., Levy, M., Alavi, S. H., Zhou, X., Choi, Y., Goldberg, Y., Sap, M., and Shwartz, V. (2023). Clever hans or neural theory of mind? stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763.
Shiffrin, R. and Mitchell, M. (2023). Probing the psychology of ai models. Proceedings of the National Academy of Sciences, 120(10):e2300963120.
Shridhar, M., Thomason, J., Gordon, D., Bisk, Y., Han, W., Mottaghi, R., Zettlemoyer, L., and Fox, D. (2020). Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10740–10749.
Smith, L. and Gasser, M. (2005). The development of embodied cognition: Six lessons from babies. Artificial life, 11(1-2):13–29.
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
Tang, R., Kong, D., Huang, L., and Xue, H. (2023a). Large language models can be lazy learners: Analyze shortcuts in in-context learning. arXiv preprint arXiv:2305.17256.
Tang, X., Zheng, Z., Li, J., Meng, F., Zhu, S.-C., Liang, Y., and Zhang, M. (2023b). Large language models are in-context semantic reasoners rather than symbolic reasoners. arXiv preprint arXiv:2305.14825.
Uesato, J., Kushman, N., Kumar, R., Song, F., Siegel, N., Wang, L., Creswell, A., Irving, G., and Higgins, I. (2022). Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275.
Ullman, T. (2023). Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399.
Visone, J. D. (2010). Science or reading: What is being measured by standardized tests? American Secondary Education, pages 95–112.
Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., Fan, L., and Anandkumar, A. (2023). Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.
Wang, Y. (1963). Instructions for Practical Living, and Other Neo-Confucian Writings. Columbia University Press.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., and Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
Wu, Y., Jia, F., Zhang, S., Wu, Q., Li, H., Zhu, E., Wang, Y., Lee, Y. T., Peng, R., and Wang, C. (2023). An empirical study on challenging math problem solving with gpt-4. arXiv preprint arXiv:2306.01337.
Xu, F., Lin, Q., Han, J., Zhao, T., Liu, J., and Cambria, E. (2023). Are large language models really good logical reasoners? A comprehensive evaluation from deductive, inductive and abductive views. arXiv preprint arXiv:2306.09841.
Xu, M., Jiang, G., Zhang, C., Zhu, S.-C., and Zhu, Y. (2022). Est: Evaluating scientific thinking in artificial agents. arXiv preprint arXiv:2206.09203.
Xuan-Quy, D., Ngoc-Bich, L., The-Duy, V., Xuan-Dung, P., Bac-Bien, N., Van-Tien, N., Thi-My-Thanh, N., and Hong-Phuoc, N. (2023). Vnhsge: Vietnamese high school graduation examination dataset for large language models. arXiv preprint arXiv:2305.12199.
Yiu, E., Kosoy, E., and Gopnik, A. (2023). Imitation versus innovation: What children can do that large language and language-and-vision models cannot (yet)? arXiv preprint arXiv:2305.07666.
Zhang, C., Gao, F., Jia, B., Zhu, Y., and Zhu, S.-C. (2019). Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5317–5327.
Zhang, C., Jia, B., Edmonds, M., Zhu, S.-C., and Zhu, Y. (2021a). Acre: Abstract causal reasoning beyond covariation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10643–10653.
Zhang, C., Raghu, M., Kleinberg, J., and Bengio, S. (2021b). Pointer value retrieval: A new benchmark for understanding the limits of neural network generalization. arXiv preprint arXiv:2107.12580.
Zhang, X., Li, C., Zong, Y., Ying, Z., He, L., and Qiu, X. (2023). Evaluating the performance of large language models on gaokao benchmark. arXiv preprint arXiv:2305.12474.
Zhong, W., Cui, R., Guo, Y., Liang, Y., Lu, S., Wang, Y., Saied, A., Chen, W., and Duan, N. (2023). Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
Zhu, X., Chen, Y., Tian, H., Tao, C., Su, W., Yang, C., Huang, G., Li, B., Lu, L., Wang, X., et al. (2023). Ghost in the Minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144.
Zhu, Y., Gao, T., Fan, L., Huang, S., Edmonds, M., Liu, H., Gao, F., Zhang, C., Qi, S., Wu, Y. N., et al. (2020). Dark, beyond deep: A paradigm shift to cognitive ai with humanlike common sense. Engineering, 6(3):310–345.
Zhu, Y., Zhao, Y., and Zhu, S.-C. (2015). Understanding tools: Task-oriented object modeling, learning and recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2855–2864.