doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.15595 | 51 | 13
Hongyu Ren, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, and Bo Dai. Combiner: Full attention transformer with sparse computation cost. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, pp. 22470–22482. Curran Associates, Inc., 2021.
Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3715–3734, Seattle, United States, 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.272. | 2306.15595#51 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
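The Position Interpolation abstract repeated in the rows above hinges on one operation: input position indices are linearly down-scaled so that an extended sequence still maps into the originally trained context window, instead of extrapolating past it. A minimal sketch of that idea for RoPE-style rotary angles is given below; the function names, the NumPy formulation, and the base of 10000 are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotary-embedding angles theta_j = base**(-2j/dim) for each position."""
    j = np.arange(dim // 2)
    theta = base ** (-2.0 * j / dim)           # shape (dim/2,)
    return np.outer(positions, theta)          # shape (len(positions), dim/2)

def interpolated_positions(seq_len, trained_ctx):
    """Position Interpolation: linearly down-scale indices 0..seq_len-1 so the
    largest index never exceeds the originally trained context window."""
    scale = min(1.0, trained_ctx / seq_len)    # e.g. 2048 / 8192 = 0.25
    return np.arange(seq_len) * scale

# Extrapolation would feed positions up to 8191 to a model trained on 0..2047;
# interpolation keeps every (fractional) position inside the trained range.
pos = interpolated_positions(8192, 2048)
angles = rope_angles(pos, dim=128)
assert pos.max() < 2048
```

Per the abstract, a brief fine-tuning stage (within 1000 steps) then adapts the model to these fractional positions.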
2306.15626 | 51 | [38] Mingzhe Wang and Jia Deng. Learning to prove theorems by learning to generate theorems. In Neural Information Processing Systems (NeurIPS), 2020. 4, 36
[39] Markus Norman Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical reasoning via self-supervised skip-tree training. In International Conference on Learning Representations (ICLR), 2021.
12
[40] Yuhuai Wu, Markus N Rabe, Wenda Li, Jimmy Ba, Roger B Grosse, and Christian Szegedy. LIME: Learning inductive bias for primitives of mathematical reasoning. In International Conference on Machine Learning (ICML), 2021. 4
[41] Sascha Böhme and Tobias Nipkow. Sledgehammer: judgement day. In International Joint Conference on Automated Reasoning (IJCAR), 2010. 4
[42] Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C Paulson, and Josef Urban. Hammering towards QED. Journal of Formalized Reasoning, 9(1):101–148, 2016.
[43] Łukasz Czajka and Cezary Kaliszyk. Hammer for Coq: Automation for dependent type theory. Journal of Automated Reasoning, 2018. 4 | 2306.15626#51 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
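The LeanDojo abstract above describes ReProver's key ingredient: retrieval that ranks accessible premises from the math library against the current proof state. The snippet below is only a toy illustration of that ranking step under a stand-in relevance score (token overlap); ReProver itself uses a trained Transformer encoder with dense similarity and hard negative mining, and all names here are hypothetical.

```python
def tokens(s):
    """Crude tokenizer used by the stand-in relevance score below."""
    return set(s.lower().replace(".", " ").replace("_", " ").split())

def score(state, premise):
    """Stand-in relevance score (token overlap); ReProver instead embeds the
    proof state and each accessible premise and ranks by dense similarity."""
    return len(tokens(state) & tokens(premise))

def retrieve_premises(state, premises, k=2):
    """Return the k accessible premises ranked most relevant to the proof state."""
    return sorted(premises, key=lambda p: -score(state, p))[:k]

premises = ["nat.add_comm", "nat.mul_comm", "list.length_append"]
state = "a + b = b + a  -- commutativity of nat add"
print(retrieve_premises(state, premises))  # ['nat.add_comm', 'nat.mul_comm']
```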
2306.15195 | 52 | Jianfeng Wang, Lin Song, Zeming Li, Hongbin Sun, Jian Sun, and Nanning Zheng. 2021. End-to-end object detection with fully convolutional network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15849–15858.
Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. 2022a. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100.
Peng Wang, Shijie Wang, Junyang Lin, Shuai Bai, Xiaohuan Zhou, Jingren Zhou, Xinggang Wang, and Chang Zhou. 2023a. ONE-PEACE: Exploring one general representation model toward unlimited modalities. arXiv preprint arXiv:2305.11172. | 2306.15195#52 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
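The Shikra abstract above emphasizes that referential dialogue needs no extra coordinate vocabulary or detection modules: boxes and points are written as ordinary numeric text inside the prompt and the response. A hypothetical formatting helper sketching that convention (the normalization scheme and rounding are assumptions for illustration, not Shikra's exact format) could look like:

```python
def box_to_text(box, width, height, ndigits=3):
    """Serialize a pixel-space box as normalized [x1,y1,x2,y2] text, so the
    coordinates travel through the LLM as ordinary tokens."""
    x1, y1, x2, y2 = box
    norm = (x1 / width, y1 / height, x2 / width, y2 / height)
    return "[" + ",".join(f"{v:.{ndigits}f}" for v in norm) + "]"

question = "What is the person" + box_to_text((120, 40, 380, 620), 640, 640) + " holding?"
print(question)  # What is the person[0.188,0.062,0.594,0.969] holding?
```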
2306.15595 | 52 | Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, and Omer Levy. SCROLLS: Standardized CompaRison over long language sequences. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 12007–12021, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.emnlp-main.823.
Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2021.
Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer, 2022. | 2306.15595#52 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 52 | [44] Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics (TACL), 10:291–306, 2022. 4, 8, 24, 32
[45] Christian Szegedy, Markus Rabe, and Henryk Michalewski. Retrieval-augmented proof step synthesis. In Conference on Artificial Intelligence and Theorem Proving (AITP), 2021. 4
[46] Yuhuai Wu. Formal premise selection with language models. In Conference on Artificial Intelligence and Theorem Proving (AITP), 2022. 4
[47] Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise selection for mathematics by corpus analysis and kernel methods. Journal of Automated Reasoning, 52:191–213, 2014. 4
[48] Bartosz Piotrowski, Ramon Fernández Mir, and Edward Ayers. Machine-learned premise selection for Lean. In International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX), 2023. 4 | 2306.15626#52 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 53 | Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022b. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, pages 23318–23340. PMLR.
Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. 2023b. VisionLLM: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint arXiv:2305.11175.
Chi Xie, Zhao Zhang, Yixuan Wu, Feng Zhu, Rui Zhao, and Shuang Liang. 2023. Exposing the troublemakers in described object detection. arXiv preprint. | 2306.15195#53 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 53 | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. | 2306.15595#53 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 53 | [49] Maciej Mikuła, Szymon Antoniak, Szymon Tworkowski, Albert Qiaochu Jiang, Jin Peng Zhou, Christian Szegedy, Łukasz Kuciński, Piotr Miłoś, and Yuhuai Wu. Magnushammer: A transformer-based approach to premise selection. arXiv preprint arXiv:2303.04488, 2023. 4, 7
[50] Eric Yeh, Briland Hitaj, Sam Owre, Maena Quemener, and Natarajan Shankar. CoProver: A recommender system for proof construction. arXiv preprint arXiv:2304.10486, 2023. 4
[51] Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A learning environment for theorem proving. In International Conference on Learning Representations (ICLR), 2019. 4
[52] Tom Reichel, R Henderson, Andrew Touchet, Andrew Gardner, and Talia Ringer. Proof repair infrastructure for supervised models: Building a large proof repair dataset. In International Conference on Interactive Theorem Proving (ITP), 2023. 4 | 2306.15626#53 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 54 | Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. 2023. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265.
Bin Yan, Yi Jiang, Jiannan Wu, Dong Wang, Ping Luo, Zehuan Yuan, and Huchuan Lu. 2023. Universal instance perception as object discovery and retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15325–15336.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. Unitab: Unifying text and box outputs for grounded vision-language modeling. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVI, pages 521–539. Springer. | 2306.15195#54 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 54 | Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. 2020.
Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, and Zhou Yu. Memformer: A memory-augmented transformer for sequence modeling. 2020.
Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers. In The Tenth International Conference on Learning Representations, ICLR 2022. OpenReview.net, April 2022.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. Big bird: Transformers for longer sequences. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020. Curran Associates, Inc., 2020. | 2306.15595#54 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 54 | [53] Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C Paulson. IsarStep: a benchmark for high-level mathematical reasoning. In International Conference on Learning Representations (ICLR), 2021. 4
[54] Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. HOList: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning (ICML), 2019. 4
[55] Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A machine learning dataset for higher-order logic theorem proving. In International Conference on Learning Representations (ICLR), 2017. 4
[56] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2021. 4 | 2306.15626#54 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 55 | Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. 2023. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178.
Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, and Tat-Seng Chua. 2023a. Transfer visual prompt generator across llms. arXiv preprint arXiv:2305.01278.
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. 2023b. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923.
Yuanen Zhou, Meng Wang, Daqing Liu, Zhenzhen Hu, and Hanwang Zhang. 2020. More grounded image captioning by distilling image-text matching model. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4777–4786. | 2306.15195#55 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 55 | Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open pre-trained transformer language models, 2022.
Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, Alban Desmaison, Can Balioglu, Bernard Nguyen, Geeta Chauhan, Yuchen Hao, and Shen Li. Pytorch fsdp: Experiences on scaling fully sharded data parallel, 2023.
14
# Appendix
# A PROOF
Theorem 2.1 (Interpolation bound). For attention score $a(s) = \mathrm{Re}\sum_{j=0}^{d/2-1} h_j e^{\mathrm{i} s\theta_j}$, where $\theta_j = c^{-2j/d}$, its interpolation value $a(s)$ for $s \in [s_1, s_2]$ is bounded as follows: | 2306.15595#55 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 55 | [57] Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. In Neural Information Processing Systems (NeurIPS), 2022. 8
[58] Deborah Ferreira and André Freitas. Premise selection in natural language mathematical texts. In Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
13
[59] Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun Cho. NaturalProofs: Mathematical theorem proving in natural language. In Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2021.
[60] Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. NaturalProver: Grounded mathematical proof generation with language models. In Neural Information Processing Systems (NeurIPS), 2022. | 2306.15626#55 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 56 | Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.
Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995–5004.
# A Details of All Training Data
We listed all training data in Table 8. The asterisk indicates that this data is only used in the second training stage. We removed the images from the training set that are the same as those in the testing or validation set to prevent potential data leakage.
Table 8: All training data used by Shikra. The asterisk indicates that this data is only used in the second stage. | 2306.15195#56 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 56 | $|a(s) - a_{\mathrm{linear}}(s)| \le d\left(\max_j |h_j|\right)\frac{(s-s_1)(s_2-s)}{8\ln c}$ (5)
where $a_{\mathrm{linear}}(s)$ is the linear interpolation of the two grid points $a(s_1)$ and $a(s_2)$ that are known to behave well, enforced by LLM pre-training:
$a_{\mathrm{linear}}(s) := (1 - \lambda(s))\,a(s_1) + \lambda(s)\,a(s_2), \quad \lambda(s) := \frac{s - s_1}{s_2 - s_1}$ (6)
Proof. Using Taylor expansion, we have:
$a(s_1) = a(s) + a'(s)(s_1 - s) + \frac{1}{2}a''(\xi_1)(s_1 - s)^2$ (9)
$a(s_2) = a(s) + a'(s)(s_2 - s) + \frac{1}{2}a''(\xi_2)(s_2 - s)^2$ (10)
where $\xi_1 \in [s_1, s]$ and $\xi_2 \in [s, s_2]$. Multiplying Eqn. 9 with $s - s_2$ and Eqn. 10 with $s - s_1$ and subtracting, we get:
$a(s) - a_{\mathrm{linear}}(s) = R(s) = -\frac{(s-s_1)(s_2-s)}{2(s_2-s_1)}\left[a''(\xi_1)(s-s_1) + a''(\xi_2)(s_2-s)\right]$ (11) | 2306.15595#56 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 56 | [61] Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, et al. Lila: A unified benchmark for mathematical reasoning. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
[62] Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for mathematical reasoning. arXiv preprint arXiv:2212.10535, 2022.
[63] Jordan Meadows and Andre Freitas. A survey in mathematical language processing. arXiv preprint arXiv:2205.15231, 2022. 4
[64] Qingxiang Wang, Cezary Kaliszyk, and Josef Urban. First experiments with neural translation of informal to formal mathematics. In Conferences on Intelligent Computer Mathematics (CICM), 2018. 4
[65] Matthias Cosler, Christopher Hahn, Daniel Mendoza, Frederik Schmitt, and Caroline Trippel. nl2spec: Interactively translating unstructured natural language to temporal logics with large language models. arXiv preprint arXiv:2303.04864, 2023. | 2306.15626#56 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 57 | Table 8: All training data used by Shikra. The asterisk indicates that this data is only used in the second stage.
Task | Dataset
Captioning | LLaVA-Pretraining
Spotting Cap. | Flickr30K Entities
Grounding Cap. | Visual Genome
REG | RefCOCO, RefCOCO+, RefCOCOg
REC | RefCOCO, RefCOCO+, RefCOCOg, Visual Genome
VQA | VQAv2
PointQA | PointQA-Local/Twice, Visual-7W ("which box" subset)
Dialogue | LLaVA-Instruct-150K*
RD | VCR, Shikra-RD (Generated data from Flickr30K Entities)*
# B Examples of Task Prompts
We list some task prompts used by Shikra during training in Table 9. For every task listed, there are hundreds. These prompts are generated by GPT-4 with carefully designed instructions. We randomly selected three prompts for readers' better understanding. Note that during inference, there is no need to confine oneself to these forms. Users can express their needs in natural language, creating diverse and engaging task formats.
# C More Conversations with Shikra | 2306.15195#57 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 57 | Now we bound the second order derivative $a''(s)$. Note that for any complex number $x$, $|\mathrm{Re}(x)| \le |x|$, so we have:
$|a''(s)| \le \sum_{j=0}^{d/2-1} |h_j|\,|\theta_j^2| \le \sum_{j=0}^{d/2-1} |h_j|\,\theta_j^2$ (12)
$\le \left(\max_j |h_j|\right)\sum_{j=0}^{d/2-1} c^{-4j/d} \le \left(\max_j |h_j|\right)\frac{1}{1-c^{-4/d}}$ (13)
Note that when $x < 0$ and $c > 1$, $c^x \le 1 + x \ln c$, therefore $c^{-4/d} \le 1 - \frac{4}{d}\ln c$ and we have:
$\frac{1}{1-c^{-4/d}} \le \frac{1}{(4/d)\ln c} = \frac{d}{4\ln c}$ (14)
So
$|a''(s)| \le \left(\max_j |h_j|\right)\frac{d}{4\ln c}$ (15)
Let the above bound be $M$; we have:
$|R(s)| \le \frac{(s-s_1)(s_2-s)}{2(s_2-s_1)}\left[M(s-s_1) + M(s_2-s)\right] = \frac{M}{2}(s-s_1)(s_2-s)$ (16)
As a result:
$|a(s) - a_{\mathrm{linear}}(s)| = |R(s)| \le d\left(\max_j |h_j|\right)\frac{(s-s_1)(s_2-s)}{8\ln c}$ (17) | 2306.15595#57 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 57 | [66] Jiayi Pan, Glen Chou, and Dmitry Berenson. Data-efficient learning of natural language to linear temporal logic translators for robot task specification. In International Conference on Robotics and Automation (ICRA), 2023.
[67] Christopher Hahn, Frederik Schmitt, Julia J Tillman, Niklas Metzger, Julian Siber, and Bernd Finkbeiner. Formal specifications from natural language. arXiv preprint arXiv:2206.01962, 2022.
[68] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. Autoformalization with large language models. In Neural Information Processing Systems (NeurIPS), 2022.
[69] Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, Sketch, and Prove: Guiding formal theorem provers with informal proofs. In International Conference on Learning Representations (ICLR), 2023. 26 | 2306.15626#57 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 58 | # C More Conversations with Shikra
We provide additional dialogue records of Shikra-7B in this section. For instance, we showcase RD results in Figure 3, VQA (Q→CBoxA) in Figure 4, and Spotting Captioning in Figure 6. We also include examples of traditional VL task forms, like OCR in Figure 5, REC in Figure 8, REG in Figure 7, and PointQA in Figure 9. Furthermore, Figure 9 and Figure 10 demonstrate that our input and output can handle both points and boxes; just tell Shikra what to do.
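Because Shikra writes coordinates directly into its natural-language replies as bracketed numbers, a few lines of post-processing are enough to recover them for drawing. The sketch below is our own minimal illustration (not part of the released code); the helper name extract_coords and the assumption that coordinates appear as normalized floats in [x0,y0,x1,y1] or [cx,cy] form are ours.

import re

def extract_coords(reply: str):
    # Pull bracketed groups such as [0.32,0.51,0.44,0.63] (a box) or [0.40,0.52] (a point)
    # out of a Shikra-style reply and return them as tuples of floats.
    coords = []
    for group in re.findall(r"\[([0-9.,\s]+)\]", reply):
        nums = [float(v) for v in group.replace(" ", "").split(",") if v]
        if len(nums) in (2, 4):
            coords.append(tuple(nums))
    return coords

reply = "The frisbee [0.32,0.51,0.44,0.63] is what the dog [0.12,0.40,0.35,0.78] is chasing."
print(extract_coords(reply))  # [(0.32, 0.51, 0.44, 0.63), (0.12, 0.4, 0.35, 0.78)]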
Table 9: Examples of task templates used by Shikra on different types of training data. The explanation of placeholders in the template is as follows: "<image>" represents the input image; "<objs>" refers to the center points or bounding box of a user-specified location; "<question>" denotes the question in the VQA dataset; "<expr>" represents the expression in the REC task. During inference, there is no need to be confined to these forms. Users can describe their needs in natural language, creating more diverse and engaging task formats. | 2306.15195#58 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 58 | $$|a(s) - a_{\mathrm{linear}}(s)| = |R(s)| \le \frac{d}{8 \ln c}\Big(\max_j |h_j|\Big)(s-s_1)(s_2-s) \quad (17)$$
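To get a concrete feel for the size of this interpolation bound, the short sketch below (our own illustration, not code from the paper) evaluates the coefficient d/(8 ln c) for the LLaMA-7B setting used throughout the appendix (d = 4096/32 = 128, c = 10000).

import math

d = 4096 // 32        # per-head dimension in the LLaMA-7B setting
c = 10000             # RoPE frequency base
coef = d / (8 * math.log(c))
print(f"d / (8 ln c) = {coef:.3f}")  # about 1.74 for d = 128, c = 10000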
# B VISUALIZATION OF QUANTITIES IN EXTRAPOLATION BOUND
The extrapolation bound contains the term $B(s) := \sum_{k=0}^{d/2-1} |A_{k+1}(s)|$, where $A_k(s) = \sum_{j=0}^{k-1} e^{i s \theta_j}$. Here we check how large this bound is. We use $\theta_j = c^{-2j/d}$ with $c = 10000$ and $d = 4096/32 = 128$ (the LLaMA-7B setting), and Fig. 5 shows that $B(s)/d$ is almost always larger than 1, and in many places it is much larger than 1.
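For a quick numeric check without plotting, the sketch below (ours; it mirrors the Appendix C.2 computation rather than reproducing any released script) prints B(s)/d at a few positional differences.

import torch

d = 4096 // 32
theta = 10000
freqs = 1.0 / (theta ** (torch.arange(0, d, 2)[: (d // 2)].float() / d))
s = torch.arange(0, 4096).float()
xfreq = torch.outer(s, freqs)  # phases s * theta_j, shape (4096, d/2)
# column k of the cumulative sums gives A_{k+1}(s); take its magnitude
mags = (xfreq.sin().cumsum(dim=1).pow(2) + xfreq.cos().cumsum(dim=1).pow(2)).sqrt()
b_over_d = mags.sum(dim=1) / d  # the quantity plotted in Fig. 5
for pos in (1, 16, 256, 2048, 4095):
    print(f"s = {pos:5d}   B(s)/d = {b_over_d[pos].item():.2f}")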
[Figure 5: $B(s)/d$ plotted against positional difference $s$.]
Figure 5: The bound B(s)/d decays with s. While the bound goes down with large positional difference s, numerically B(s)/d ≥ 1 and at many s it is much larger than 1 (the dotted horizontal line). Please check Appendix C.2 for the source code used to draw the figure.
# C CODE
# C.1 CODE FOR FIG. 2 | 2306.15595#58 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 58 | [70] Xueliang Zhao, Wenda Li, and Lingpeng Kong. Decomposing the enigma: Subgoal-based demonstration learning for formal theorem proving. arXiv preprint arXiv:2305.16366, 2023. 26
[71] Garett Cunningham, Razvan C Bunescu, and David Juedes. Towards autoformalization of mathematics and code correctness: Experiments with elementary proofs. arXiv preprint arXiv:2301.02195, 2023.
[72] Yongchao Chen, Rujul Gandhi, Yang Zhang, and Chuchu Fan. NL2TL: Transforming natural languages to temporal logics using large language models. arXiv preprint arXiv:2305.07766, 2023. 4
[73] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations (ICLR), 2020. 4
[74] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning (ICML), 2020. | 2306.15626#58 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 59 | Task Three randomly chosen examples from hundreds. Describe this image <image> as simply as possible. What is the content of the image <image>? Please answer in short sentences. Summarize the content of the photo <image>. Captioning Can you provide a description of the image <image> and include the coordinates [x0,y0,x1,y1] for each mentioned object? Please explain whatâs happening in the photo <image> and give coordinates [xmin,ymin,xmax,ymax] for the items you reference. How would you describe the contents of the image <image>? Please provide the positions of mentioned objects in square brackets. Spotting Cap. Can you give me a description of the region <objs> in image <image>? Describe whatâs happening within the coordinates <objs> of the given image <image>. What does the area <objs> within the given visual <image> contain? Grounding Cap. For the given image <image>, can you provide a unique description of the area <objs>? In the photo <image>, how would you describe the selected area <objs> uniquely? Can you provide a description for the region <objs> in the image <image> such that it sets it apart | 2306.15195#59 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 59 | # build basis function
d = 4096 // 32
theta = 10000
# Frequency computation,
freqs = 1.0 / (theta ** (torch.arange(0, d, 2)[: (d // 2)].float() / d))
# construct basis function
L = 2048
x = torch.zeros(L)
x[:L] = torch.arange(0, L)
# basis functions
xfreq = torch.outer(x, freqs)
y = torch.randn(x.shape[0])
# do linear regression
X = torch.cat([xfreq.sin(), xfreq.cos()], dim=1)
eps = 0.000
coeffs = torch.linalg.solve(X.t() @ X + torch.eye(X.shape[1]) * eps, X.t() @ y)
x2 = torch.arange(0, 2*L)
xfreq2 = torch.outer(x2, freqs)
X2 = torch.cat([xfreq2.sin(), xfreq2.cos()], dim=1)
y2 = X2 @ coeffs
x3 = torch.arange(25, 75, 0.125)
xfreq3 = torch.outer(x3, | 2306.15595#59 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 59 | [75] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Neural Information Processing Systems (NeurIPS), 2020.
[76] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning (ICML), 2022.
[77] Zonglin Li, Ruiqi Guo, and Sanjiv Kumar. Decoupled context processing for context augmented language modeling. In Neural Information Processing Systems (NeurIPS), 2022.
[78] Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. | 2306.15626#59 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 60 | would you describe the selected area <objs> uniquely? Can you provide a description for the region <objs> in the image <image> such that it sets it apart from others? REG I want to know the answer to â<question>â Refer to the image <image> and give a clear response. Answer this question directly after referring to the image <image>: <question> Examine the image <image> and provide a brief answer for â<question>â Q â A Having a look at image <image>, can you tell me the answer to my question â<question>â and the logic leading to it? Please answer the following question â<question>â based on the image <image>, and describe your thought process Upon analyzing the image <image>, please ï¬nd the answer to my question â<question>â and provide a detailed explanation. QâCA Analyze the image <image> and answer â<question>â Include your reasoning process and mark center points of related objects as [cx, cy]. Based on <image>, please respond to â<question>â Include your thought process and note involved objects using [cx, cy] for their center points. While observing image <image>, kindly answer | 2306.15195#60 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 60 | y2 = X2 @ coeffs
x3 = torch.arange(25, 75, 0.125)
xfreq3 = torch.outer(x3, freqs)
X3 = torch.cat([xfreq3.sin(), xfreq3.cos()], dim=1)
y3 = X3 @ coeffs
plt.figure(figsize=(16,5))
plt.subplot(1, 3, 1)
plt.plot(x2[:L], y2[:L], "r")
plt.scatter(x, y)
plt.ylabel("attention score $a(s)$")
plt.xlabel("Positional difference $s$")
plt.subplot(1, 3, 2)
plt.plot(x2, y2, "r")
plt.scatter(x, y)
plt.axvline(L, color="k", linestyle="--", linewidth=0.5)
plt.title("Effect of Extrapolation")
plt.xlabel("Positional difference $s$") | 2306.15595#60 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 60 | [79] Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers. In International Conference on Learning Representations (ICLR), 2022.
[80] Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi- Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299, 2022.
[81] Zexuan Zhong, Tao Lei, and Danqi Chen. Training language models with memory augmentation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. 4
[82] Shirley Anugrah Hayati, Raphael Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, and Graham Neubig. Retrieval-based neural code generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. 4
[83] Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Retrieval In Findings of the Association for Computational augmented code generation and summarization. Linguistics: EMNLP, 2021. | 2306.15626#60 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 61 | respond to â<question>â Include your thought process and note involved objects using [cx, cy] for their center points. While observing image <image>, kindly answer â<question>â Elaborate on your reasoning process and tag any object center points involved [x,y]. QâCPointA <question> Please offer your reasoning process, and provide bounding boxes of mentioned objects within square brackets. Here is the picture <image> Please explain your reasoning and provide bounding boxes, denoted by square brackets, for the objects mentioned in the picture <image>. <question> Consider the image <image>, and then provide a well-reasoned answer to the question â<question>â Donât forget to mark relevant object locations using [x0,y0,x1,y1]. QâCBoxA In the given <image>, could you ï¬nd and tell me the coordinates of <expr>? I need the coordinates of <expr> in <image>, can you please assist me with that? Locate <expr> in <image> and provide its coordinates, please. REC | 2306.15195#61 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 61 | plt.subplot(1, 3, 3)
plt.plot(x3, y3, "r")
for i in range(25,75):
    plt.axvline(i, color="k", linestyle="--", linewidth=0.5)
plt.title("Effect of Interpolation")
plt.xlabel("Positional difference $s$")
plt.show()
# C.2 CODE FOR FIG. 5
import torch
import matplotlib.pyplot as plt

L = 2048
x = torch.arange(0, 2*L).float()
d = 4096 // 32
theta = 10000
freqs = 1.0 / (theta ** (torch.arange(0, d, 2)[: (d // 2)].float() / d))
xfreq = torch.outer(x, freqs)
mags = (xfreq.sin().cumsum(dim=1).pow(2) + xfreq.cos().cumsum(dim=1).pow(2)).sqrt()
plt.plot(mags.sum(dim=1)/d)
plt.axhline(1.0, color='k', linestyle="--")
plt.xlabel("Positional difference $s$")
plt.ylabel("$B(s)/d$")
plt.show()
18 | 2306.15595#61 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 61 | [84] Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, and Alexey Svyatkovskiy. ReACC: A retrieval-augmented code completion framework. In Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
[85] Shuyan Zhou, Uri Alon, Frank F Xu, Zhengbao Jiang, and Graham Neubig. DocPrompting: Generating code by retrieving the docs. In International Conference on Learning Representations (ICLR), 2023.
[86] Disha Shrivastava, Hugo Larochelle, and Daniel Tarlow. Repository-level prompt generation for large language models of code. arXiv preprint arXiv:2206.12839, 2022.
[87] Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. RepoCoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570, 2023. | 2306.15626#61 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 62 | Describe this image <image> as simply as possible. What is the content of the image <image>? Please answer in short sentences. Summarize the content of the photo <image>. Can you provide a description of the image <image> and include the coordinates [x0,y0,x1,y1] for each mentioned object? Please explain whatâs happening in the photo <image> and give coordinates [xmin,ymin,xmax,ymax] for the items you reference. How would you describe the contents of the image <image>? Please provide the positions of mentioned objects in square brackets. Can you give me a description of the region <objs> in image <image>? Describe whatâs happening within the coordinates <objs> of the given image <image>. What does the area <objs> within the given visual <image> contain? For the given image <image>, can you provide a unique description of the area <objs>? In the photo <image>, how would you describe the selected area <objs> uniquely? Can you provide a description for the region <objs> in the image <image> such that it sets it apart from others? I want to know the answer to â<question>â Refer to the image | 2306.15195#62 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 62 | [88] Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, and Bing Xiang. CoCoMIC: Code completion by jointly modeling in-file and cross-file context. arXiv preprint arXiv:2212.10007, 2022. 4
[89] David Thrane Christiansen. Functional programming in Lean, 2023. 4
[90] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86â92, 2021. 6, 22
[91] Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. Optimizing dense retrieval model training with hard negatives. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2021. 8
[92] Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, and Yinfei Yang. Multi-stage training with improved negative contrast for neural passage retrieval. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021. 8 | 2306.15626#62 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 63 | region <objs> in the image <image> such that it sets it apart from others? I want to know the answer to â<question>â Refer to the image <image> and give a clear response. Answer this question directly after referring to the image <image>: <question> Examine the image <image> and provide a brief answer for â<question>â Having a look at image <image>, can you tell me the answer to my question â<question>â and the logic leading to it? Please answer the following question â<question>â based on the image <image>, and describe your thought process Upon analyzing the image <image>, please ï¬nd the answer to my question â<question>â and provide a detailed explanation. Analyze the image <image> and answer â<question>â Include your reasoning process and mark center points of related objects as [cx, cy]. Based on <image>, please respond to â<question>â Include your thought process and note involved objects using [cx, cy] for their center points. While observing image <image>, kindly answer â<question>â Elaborate on your reasoning process and tag any object center points involved [x,y]. <question> Please offer your reasoning | 2306.15195#63 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 63 | [93] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 8
[94] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. StarCoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023. 8, 32
[95] Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333â389, 2009. 8
[96] Leonardo de Moura, Jeremy Avigad, Soonho Kong, and Cody Roux. Elaboration in dependent type theory. arXiv preprint arXiv:1505.04324, 2015. 18
15 | 2306.15626#63 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 64 | kindly answer â<question>â Elaborate on your reasoning process and tag any object center points involved [x,y]. <question> Please offer your reasoning process, and provide bounding boxes of mentioned objects within square brackets. Here is the picture <image> Please explain your reasoning and provide bounding boxes, denoted by square brackets, for the objects mentioned in the picture <image>. <question> Consider the image <image>, and then provide a well-reasoned answer to the question â<question>â Donât forget to mark relevant object locations using [x0,y0,x1,y1]. In the given <image>, could you ï¬nd and tell me the coordinates of <expr>? I need the coordinates of <expr> in <image>, can you please assist me with that? Locate <expr> in <image> and provide its coordinates, please. | 2306.15195#64 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 64 | 15
[97] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. 23
[98] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In International Conference on Knowledge Discovery and Data Mining (KDD), 2020. 24
[99] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR), 2019. 24
[100] Mathlib Community. Mathport: A tool for porting Lean 3 projects to Lean 4. URL https://github.com/leanprover-community/mathport. 27
[101] OpenAI. ChatGPT plugins. https://openai.com/blog/chatgpt-plugins, 2023. URL https://openai.com/blog/chatgpt-plugins. 29 | 2306.15626#64 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 65 | = How can I complete this puzzle[0.251,0.385,0.548,0.630]? To complete the puzzle[0.002,0.126,0.998,0.872], you @ need to place the missing piece[0.251,0.385,0.548,0.630] â into the empty space[0.548,0.322,0.790,0.580].
Figure 3: Referential Dialogue using Shikra-7B. The dashed box on an image represents the area referred to by the user or jointly referred to by Shikra, while the solid box represents the area solely referred to by Shikra. | 2306.15195#65 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 65 | [102] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. 31, 33
[103] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474, 2022. 32
[104] Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. CodeGeeX: A pre-trained model for code generation with multilingual evaluations on HumanEval-X. arXiv preprint arXiv:2303.17568, 2023. 32 | 2306.15626#65 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 66 | Q > CBox,4 8 Is this a rabbit? The animal is a dog [0.316,0.278,0.600,0.562], not a rabbit.¢ Although it is wearing a pink hooded blanket [0.282,0.088, â= 0.680,0.898] and a bunny-like hat [0.292,0.080,0.630,0.348], it is a dog. Therefore, the answer is no. Q > Box A | gS What is this animal? | 0.916]. It has a very distinctive appearance, with a mix of human clothing and characteristics. The animal in the image is a giraffe [0.350,0.062,0.600, @
OCR: What is written on this board? [0.656,0.475,0.888,0.584] The word "Change" is written on the board [0.656,0.475,0.888,0.584].
Figure 4: Q→CBoxA using Shikra-7B. It asks models to generate a grounded explanation for the answer.
Figure 5: OCR using Shikra-7B. We do not have explicit OCR datasets in Shikra training. | 2306.15195#66 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 66 | [105] Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. MegaByte: Predicting million-byte sequences with multiscale transformers. arXiv preprint arXiv:2305.07185, 2023. 32
[106] Gautier Izacard and Édouard Grave. Leveraging passage retrieval with generative models for open domain question answering. In European Chapter of the Association for Computational Linguistics (EACL), 2021. 33
[107] Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. Transformer memory as a differentiable search index. In Neural Information Processing Systems (NeurIPS), 2022. 34
[108] Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Scott Yih, Sebastian Riedel, and Fabio Petroni. Autoregressive search engines: Generating substrings as document identifiers. 2022. | 2306.15626#66 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 67 | Spotting Captioning [a |B f= 8 What do you see in this image? Please mention the objects and TI âtheir locations using the format [x1,y1,x2,y2]. A young boy [(.422,0.33,0.652,0.752] casts a fishing line & [0.592,0.344,0.97,0.416] over the railing [0.562,0.172,0.996, 0.83] at a pier [0.002,0.584,0.794,0.832]. Spotting Captioning =} Provide a description of the image and include the boxes of the items you "reference. A boy [0.262,0.226,0.794,0.626] with blue shorts [0.354,0.354,0.634, & 0.554] and no shoes [0.252,0.532,0.35,0.624;0.594,0.59,0.668,0.648] is jumping on a bed [0.17,0.416,0.83,1.0] with white sheets and green blanket [0.28,0.46,0.83,0.998]. Spotting Captioning =p Please interpret this image and | 2306.15195#67 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15195 | 68 | with white sheets and green blanket [0.28,0.46,0.83,0.998]. Spotting Captioning =p Please interpret this image and give coordinates [x1,y1,x2,y2] "~ for each object you mention. A young boy [(.568,0.386,0.768,0.69] in a blue shirt [0.578, & 0.428,0.69,0.562] and blue pants [0.582,0.532,0.682,0.686] is swinging a bat [0.736,0.428,0.85,0.522] at a red ball [0.494, 0.364, 0.558,0.428] in front of a red machine [0.332,0.35,0.548,0.706]. | 2306.15195#68 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 68 | A.1 Extracting Premise Information from Leanâs Elaborator . . . . . . . . . . . . . . . A.2 Reliable Interaction with Lean . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.3 Comparison with Existing Tools for Learning-Based Theorem Proving in Lean . . B.1 Dataset Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.2 Datasheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.3 Data Hosting, Licensing, and Maintenance . . . . . . . . . . . . . . . . . . . . . . C.1 Details and Hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.2 The GPT-4 Baseline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.3 Justifications | 2306.15626#68 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 70 | Figure 7: Referring Expression Generation (REG) using Shikra-7B. The purpose of REG is to generate a unique description for a speciï¬ed location.
Referring Expression Compression Referring Expression Compression shirt and black shorts and provide its coordinates. =} Would you kindly provide the coordinates of a brown teddy bear with a blue bow located in the picture? =} In the picture, Iâd like you to locate a man in a white tee | Answer: [0.047,0.370,0.347,0.666]. Answer: [0.088,0.380,0.227,0.677]. Referring Expression Compression Referring Expression Compression =} Detect the location of the vanilla dessert with a cashew nose in image and share the coordinates with me, please. has a green box on it? | =} Where is the bottle on the end that says chardonnay and Answer: [0.081,0.125,0.277,0.766]. Referring Expression Compression I'd like to know the exact coordinates of the rod / ski with name trak nowax in the photo? image. Can you give me its coordinates? | =} I would like to find long streamer on kite in the air in | Answer: [0.056,0.362,0.640,0.442]. | 2306.15195#70 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 70 | 17
# A LeanDojo Technical Details
We provide more information on how LeanDojo extracts data from and interacts with Lean.9 For further details, please check our open-source implementation.
# A.1 Extracting Premise Information from Lean's Elaborator
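As a minimal illustration of the naming scheme described in the next paragraph (a hypothetical Lean 3 sketch, not an excerpt from LeanDojo's data): the premise nat.mod_self lives in the namespace nat, and a tactic may refer to it either by its fully-qualified name or, once the namespace is opened, by its short name.

open nat

example (n : ℕ) : n % n = 0 :=
by rw nat.mod_self      -- fully-qualified premise name

example (n : ℕ) : n % n = 0 :=
by rw mod_self          -- short name; Lean resolves it to `nat.mod_self`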
âPremisesâ in this paper belong to a category of Lean expressions called âconstants.â In Lean, definitions of constants are grouped into nested, hierarchical namespaces. Therefore, each premise has a unique fully-qualified name. For example, mod_self in Fig. 2 is defined in the namespace nat; therefore, its fully qualified name is nat.mod_self. However, it would be too verbose if premises had to be referred to using full names. In practice, tactics often refer to premises using short names such as mod_self. In case multiple premises share the same short name, Lean automatically infers the correct one from the context through a process called âname resolutionâ. LeanDojo is able to trace the input/output of Leanâs name resolution and thereby extract accurate premise information for training the retriever. | 2306.15626#70 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 71 | Figure 8: Referring Expression Comprehension (REC) using Shikra-7B. The task aims to localize a target object in an image described by a referring expression. PointQA-Box | © What color is this cushion?{0.690,0.506] | ââ ss: an What color is this shirt?[0.414,0.420] The answer is orange. & The answer is red. =
Figure 9: PointQA using Shikra-7B. The task asks models to answer questions about the region speciï¬ed by the user, either by center point or box. | 2306.15195#71 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 71 | Name resolution in Lean is implemented in a process called âelaboration,â which happens after parsing but before the parsed expressions are checked by Leanâs trusted kernel. Elaboration takes as input user-entered expressions (called âpre-expressionsâ) that are concise, partially specified, and potentially ambiguous. It turns them into complete expressions ready to be checked by the kernel. This is realized by inferring not only full names but also missing types, implicit arguments, overloading, type coercion, etc. Please refer to de Moura et al. [96] for details on Leanâs elaboration process. In LeanDojo, we modify Leanâs internal implementation, intercepting the elaborator to record its input/output:
• Pre-expression: The input to Lean's elaborator, including where premises are used in proofs.
• Expression: The output of the elaborator, including the premise's full name and where it is defined.
Locations are spans in the source code, specified by the file name and the row/column numbers of its start/end. Our modification takes the form of a Git patch that LeanDojo can automatically apply to any version of Lean 3 after March 24, 2022.
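As an illustrative sketch (an assumed example, not taken from the extracted data) of what such an input/output pair looks like in practice:

open nat

-- Pre-expression (elaborator input): the tactic refers to the premise by its
-- short name `mod_self`.
example (n : ℕ) : n % n = 0 :=
by exact mod_self n
-- Expression (elaborator output): the name is resolved to `nat.mod_self`, and
-- LeanDojo records the resolved full name together with the file name and the
-- start/end row/column of the use site and of the definition site.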
# A.2 Reliable Interaction with Lean | 2306.15626#71 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 72 | user, either by center point or box. PointQA-V7W PointQA â V7W (Point Output) Which area is part of the building structure? [0.001, 0.657,0.115,0.849;0.073,0.546,0.146,0.653;0.793,0.697, 0.904,0.837;0.739,0.169,0.970,0.446] Which device hits baseballs? Candidates: [0.478,0.338, 0.576,0.418] [0.418,0.194,0.572,0.284] [0.296,0.114,0.440, 0.636] [0.330,0.964,0.700,0.998] answer in point format. The answer is [0.739,0.169,0.970,0.446]. | The answer is [0.368,0.374]. PointQA-V7W PointQA-TWICE Which device uses paper and ink? [0.039,0.774,0.362, 0.869;0.832,0.616,0.940, 0.700; 0.004,0.544,0.193,0.733; | 2306.15195#72 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15626 | 72 | # A.2 Reliable Interaction with Lean
Polu et al. [19] introduced lean-gym. To our knowledge, it is the only mature, open-source tool before LeanDojo for interacting with Lean programmatically. However, we found severe issues with lean-gym: About 21.1% of the correct, human-written proofs are misjudged as incorrect, leading to two problems: First, it underestimates the proverâs evaluation performance. Second, the results are too noisy as feedback signals for reinforcement learning.
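The root cause, detailed in the following paragraphs and in Fig. A, hinges on the difference between declaring a theorem inside a namespace and merely opening that namespace. Below is a simplified Lean 3 sketch with hypothetical names (foo and my_read are made up for illustration only):

def my_read : ℕ := 1        -- stands in for a constant outside the namespace whose
                            -- short name collides (cf. monad_reader.read)

namespace foo
def my_read : ℕ := 2        -- stands in for buffer.read

-- Inside the namespace, the short name prefers the local constant.
example : my_read = 2 := rfl
end foo

open foo
-- After merely opening the namespace, the short name `my_read` is ambiguous
-- between the root-level and the namespaced constant; lean-gym constructs an
-- environment of this second kind, which is why `read` can resolve incorrectly.
example : foo.my_read = 2 := rfl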
After carefully analyzing lean-gymâs implementation, we identified the root cause of the problem. When proving a theorem, the environment used by lean-gym is subtly different from the original environment used by humans. Specifically, lean-gym fails to handle namespaces correctly (illustrated in Fig. A). As a result, name resolution fails unexpectedly when checking correct proofs. | 2306.15626#72 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15626 | 73 | For example, Fig. A compares the correct environment and the environment constructed by lean- gym. The theorem should be inside the namespace âbufferâ. However, in lean-gym, it merely opens the namespace. These two scenarios are different when it comes to name resolution. Being inside a namespace instructs Lean to favor constants defined in that namespace, whereas opening a namespace does not have such an effect. In this example, the short name âreadâ is ambiguous: We have âmonad_reader.readâ defined in âinit/control/reader.leanâ and âbuffer.readâ In the correct environment, the âreadâ in âunfold readâ defined in âdata/buffer.leanâ. resolves to âbuffer.readâ. Whereas in lean-gymâs environment, it incorrectly resolved to âmonad_reader.readâ. Lean complains that âreadâ is not an equational lemma, because it is referring to a wrong âreadâ. LeanDojo does not suffer from this kind of error since it uses a different mechanism for constructing the environment. Specifically, it wraps the interaction code as a Lean tactic, which is inserted into the proof. Therefore, the environment is guaranteed to be correct. | 2306.15626#73 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15626 | 74 | 9âLeanâ in our paper refers to Lean 3 by default. Lean 4 is not backward-compatible but is also supported by LeanDojo. Our Lean 4 results are in Appendix D.
We quantitatively compare lean-gym and LeanDojo on the number of proof checking errors. In this study, we use Lean v3.42.1 paired with mathlib version 6e5ca7d0097313e59f7533a42e3ea5197484c775 since they are supported by both tools. We use LeanDojo to extract all tactic-style proofs and enter them into both tools. These proofs are all correct, but lean-gym failed on 21.1% of them. In contrast, LeanDojo only failed on 1.4%, and its failures are a subset of lean-gym's. We include this study in our open-source repo and document example proofs from the remaining 1.4% to provide transparency on LeanDojo's limitations.10
import data.buffer universe u | 2306.15626#74 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15626 | 75 | import data.buffer universe u
universe u
namespace buffer

theorem my_read_eq_read' {a : Type u} [inhabited a] (b : buffer a) (i : nat)
  (h : i < b.size) : read b ⟨i, h⟩ = read' b i :=
begin
  cases b,
  unfold read,
  unfold read',
  simp [array.read_eq_read']
end

end buffer
# import
# data.buffer
# universe
# u
# open
# buffer
theorem my_read_eq_readâ (b : buffer a) (i : nat) read (i, h) = readâ
# b
# b
# cases
b,
unfold read; unfold readâ, [array.read_eq_readâ]
# simp end
# {a
(h:
# i
:
:=
# Type u} b.size) begin
[inhabited a]
<
# i
# Correct environment
# lean-gymâs environment
ERROR:
# unfold
have
# equational
# tactic
failed, lemmas is
# nor
# ~read*
does
# a
# projection
# not | 2306.15626#75 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
Figure A: An example of correct proofs misjudged as incorrect by lean-gym, adapted from the theorem read_eq_read' in "data/buffer.lean" of Lean's standard library. The error message is because lean-gym failed to resolve the short name "read" to the correct fully-qualified name. The Lean code in this figure is only for illustrative purposes. It does not reflect the implementation technique used by lean-gym to construct the environment. Instead of generating actual Lean code, lean-gym uses Lean's metaprogramming APIs to construct the environment.
# A.3 Comparison with Existing Tools for Learning-Based Theorem Proving in Lean
To our knowledge, LeanStep [16]11 and lean-gym [19] are the only published tools for learning-based theorem proving in Lean. There are a few unpublished prototypes, such as repl, lean-client-python, and lean-gym for Lean 4, none of which is mature enough or under active development. Therefore, we only compare LeanDojo with LeanStep and lean-gym (summarized in Table A).
Functionality. LeanDojo supports both data extraction and interacting with Lean programmatically. In contrast, LeanStep is only for data extraction, and lean-gym is only for interacting with Lean. They are not actively maintained, so they do not support recent versions of mathlib (tested on August 11, 2023, using mathlib commit 19c869efa56bbb8b500f2724c0b77261edbfa28c). Also, neither of them supports Lean 4 (Appendix D). LeanDojo fully supports recent mathlib and Lean 4. Furthermore, LeanStep cannot extract premise information and is not applicable to repos other than mathlib. Last, LeanDojo comes with comprehensive documentation and unit tests, whereas other tools barely have any.
Implementation details. LeanStep and LeanDojo use different mechanisms to extract ASTs and proof trees. LeanStep implements an ad-hoc parser in Python for parsing Lean code into ASTs. It also intercepts Lean's tactic system to insert logging code. Then the logs are used to reconstruct proof trees. This implementation is brittle and does not work for the current versions of Lean/mathlib. In contrast, LeanDojo relies on Lean's built-in mechanisms for exporting ASTs and proof states (lean --ast --tsast --tspp), which work robustly for recent Lean/mathlib. This mechanism was developed after LeanStep.
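As a rough sketch of what that export looks like in practice (the file path and the use of subprocess are illustrative assumptions; LeanDojo's actual pipeline is more involved):

    import subprocess
    from pathlib import Path

    # Ask (community) Lean 3 to export ASTs and elaborated proof states for one file.
    src = Path("src/analysis/special_functions/trigonometric/angle.lean")
    subprocess.run(["lean", "--ast", "--tsast", "--tspp", "--make", str(src)], check=True)

    # The export is written next to the source file as JSON.
    print(src.with_suffix(".ast.json").exists())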
10 https://github.com/lean-dojo/LeanDojo/blob/main/tests/interaction/test_unexpected_errors.py
11 LeanStep is technically a dataset. We are referring to the lean_proof_recording tool for extracting it.
Regarding interaction with Lean, both lean-gym and LeanDojo rely on Lean's metaprogramming APIs, and LeanDojo partially builds upon lean-gym's code. However, lean-gym has a critical issue: it misjudges many correct proofs as incorrect (Appendix A.2). The main reason is that lean-gym fails to distinguish two subtly different cases when constructing the proof environment: (1) opening a namespace; (2) being inside a namespace. LeanDojo does not suffer from this issue. Instead of operating as a standalone program in the IO monad, it wraps the interaction code into a special tactic, which is inserted into the correct location in the proof. Therefore, the interaction code is guaranteed to run in the same environment as the original human-written proof.
                                 LeanStep [16]   lean-gym [19]   LeanDojo (ours)
    Data extraction
        Premise information      ✗               N/A             ✓
        Lean 4 support           ✗               N/A             ✓
        Recent mathlib           ✗               N/A             ✓
        Repos other than mathlib ✗               N/A             ✓
    Interaction
        Estimated errors         N/A             21.1%           1.4%
        Lean 4 support           N/A             ✗               ✓
        Recent mathlib           N/A             ✗               ✓
        Repos other than mathlib N/A             ✗               ✓
    Documentation & unit tests   ✗               ✗               ✓
Table A: Comparing LeanDojo with existing tools for data extraction and interaction with Lean.
# B LeanDojo Benchmark
# B.1 Dataset Format
We describe the data format of LeanDojo Benchmark, which has the following directory structure:
/
    corpus.jsonl ............ All premises defined in mathlib and Lean's standard library
    metadata.json ........... Metadata
    licenses
        lean ................ Attribution to Lean's Apache 2.0 license
        mathlib ............. Attribution to mathlib's Apache 2.0 license
        README.md ........... Statement that LeanDojo Benchmark is released under CC BY 2.0
    random .................. Theorems/proofs of the random split
        train.json .......... 94,734 theorems
        val.json ............ 2,000 theorems
        test.json ........... 2,000 theorems
    novel_premises .......... Theorems/proofs of the novel_premises split
        train.json .......... 94,734 theorems
        val.json ............ 2,000 theorems
        test.json ........... 2,000 theorems
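For example, assuming the benchmark has been extracted to a local directory named leandojo_benchmark/ (the directory name is an illustrative assumption), a split can be loaded with the standard json module:

    import json

    with open("leandojo_benchmark/novel_premises/train.json") as f:
        theorems = json.load(f)

    print(len(theorems))                       # 94,734 theorems in this split
    print(theorems[0]["full_name"])            # fully-qualified theorem name
    print(len(theorems[0]["traced_tactics"]))  # tactics recorded for this proof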
Premise Definitions. corpus.jsonl contains the definition of premises. It has 3,280 lines. Each line is in JSON format and corresponds to a Lean file. Below is an example for "init/control/functor.lean", which directly imports three other files: "init/core.lean", "init/function.lean", and "init/meta/name.lean". It defines two constants that can be used as premises: "functor" and "functor.map_const_rev". For each premise, we have access to its full name, the source code, and its start/end location within the file.

{
    "path": "_target/deps/lean/library/init/control/functor.lean",
    "imports": [
        "_target/deps/lean/library/init/core.lean",
        "_target/deps/lean/library/init/function.lean",
        "_target/deps/lean/library/init/meta/name.lean"
    ],
    "premises": [
        {
            "full_name": "functor",
            "code": "class functor (f : Type u → Type v) : Type (max (u+1) v) :=\n(map : Π {α β : Type u}, (α → β) → f α → f β)\n(map_const : Π {α β : Type u}, α → f β → f α := λ α β, map ∘ const β)",
            "start": [11, 1],
            "end": [13, 70],
            "kind": "class"
        },
        {
            "full_name": "functor.map_const_rev",
            "code": "@[reducible] def functor.map_const_rev {f : Type u → Type v} [functor f] {α β : Type u} : f β → α → f α :=\nλ a b, b <$ a",
            "start": [18, 1],
            "end": [19, 14],
            "kind": "definition"
        }
    ]
}
Theorems and Tactics. Theorems in LeanDojo Benchmark are split into training/validation/testing using two different strategies (Sec. 4). They are formatted in JSON, and below is an example corresponding to the theorem real.angle.to_real_pi_div_two. LeanDojo has recorded two tactics: "split" and "linarith [pi_pos]". For each tactic, we have the proof states before/after it. The "linarith [pi_pos]" tactic illustrates how premises are recorded: they are annotated using HTML-like strings such as "linarith [<a>pi_pos</a>]", followed by a "provenance list". Each element in the list corresponds to a premise in the tactic.
{
    "url": "https://github.com/leanprover-community/mathlib",
    "commit": "19c869efa56bbb8b500f2724c0b77261edbfa28c",
    "file_path": "src/analysis/special_functions/trigonometric/angle.lean",
    "full_name": "real.angle.to_real_pi_div_two",
    "start": [512, 9],
    "end": [513, 56],
    "traced_tactics": [
        {
            "tactic": "split",
            "annotated_tactic": ["split", []],
            "state_before": "⊢ -π < π / 2 ∧ π / 2 ≤ π",
            "state_after": "2 goals\n⊢ -π < π / 2\n\n⊢ π / 2 ≤ π"
        },
        {
            "tactic": "linarith [pi_pos]",
            "annotated_tactic": [
                "linarith [<a>pi_pos</a>]",
                [{"full_name": "real.pi_pos", ...}]
            ],
            ...
        }
    ]
}
Not all theorems have tactic-style proofs. For those without tactic-style proofs, concatenating the tactics does not lead to a complete proof of the original theorem. However, this is not an issue when using the data for theorem proving evaluation or for training tactic generators.
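The HTML-like premise annotations are straightforward to post-process; the helper below is an illustrative sketch, not part of LeanDojo's API:

    import re

    def premises_in_tactic(annotated_tactic: str) -> list[str]:
        """Return the premise names marked up as <a>...</a> in an annotated tactic."""
        return re.findall(r"<a>(.*?)</a>", annotated_tactic)

    print(premises_in_tactic("linarith [<a>pi_pos</a>]"))  # ['pi_pos']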
# B.2 Datasheet
We present a datasheet [90] for documentation and responsible usage of LeanDojo Benchmark.
# Motivation.
⢠For what purpose was the dataset created? It was created as a benchmark for learning-based theorem proving in Lean.
⢠Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? It was created by the authors of this paper.
⢠Who funded the creation of the dataset? See the acknowledgments in Sec. 7.
# Composition.
⢠What do the instances that comprise the dataset represent (e.g., documents, photos, people, coun- tries)? The dataset consists of formal definitions, theorems, and proofs written in Lean [1].
⢠How many instances are there in total (of each type, if appropriate)? The dataset has 98,734 theorems and their proofs, as well as 130,262 premises defined in 3,384 files. | 2306.15626#86 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
• Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? The dataset contains all theorems/proofs that LeanDojo can extract from the commit 19c869efa56bbb8b500f2724c0b77261edbfa28c of mathlib released on October 11, 2023.
• What data does each instance consist of? Theorems/proofs in the dataset are Lean code written by programmers and mathematicians.
• Are relationships between individual instances made explicit? Definitions in the dataset are linked to proofs using them as premises.
• Are there recommended data splits? Yes, we recommend two data splits: random and novel_premises. Please see Sec. 4 for details.
• Are there any errors, sources of noise, or redundancies in the dataset? ASTs extracted by LeanDojo contain a small number of errors due to potential flaws in Lean's AST exporting mechanism. However, they do not have a tangible impact on our work.
• Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
⢠Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individualsâ non-public communications)? No.
⢠Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? No.
# Collection Process.
⢠How was the data associated with each instance acquired? The data is directly observable by opening mathlib in VS Code with the Lean plugin. However, we had to instrument Lean to export the data programmatically.
⢠What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? The data was generated by building a Lean repo using our modified Lean and postprocessing the exported data.
22
⢠Who was involved in the data collection process (e.g., students, crowd workers, contractors), and how were they compensated (e.g., how much were crowd workers paid)? No manual effort was involved in the data collection process.
⢠Over what timeframe was the data collected? The final version of the dataset was generated in October 2023.
Uses. | 2306.15626#88 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15626 | 89 | ⢠Over what timeframe was the data collected? The final version of the dataset was generated in October 2023.
Uses.
⢠Has the dataset been used for any tasks already? We have used the dataset for training and evaluating machine learning models on the tasks of premise selection and theorem proving.
⢠Is there a repository that links to any or all papers or systems that use the dataset? Yes, https: //leandojo.org.
# Distribution.
⢠Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? Yes, the dataset is publicly available on the Internet.
⢠How will the dataset be distributed (e.g., tarball on website, API, GitHub)? The dataset can be downloaded as a tarball.
⢠Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? The dataset is distributed under CC BY 2.0. The data generation code is distributed under the MIT license. The dataset was extracted from mathlib, which depends on lean. Both of them are distributed under the Apache 2.0 license. We include their licenses in the dataset as attribution (Appendix B.1).
⢠Have any third parties imposed IP-based or other restrictions on the data associated with the instances? No. | 2306.15626#89 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
• Have any third parties imposed IP-based or other restrictions on the data associated with the instances? No.
• Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No.

# Maintenance.

• Who will be supporting/hosting/maintaining the dataset? The authors of this paper.
• How can the owner/curator/manager of the dataset be contacted (e.g., email address)? Please contact Kaiyu Yang at [email protected].
• Is there an erratum? No.
• Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? Please check https://leandojo.org for any update.
• If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? Yes, they can use our data generation code, which is publicly available.
# B.3 Data Hosting, Licensing, and Maintenance
LeanDojo Benchmark is distributed under the CC BY 2.0 license. The data is hosted on zenodo.org (a long-term data repository operated by CERN). The LeanDojo tool for data extraction and interaction with Lean is released at https://github.com/lean-dojo/LeanDojo under the MIT license. Our model checkpoints are hosted on Hugging Face Hub. LeanDojo's documentation is hosted on Read the Docs at https://leandojo.readthedocs.io. LeanDojo's website (https://leandojo.org) is the entry point for everything related to it, including any future updates or maintenance.
# C Experiments
# C.1 Details and Hyperparameters
The premise retriever and tactic generator in ReProver are initialized from the google/byt5-small checkpoint on Hugging Face. It is a T5-like [97] encoder-decoder Transformer that operates directly on UTF-8 bytes without tokenization. We choose ByT5 [44] instead of T5 because Lean code makes extensive use of Unicode math symbols, which may cause problems for T5's pretrained tokenizer. The retriever uses the encoder only, whereas the generator uses both the encoder and the decoder.
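To make this concrete, below is a minimal sketch (not the released code) of instantiating both models from the same google/byt5-small checkpoint with Hugging Face transformers; the mean pooling at the end is only one plausible way to turn encoder outputs into a retrieval embedding.

```python
# Sketch only: both models start from the same ByT5 checkpoint.
from transformers import AutoTokenizer, T5EncoderModel, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")   # byte-level, no subword vocab

retriever_encoder = T5EncoderModel.from_pretrained("google/byt5-small")      # encoder only
generator = T5ForConditionalGeneration.from_pretrained("google/byt5-small")  # encoder-decoder

state = "n : ℕ ⊢ gcd n n = n"
inputs = tokenizer(state, return_tensors="pt")
# One plausible way to embed a proof state for retrieval: mean-pool the encoder outputs.
state_embedding = retriever_encoder(**inputs).last_hidden_state.mean(dim=1)
```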
In training, we use one NVIDIA A100 GPU with 80GB of memory. The code is implemented in PyTorch and PyTorch Lightning, with bfloat16 mixed precision and DeepSpeed ZeRO Stage 2 [98]. Both the retriever and the generator are optimized using AdamW [99] with a batch size of 8. In the first 2,000 steps, the learning rate warms up linearly from 0 to the maximum value. Then it decays to 0 following a cosine schedule. The maximum learning rate is 10^-4 for the retriever and 5 × 10^-4 for the generator. When training the retriever, we sample 3 negative premises for each example, including 1 in-file negative premise. When training the generator, we apply dropout to retrieved premises with a dropout rate of 0.5. Then, we truncate the generator's input to 2,300 tokens.
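The learning-rate schedule can be sketched as follows; the model here is a stand-in and the total step count is a placeholder, since the released code wires the equivalent pieces through PyTorch Lightning and DeepSpeed.

```python
# Sketch of the schedule above: AdamW, 2,000 linear warmup steps, cosine decay to 0.
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(8, 1)                                       # stand-in for ByT5
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)    # 5e-4 for the generator
num_training_steps = 10_000                                   # placeholder value
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=2_000, num_training_steps=num_training_steps
)

for step in range(num_training_steps):
    loss = model(torch.randn(8, 8)).pow(2).mean()             # dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```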
During evaluation, the tactic generator is combined with best-first search to find proofs. At each search step, it produces 64 tactic candidates using beam search. Each tactic is associated with a log-likelihood score. In best-first search, we prioritize the states by the sum of log-likelihoods of tactics leading to that state.
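The search loop can be summarized as in the sketch below, which implements standard best-first search under this scoring rule; `generate_tactics` (beam search over the tactic generator) and `run_tactic` (one interaction step with Lean) are assumed helper functions, not part of the released interface.

```python
# Sketch of best-first proof search prioritized by cumulative tactic log-likelihood.
import heapq, itertools

def best_first_search(initial_state, generate_tactics, run_tactic, max_expansions=1000):
    counter = itertools.count()              # tie-breaker so states are never compared
    frontier = [(0.0, next(counter), initial_state)]   # min-heap over negated scores
    visited = set()
    for _ in range(max_expansions):
        if not frontier:
            return None
        neg_score, _, state = heapq.heappop(frontier)
        if state in visited:
            continue
        visited.add(state)
        for tactic, log_prob in generate_tactics(state, num_candidates=64):
            next_state, proved = run_tactic(state, tactic)
            if proved:
                return next_state
            if next_state is not None and next_state not in visited:
                heapq.heappush(frontier, (neg_score - log_prob, next(counter), next_state))
    return None
```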
# C.2 The GPT-4 Baseline
Now we describe the GPT-4 [27] baseline in Sec. 6. Similar to ReProver, it is a tactic generator combined with best-first search. However, the tactic generator is based on GPT-4's capability to follow instructions zero-shot. Specifically, given a proof state, we use the following prompt to instruct GPT-4 to produce a list of tactics, each paired with a confidence score:
Prompt Template: You are an expert in Lean3 theorem proofs. We are trying to solve the Lean3 theorem 'THEOREM_FULL_NAME' from the mathlib file 'FILE_PATH'. The current tactic state is: 'TACTIC_STATE'. Suggest exactly 35 unique tactics to progress in solving 'THEOREM_FULL_NAME', along with their confidence levels as a float between 0 and 1. Rank them in order of effectiveness. Present the tactics and their confidence levels as comma-separated tuples in this format: #(tactic_{1}, confidence_{1})#, #(tactic_{2}, confidence_{2})#, ..., #(tactic_{35}, confidence_{35})#.
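A small sketch of how such a template can be instantiated for a concrete theorem follows; the placeholder names and this exact string layout are illustrative, not taken from the released code.

```python
# Illustrative only: filling in the prompt template for one theorem.
PROMPT_TEMPLATE = (
    "You are an expert in Lean3 theorem proofs. We are trying to solve the Lean3 theorem "
    "'{theorem_full_name}' from the mathlib file '{file_path}'. The current tactic state is: "
    "'{tactic_state}'. Suggest exactly 35 unique tactics to progress in solving "
    "'{theorem_full_name}', along with their confidence levels as a float between 0 and 1. "
    "Rank them in order of effectiveness. Present the tactics and their confidence levels as "
    "comma-separated tuples in this format: #(tactic_1, confidence_1)#, "
    "#(tactic_2, confidence_2)#, ..., #(tactic_35, confidence_35)#."
)

prompt = PROMPT_TEMPLATE.format(
    theorem_full_name="polynomial.chebyshev.aeval_U",
    file_path="mathlib/src/analysis/special_functions/trigonometric/chebyshev.lean",
    tactic_state="… ⊢ ⇑(aeval x) (chebyshev.U R n) = eval x (chebyshev.U A n)",
)
```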
We adapted the prompt to a particular theorem and state by substituting the variables with the appropriate values. Given the inherent variability in GPT-4's outputs, we requested 35 tactics and filtered out invalid ones. We used a token length limit of 1,024 and kept all other API parameters at their default values. Below are a few example prompts and GPT-4's responses:
Example Prompt 1: You are an expert in Lean3 theorem proofs. We are trying to solve the Lean3 theorem 'polynomial.chebyshev.aeval_U' from the mathlib file 'mathlib/src/analysis/special_functions/trigonometric/chebyshev.lean'. The current tactic state is: '…, _inst_1 : comm_ring R, _inst_2 : …, x : A, n : ℕ ⊢ ⇑(aeval x) (chebyshev.U R n) = eval x (chebyshev.U A n)'. Suggest exactly 35 unique tactics to progress in solving 'polynomial.chebyshev.aeval_U', along with their confidence levels as a float between 0 and 1. Rank them in order of effectiveness. Present the tactics and their confidence levels as comma-separated tuples in this format: #(tactic_{1}, confidence_{1})#, ..., #(tactic_{35}, confidence_{35})#.
Example Response 1: #(rw chebyshev.U, 0.9)#, #(simp only [chebyshev.U], 0.9)#, #(simp_rw chebyshev.U, 0.8)#, #(apply aeval_add, 0.75)#, #(apply aeval_mul, 0.75)#, #(apply aeval_sub, 0.75)#, #(apply aeval_X, 0.7)#, #(apply aeval_C, 0.7)#, #(apply aeval_pow, 0.7)#, #(rw ←aeval_pow, 0.65)#, #(rw ←aeval_mul, 0.65)#, #(rw ←aeval_add, 0.65)#, #(rw ←aeval_sub, 0.65)#, #(apply algebra_map_eq, 0.6)#, #(intro aeval_algebra_map_eq, 0.6)#, #(apply eval_map, 0.55)#, #(rw eval_map, 0.55)#, #(apply aeval_eq_eval, 0.55)#,
#(rw_for_target aeval_eq_eval, 0.5)#, #(apply aeval_eq_eval', 0.5)#, #(rw_for_target aeval_eq_eval', 0.5)#, #(apply aeval_commutes, 0.45)#, #(rw aeval_commutes, 0.45)#, #(apply aeval_eq_of_is_scalar_tower, 0.4)#, #(rw is_scalar_tower.aeval_apply, 0.4)#, #(try_for 100 exact_mod_cast chebyshev.U_eq_refl, 0.35)#, #(simp * at *, 0.3)#, #(simp int.coe_nat_ring_hom, 0.25)#, #(apply chebyshev.U_is_polynomial, 0.2)#, #(rw_algebra_map_eq_of_ring_hom, 0.15)#, #(unfold chebyshev.U, …)#.
Example Prompt 2: You are an expert in Lean3 theorem proofs. We are trying to solve the Lean3 theorem 'euler_sine.integral_cos_pow_pos' from the mathlib file 'mathlib/src/analysis/special_functions/trigonometric/euler_sine_prod.lean'. The current tactic state is: 'n : ℕ ⊢ 0 < ∫ (x : ℝ) in 0..π / 2, cos x ^ n'. Suggest exactly 35 unique tactics to progress in solving 'euler_sine.integral_cos_pow_pos', along with their confidence levels as a float between 0 and 1. Rank them in order of effectiveness. Present the tactics and their confidence levels as comma-separated tuples in this format: #(tactic_1, confidence_1)#, #(tactic_2, confidence_2)#, ..., #(tactic_35, confidence_35)#.
Example Response 2: #(exact integral_pos_of_antimono_increasing_on cos.pow_antimono_increasing_on_cos_even_nnreal, 0.99)#, #(apply integral_pos_of_antimono_increasing_on, 0.95)#, #(interval_integrable_integral_pos, 0.90)#, #(exact integral_pos_pre, 0.85)#, #(interval_integ_timedout, 0.80)#, #(interval_integrable_pos, 0.75)#, #(exact integral_pos_of_nonneg_dint_val, 0.70)#, #(exact interval_integrable_pos_left, 0.65)#, #(exact interval_integrable_pos_right, 0.60)#, #(exact integral_pos_of_nonneg_f, 0.55)#, #(rw integral_on_interval, 0.50)#, #(rw interval_integral_pos_of_supporting, 0.45)#,
#(rw integral_pos_of_interval_integrable_pos, 0.40)#, #(apply integral_pos_of_antimono_increasing_on_cos_even_nnreal, 0.35)#, #(norm_cast apply_pos, 0.30)#, #(apply_pos_of_nonneg_pos, 0.25)#, #(rw interval_integral_pos_of_antimono, 0.20)#, #(use cos_slow_mono_on_pos, 0.15)#, #(apply cos_slow_mono_on_pos, 0.10)#, #(exact integral_pos_pre_cos_pow_even, 0.09)#, #(norm_cast apply_nonneg_pos, 0.08)#, #(rw integral_pos_of_monotone_integrable, 0.07)#, #(use cos_pos_constant_sequence, 0.06)#, #(apply cos_pos_constant_sequence, 0.05)#, #(use integral_pos_of_ascending_sequence, 0.04)#, #(apply …
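Responses like the ones above are turned into (tactic, confidence) pairs before search. One possible way to extract and filter the "#(tactic, confidence)#" tuples is sketched below; the regular expression and filtering rules are illustrative, not taken from the released code.

```python
# Illustrative parsing of GPT-4 responses into ranked (tactic, confidence) pairs.
import re

TUPLE_RE = re.compile(r"#\((.+?),\s*([01](?:\.\d+)?)\)#")

def parse_tactics(response: str):
    best = {}
    for tactic, conf in TUPLE_RE.findall(response):
        tactic, conf = tactic.strip(), float(conf)
        if tactic and 0.0 <= conf <= 1.0:                    # drop malformed entries
            best[tactic] = max(conf, best.get(tactic, 0.0))  # deduplicate tactics
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)

example = "#(rw chebyshev.U, 0.9)#, #(simp only [chebyshev.U], 0.9)#"
print(parse_tactics(example))
```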
Data contamination is possible. Our GPT-4 experiments were performed in 2023, but many theorems and proofs in the dataset have been publicly available on GitHub before GPT-4's data cutoff date (September 2021).
# C.3 Justifications for Not Comparing with Existing LLM-Based Provers
In Table 2, we do not empirically compare ReProver with any existing LLM-based prover. Unfortunately, such a comparison is infeasible. Provers targeting different proof assistants are generally not comparable, so we focus the discussion on the three existing provers in Lean [16, 17, 19]. Most importantly, they are impossible to reproduce with reasonable effort, due to private code and pretraining data. Therefore, the only potential comparison is to evaluate ReProver under their experimental settings and compare with the numbers reported in their papers. However, that is also impractical for numerous reasons:
⢠The data is different. All existing methods used an outdated version of mathlib more than two years ago. We cannot use LeanDojo to extract data from this version. As mentioned in Sec. 4, LeanDojo only supports repos released after March 24, 2022. Also, we cannot use their dataset directly, since it does not contain premise information required by ReProver. | 2306.15626#102 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
• Lample et al. [17] trained on a synthetic dataset named Equations, which is not publicly available.
• All existing methods co-train the tactic generator on auxiliary tasks from the PACT dataset [16]. Co-training increases the data/compute requirements by an order of magnitude, which cannot be afforded by us (or probably most academic labs). All existing methods were developed by researchers in the industry.
• Polu et al. [19] and Lample et al. [17] further finetuned their models on new proofs collected through online interaction with Lean, whereas our method is only trained on human-written proofs.
• The tool for interacting with Lean may impact the performance. Han et al. [16] and Polu et al. [19] used lean-gym, which has severe limitations (Appendix A.2). Lample et al. [17] developed their own private tool, which is not publicly available.
Most of these difficulties are due to the private nature of existing methods. By releasing our code and models, we take a major step in establishing accessible baselines for future work to build upon.
# C.4 Evaluation on MiniF2F and ProofNet
We evaluate our ReProver model on MiniF2F [28] and ProofNet [29] (Sec. 6) to test its capability in proving theorems outside its training data distribution. We use the same hyperparameters and evaluation setup as the previous experiments (Appendix C.1).
MiniF2F. We use the commit 5271ddec788677c815cf818a06f368ef6498a106 of Meta's version of MiniF2F [17]. ReProver achieves a Pass@1 of 26.5% on the test set, which is competitive with state-of-the-art methods without reinforcement learning (25.9% in Polu et al. [19]). Moreover, ReProver can prove 33 theorems that currently do not have Lean proofs (examples in Fig. B). For the complete list of 33 new proofs, please see our pull request to MiniF2F.
There are caveats about quantitatively comparing ReProver with existing methods on MiniF2F. Many difficulties in Appendix C.3 still apply, e.g., different tools for interacting with Lean may impact the performance. Also, MiniF2F is a test-only dataset without training theorems, and existing methods focus on reinforcement learning (RL) to learn from proofs collected via online interaction with the proof assistant [17, 19]. In contrast, ReProver is trained via supervised learning on a static dataset, so we only compare with the non-RL baseline in existing methods (Polu et al. [19] achieves a Pass@1 of 25.9% without RL and 29.6% with RL). Furthermore, we do not compare with Lample et al. [17] due to differences in the evaluation metric. They use Pass@64, which requires running the prover on each theorem 64 times. We use Pass@1, and it already takes one day for a single evaluation on MiniF2F's test set. Therefore, evaluating Pass@64 would be too computationally expensive for the resources we have access to. Finally, MiniF2F is available in multiple proof assistants [18, 69, 70]. Results across different proof assistants are not comparable, so we only compare with existing work in Lean.
ProofNet. We use the commit e8645aa830ce17c33a8b8482a8195f0f97d6a74a of ProofNet. ReProver can prove 48 out of 349 theorems, achieving a Pass@1 of 13.8%, which is the first reported theorem proving result on ProofNet.
Figure B: Examples of new proofs discovered by ReProver on MiniF2F [28].
Moreover, 39 out of the 48 proved theorems do not have existing Lean proofs (examples in Fig. C), and 3 of them can only be proved with the help of premise retrieval (Fig. D). We have contributed the 39 new proofs to ProofNet, which helped them reveal and fix problems in the formalization of 7 theorems (details in our pull request).
# D LeanDojo for Lean 4
Lean 3 and Lean 4 are two incompatible major versions of Lean, and both are widely used. Lean 3 was the latest stable version until recently (June 2023). Also, Lean 3 and Lean 4 have separate versions of mathlib. The Lean/mathlib community has recently finished porting theorems and proofs from mathlib 3 to mathlib 4 [100]. Therefore, Lean 3 will gradually become deprecated, and future Lean projects will use Lean 4, so it is important for LeanDojo to support Lean 4.
Since Lean 4 is relatively new, we are not aware of any existing work on learning-based theorem proving in Lean 4. Furthermore, no existing tool is available for extracting data from Lean 4. LeanDojo fills this gap and fully supports Lean 4. Given any repo in Lean 4, LeanDojo can extract …
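As a rough illustration of the intended workflow, the sketch below traces a Lean repo and interacts with a theorem through LeanDojo's Python interface. The function names follow the toolkit's documentation as we understand it, and the repo URL, commit, file path, and theorem name are placeholders, not data from this paper.

```python
# Sketch only: names follow LeanDojo's documented interface; values are placeholders.
from lean_dojo import LeanGitRepo, Theorem, Dojo, trace

repo = LeanGitRepo(
    "https://github.com/leanprover-community/mathlib4",  # any supported Lean repo
    "COMMIT_SHA",                                         # placeholder commit
)

traced_repo = trace(repo)  # extract proofs, tactic states, and premise annotations

thm = Theorem(repo, "Mathlib/SomeFile.lean", "my_theorem")  # hypothetical theorem
with Dojo(thm) as (dojo, initial_state):
    result = dojo.run_tac(initial_state, "simp")  # one step of interaction with Lean
    print(result)
```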
Figure C: Examples of new proofs discovered by ReProver on ProofNet [29].
Given any repo in Lean 4, LeanDojo can extract data, including file dependencies, ASTs, proof states, tactics, and premise information. In addition, it enables the model to interact with Lean 4 through tactics, in the same way as Lean 3 (Sec. 4).
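As a concrete illustration, the snippet below sketches how a Lean 4 theorem can be opened and a tactic run through LeanDojo's Python interface, following the usage pattern in LeanDojo's documentation. The repository revision, file name, and theorem name are placeholders taken from LeanDojo's example project, and exact class and method signatures should be checked against the released toolkit.

```python
from lean_dojo import Dojo, LeanGitRepo, ProofFinished, Theorem

# Pin a Lean 4 repo by URL and revision (placeholder revision for illustration).
repo = LeanGitRepo("https://github.com/yangky11/lean4-example", "main")

# A theorem is identified by its repo, source file, and fully qualified name.
theorem = Theorem(repo, "Lean4Example.lean", "hello_world")

with Dojo(theorem) as (dojo, initial_state):
    # Run one tactic on the initial proof state. The result is either a new
    # tactic state, an error, or a signal that the proof is finished.
    result = dojo.run_tac(initial_state, "rw [add_assoc, add_comm b, ← add_assoc]")
    if isinstance(result, ProofFinished):
        print("Proof complete.")
    else:
        print(result)
```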
Similar to constructing the Lean 3 version of LeanDojo Benchmark, we extract data from the commit 3ce43c18f614b76e161f911b75a3e1ef641620ff of mathlib4 released on October 21, 2023. The resulting dataset is named LeanDojo Benchmark 4. It is released under the CC BY 2.0 license and hosted on zenodo.org with DOI "10.5281/zenodo.8040109". LeanDojo Benchmark 4 consists of 102,514 theorems/proofs, 213,067 tactics, and 152,695 premises.
Figure D: Three new proofs discovered by ReProver on ProofNet [29] that cannot be found by a baseline without premise retrieval. All of the three proofs rely on premises: "finite_field.prod_univ_units_id_eq_neg_one", "norm_add_sq_real", "norm_sub_pow_two_real", and "exists_countable_basis".
We use 2,000 theorems for validation, 2,000 theorems for testing, and the rest for training. LeanDojo Benchmark 4 also has two different data splits: random and novel_premises. We use LeanDojo Benchmark 4 to train and evaluate our method. The model architectures and experimental details are the same as those in Sec. 6. Results on premise selection are in Table B, and results on theorem proving are in Table C.
Table B: Premise selection testing performance on LeanDojo Benchmark 4 (Lean 3 results in Table 1). We train and evaluate two models independently using different data splits (random and novel_premises). R@k is the recall for the top k retrieved premises, and MRR is the mean reciprocal rank metric.
random: R@1 12.8, R@10 34.7, MRR 0.29
novel_premises: R@1 9.8, R@10 32.1
Table C: Theorem proving Pass@1 (%) on the testing data of LeanDojo Benchmark 4 (Lean 3 results in Table 2).
random split: ReProver 48.6, W/o retrieval 44.5
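For completeness, the retrieval metrics reported in Table B (and in Table 1 for Lean 3) can be computed as in the generic sketch below. This is an illustration of recall@k and MRR over ranked premise lists, not code taken from the LeanDojo release, and the premise names are made up for the toy example.

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of ground-truth premises appearing among the top-k retrieved ones."""
    if not relevant:
        return 1.0  # convention when a tactic uses no premises
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def mean_reciprocal_rank(all_retrieved, all_relevant):
    """Average of 1 / rank of the first relevant premise (0 if none is retrieved)."""
    scores = []
    for retrieved, relevant in zip(all_retrieved, all_relevant):
        rr = 0.0
        for rank, premise in enumerate(retrieved, start=1):
            if premise in relevant:
                rr = 1.0 / rank
                break
        scores.append(rr)
    return sum(scores) / len(scores)

# Toy usage: two tactics, each with ranked retrievals and ground-truth premises.
retrieved = [["add_comm", "mul_comm", "add_assoc"], ["pow_succ", "one_mul"]]
relevant = [["add_assoc"], ["mul_one"]]
print(recall_at_k(retrieved[0], relevant[0], k=3))  # 1.0
print(mean_reciprocal_rank(retrieved, relevant))    # (1/3 + 0) / 2 = 0.1666...
```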
# E ChatGPT Plugin for Theorem Proving
LeanDojo provides a general tool for interacting with Lean programmatically. As a demo of how it might bridge LLMs and theorem proving, we build a ChatGPT plugin [101] enabling ChatGPT to prove theorems by interacting with Lean through LeanDojo. Plugin developers can wrap any software
as a web service and describe its APIs to ChatGPT. Then, ChatGPT can automatically call the APIs and incorporate the results into the response to the user. Below is a summary of our API description corresponding to the interface in Sec. 4.
Title: Lean
Description: Plugin for proving user-specified theorems automatically by interacting with Lean. The user enters information on how to find a theorem (e.g., theorem name and file path). Based on the user's input, ChatGPT first initializes the proof search with the given theorem as the initial state. Then, ChatGPT explains the choice of the next tactic step using LaTeX and runs that tactic step on the state. If the current state is not promising, ChatGPT can backtrack to previous states by decrementing the "state_id" parameter. If applying tactics to the current state specified by the "state_id" parameter returns an error message, ChatGPT should explain the error, and if repetitive errors occur, ChatGPT should decrement the "state_id" parameter and try a different approach on a previous state. The theorem is successfully proved if there are no unsolved goals in the current state.
Endpoints:
initialize_proof_search: Given the theorem name and file path of a Lean theorem, initialize the proof search. The response includes the initial state and its state ID.
Args:
theorem_name (string): The name of the target theorem to prove.
theorem_file_path (string): The file path of the target theorem.
run_tactic: Run a tactic on a state (specified by its state ID), assuming the proof search has been initialized and some state is available. The response is either the next state and its state ID or an error message, in which case ChatGPT should explain the error and consider decrementing the "state_id".
Args:
state_id (string): The ID of the state on which to run the tactic.
tactic (string): The tactic to run on a state (specified by its state ID), assuming the proof search has been initialized.
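A plugin server exposing these two endpoints could look roughly like the sketch below. This is an illustrative FastAPI wrapper with an in-memory state table standing in for the LeanDojo-backed logic; it is not the actual plugin implementation, and a real version would run tactics through LeanDojo and return Lean's genuine proof states and error messages.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Lean")
states = {}  # state_id -> pretty-printed proof state


class InitRequest(BaseModel):
    theorem_name: str
    theorem_file_path: str


class TacticRequest(BaseModel):
    state_id: str
    tactic: str


def new_state(text):
    state_id = str(len(states))
    states[state_id] = text
    return state_id


@app.post("/initialize_proof_search")
def initialize_proof_search(req: InitRequest):
    # A real plugin would open a LeanDojo Dojo for the theorem here and record
    # the initial proof state; we store a placeholder string instead.
    state_id = new_state(f"goals of {req.theorem_name} ({req.theorem_file_path})")
    return {"state_id": state_id, "state": states[state_id]}


@app.post("/run_tactic")
def run_tactic(req: TacticRequest):
    # A real plugin would run the tactic via LeanDojo and return either the
    # next state or Lean's error message for ChatGPT to interpret.
    if req.state_id not in states:
        return {"error": f"unknown state_id {req.state_id}"}
    state_id = new_state(f"{states[req.state_id]} after `{req.tactic}`")
    return {"state_id": state_id, "state": states[state_id]}
```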
After exposing the APIs to ChatGPT, we can ask it to prove theorems by specifying the theorem's name and path in any public Lean repo on GitHub. Fig. E–L show an example with the GPT-3.5 version of ChatGPT, and Fig. M–O show the same example with the GPT-4 version. The captions provide detailed step-by-step explanations.
We highlight a few key strengths of ChatGPT observed in multiple examples we evaluated. First, unlike specialized methods for theorem proving (this paper and its prior works), ChatGPT interleaved informal mathematics with formal proof steps. This resembles how humans interact with proof assistants and opens up new avenues for integrating natural language and formal theorem proving. Second, ChatGPT demonstrated impressive capability in explaining error messages from Lean that are quite opaque even to humans. It was able to incorporate the error message to refine its proof strategy. Last, ChatGPT's behavior is more steerable than specialized provers. In Fig. E, we simply gave it the theorem to prove, but we could also provide more detailed instructions. For example, we
could say: "Please describe a high-level proof plan before trying any tactic." This kind of steerability enables future research on prompt engineering for theorem proving, and we have already seen initial benefits in an ongoing work named Sagredo (https://www.youtube.com/watch?v=CEwRMT0GpKo).
However, these strengths by no means imply ChatGPT can already solve theorem proving. In fact, it failed to find a proof for most theorems we tried. Hallucination was common. In Fig. L, ChatGPT falsely asserted the theorem was proved, while we could see from LeanDojo's response that it was not. This demonstrates the value of theorem proving as a rigorous benchmark for addressing LLMs' hallucination problem. Another key limitation of ChatGPT was its inability to search systematically in a large space. We frequently found it stuck on an unpromising path when the correct solution could be found by backtracking and exploring alternative paths. This behavior is consistent with the general observation that LLMs are weak at search and planning. Addressing this weakness is an active area of research [102].
We emphasize a few caveats about our study of theorem proving with ChatGPT. First, data contamination is likely. Many theorems we evaluated have been publicly available on GitHub before ChatGPT's data cutoff date. Therefore, ChatGPT may have seen them in training. Second, our study is exploratory. A more detailed and quantitative study is needed to characterize ChatGPT's capability in theorem proving. Such a study with ChatGPT plugins is challenging, as plugins currently only support interaction through the browser. Also, OpenAI has taken measures to block automated access by bots. Using humans may be an option, but that is beyond the scope of this paper.
[Screenshot: the user asks ChatGPT to prove the theorem "hello_world", defined in the file "src/example.lean" of https://github.com/yangky11/lean-example; ChatGPT calls LeanDojo's initialization endpoint and restates the goal a + b + c = a + c + b.]
Figure E: (ChatGPT-3.5, 1/8) After receiving the theorem to prove, ChatGPT first called "initialize", which returned the initial state. Then it tried to interpret the theorem in natural language. Note that it made a mistake here. The theorem was about natural numbers (N), not complex numbers (C).
Figure F: (ChatGPT-3.5, 2/8) ChatGPT tried to rewrite the goal using the lemma "b + c = c + b". This was a reasonable but incorrect move. After receiving the error message from Lean, ChatGPT explained the error in natural language. Here the explanation is quite accurate, which is impressive given that the error message looks opaque to anyone not familiar with Lean.
# F Limitations and Future Work
Our work is one step toward unlocking the potential of LLMs for generating verifiable formal proofs, and we see abundant space for future exploration. A learning-based prover is a complex system consisting of multiple components: data extraction, interaction with proof assistants, model training, and proof search. While navigating the design space spanned by various components, we err on the side of simplicity and efficiency, instead of pushing performance to the limit. This helps us deliver a reliable, open, and accessible system, laying the foundation for further research. There are many directions in which the system can be improved, and we discuss a few of them here.
Stronger LLMs. Our backbone model, ByT5 [44], was published in 2021 and has 299M parameters, which is not very large by today's standard. Recently, there have been a plethora of open-source LLMs demonstrating strong capabilities in writing code, e.g., CodeGen [103], StarCoder [94], and CodeGeeX [104]. We are excited to see how they might impact theorem proving and, more generally, how far we can go by pushing the limit of the model/data scale.
ByT5's tokenizer-free nature helps us sidestep the difficulty with pretrained tokenizers that may not work well for Lean's Unicode-rich code. However, treating texts as raw bytes makes the sequence length much longer than necessary. Long sequences harm efficiency, as Transformers scale quadratically w.r.t. the sequence length, which may become a bigger problem when we further scale up the model. To solve the issue, it might be helpful to pretrain a customized tokenizer or adopt more advanced tokenizer-free models such as MegaByte [105].
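The length overhead is easy to see by tokenizing a short, Unicode-rich proof state with ByT5's byte-level tokenizer and with an ordinary subword tokenizer. The snippet assumes the Hugging Face transformers library and the public google/byt5-small and t5-small checkpoints; exact token counts will vary.

```python
from transformers import AutoTokenizer

state = "⊢ ∀ (n : ℕ), Nat.gcd n n = n"  # a short, Unicode-rich Lean goal

byt5 = AutoTokenizer.from_pretrained("google/byt5-small")  # byte-level
t5 = AutoTokenizer.from_pretrained("t5-small")             # SentencePiece subwords

# ByT5 spends roughly one token per UTF-8 byte, so symbols like ⊢ and ℕ cost
# several tokens each; the subword tokenizer yields a shorter sequence but may
# drop or map such characters to unknown tokens, losing information.
print(len(byt5(state).input_ids))
print(len(t5(state).input_ids))
```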
Figure G: (ChatGPT-3.5, 3/8) Then it tried to prove the theorem using `ring`. This was another good move. The `ring` tactic can prove this theorem, but Lean could not find it since it was not imported into the current file. Again, ChatGPT was able to interpret the error message correctly and concluded that `ring` was not available. Next, it tried another tactic but failed again.
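For concreteness, here is a minimal Lean 3 sketch of the point made in the caption: with mathlib's `ring` tactic imported, the goal state shown in the figure (`a b c : ℕ ⊢ a + b + c = a + c + b`) closes in one step. The `example` below is our reconstruction of that goal, not code taken from the paper or the figure.

```lean
-- Sketch only: assumes mathlib is available so that `tactic.ring` can be imported.
import tactic.ring

-- Reconstruction of the goal state from the figure: a b c : ℕ ⊢ a + b + c = a + c + b
example (a b c : ℕ) : a + b + c = a + c + b :=
by ring
```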
and E) cannot solve theorem proving out of the box and are currently far behind finetuned models. However, the way we prompt these models is quite naive, and better strategies, such as Tree of Thoughts [102], may lead to further improvements. We consider theorem proving as a promising task for studying LLMs' capabilities in planning and search.
Improving Premise Retrieval. ReProver uses DPR [26] to retrieve premises and fuses them with the current proof state by concatenation. This architecture is simple and effective but does not scale to a large number of retrieved premises. With a length limit of 2,300 tokens, we can fit only 10â15 premises into the input of the tactic generator. To mitigate the problem, we may need an architecture that fuses the retrieved premises in the hidden space, e.g., Fusion-in-Decoder [106].
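As a rough sketch of the concatenation-based fusion described above (function and variable names here are hypothetical, not ReProver's actual API), retrieved premises are joined with the current proof state and the result is truncated to the tactic generator's input budget, which is why only 10-15 premises fit within the 2,300-token limit:

```python
# Illustrative sketch of concatenation-based premise fusion (hypothetical names,
# not ReProver's actual code).
MAX_INPUT_TOKENS = 2300  # input length limit mentioned in the text


def build_tactic_generator_input(state: str, premises: list[str], tokenizer) -> str:
    """Prepend retrieved premises to the proof state, then truncate to the token budget."""
    text = "\n".join(premises) + "\n\n" + state
    token_ids = tokenizer.encode(text)[:MAX_INPUT_TOKENS]
    return tokenizer.decode(token_ids)
```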
33 | 2306.15626#127 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
To resolve this, let's use the `apply_fun` tactic, which allows us to apply a function to both sides of an equality. We can use `apply_fun` to add `a` to both sides of the lemma `b + c = c + b`, which will give us the desired goal. Let's try this approach. Used LeanDojo. RESPONSE FROM LEANDOJO: an error whose message includes "parse_itactic failed on 'apply_fun (λ x, a + x) at * using add_comm b c'", with state "a b c : ℕ ⊢ a + b + c = a + c + b" and 'proof finished': False. It seems that the `apply_fun` tactic is not available in the current context, as indicated by the error message "parse_itactic failed on 'apply_fun (λ x, a + x) at * using add_comm b c'." Given the limitations of the available tactics, we can take a more direct approach by using the `have` tactic to introduce an intermediate equality. Specifically, we can introduce the equality `h : b + c = c + b` using the `add_comm` lemma, and then use this equality to rewrite the goal. Let's try this approach. Used | 2306.15626#128 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
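The `have`-plus-rewrite route sketched in the last excerpt does go through without `ring`. A minimal Lean 3 reconstruction is below; the exact rewrite chain is our own choice, and it assumes the generic `add_comm`/`add_assoc` lemmas are in scope (e.g. via mathlib; with core Lean 3 alone, `nat.add_comm`/`nat.add_assoc` can be substituted).

```lean
-- Sketch of the `have` + rewrite approach from the excerpt (Lean 3).
example (a b c : ℕ) : a + b + c = a + c + b :=
begin
  have h : b + c = c + b := add_comm b c,  -- the intermediate equality proposed in the excerpt
  rw [add_assoc, h, ← add_assoc]           -- reassociate, swap b and c, reassociate back
end
```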