doi (string, 10 chars) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k chars) | id (string, 12-14 chars) | title (string, 8-162 chars) | summary (string, 228-1.92k chars) | source (string, 31 chars) | authors (string, 7-6.97k chars) | categories (string, 5-107 chars) | comment (string, 4-398 chars, nullable) | journal_ref (string, 8-194 chars, nullable) | primary_category (string, 5-17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.15195 | 24 | Table 1: Comparing different forms of CoTs. We train three toy models of Shikra-7B (without using additional datasets) on the CLEVR dataset. Q, A, C, and CPoint denote the Question, final Answer, Chain of thoughts, and Chain of thoughts with Pointing.

| Q→A | Q→CA | Q→CPointA |
|---|---|---|
| 88.07 | 80.68 | 93.97 |
have GPT-4 rewrite it in rich language, expanding it into hundreds of variations to convey the same meaning. During training, we can randomly choose from them. We provide details on some generated task templates in the Appendix B.
# 5.4 Tuning details | 2306.15195#24 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 24 | Our analysis of the results in Table 2 revealed several key findings. Firstly, we observed that generative retrieval
| Methods | NQ @5 | NQ @20 | NQ @100 |
|---|---|---|---|
| w/o learning-to-rank | 65.8 | 78.3 | 86.7 |
| w/ rank loss 1 | 56.1 | 69.4 | 78.7 |
| w/o generation loss | 63.9 | 76.1 | 84.4 |
| w/o rank loss | 65.8 | 78.6 | 86.5 |
| w/o rank loss 1 | 68.2 | 80.8 | 87.0 |
| w/o rank loss 2 | 67.9 | 79.8 | 86.7 |
| LTRGR | 68.8 | 80.3 | 87.1 |
Table 3: Ablation study of LTRGR with different losses in the learning-to-rank training phase. "w/o learning-to-rank" refers to the basic generative retrieval model, MINDER, without the learning-to-rank training. | 2306.15222#24 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 24 | In general, we observed a consistent trend of our models achieving better perplexity with longer context windows. This indicates our models can effectively make use of the longer context windows to better predict next tokens in language modeling tasks. Moreover, we found this trend extends to 32768 window size without diminishing on the PG19 dataset for LLaMA 7B and 13B models. This indicates that our method may enable extension to even longer context windows.
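The interpolation mechanism behind these results (described in the abstract later in this row) boils down to rescaling input position indices linearly into the pre-trained range instead of extrapolating past it. The sketch below is only an illustration of that idea for RoPE-style embeddings, not the authors' code; the function names and scaling policy are assumptions.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    # Standard RoPE rotation angles; fractional positions are allowed.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions, inv_freq)              # shape: (seq_len, dim // 2)

def interpolated_positions(seq_len: int, pretrained_ctx: int = 2048) -> torch.Tensor:
    # Position Interpolation: down-scale indices so the longest position maps
    # back into [0, pretrained_ctx) rather than extrapolating beyond it.
    positions = torch.arange(seq_len, dtype=torch.float32)
    if seq_len <= pretrained_ctx:
        return positions
    return positions * (pretrained_ctx / seq_len)         # e.g. 2048 / 8192 = 0.25

# Example: a model pre-trained with a 2048 context evaluated on 8192 tokens.
angles = rope_angles(interpolated_positions(8192), dim=128)
print(angles.shape)  # torch.Size([8192, 64])
```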
In contrast, we observed that models extended via the direct fine-tuning method have shown regression (up to +0.48) or only minor improvement (up to -0.12) in perplexity at longer context windows. This indicates that models extended this way have limited capability of making use of context windows longer than their pre-trained settings.
We saw a minor degradation of the perplexity on the original context window of 2048 for our extended models in some cases. For example, on the Proof-pile dataset, we saw a degradation ranging from 0.01 to 0.05 across all models extended with Position Interpolation. A small degradation of performance within the original evaluation context window is expected, since Position Interpolation forces position encodings in the original context window to reside in a much narrower region, which
may negatively affect the language model's performance. We present more benchmark results on the original context window size in Section 3.4. | 2306.15595#24 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 24 | • Premises: For each premise, such as mod_self in Fig. 2, LeanDojo records where it is defined (location in data/nat/lemma.lean) and where it is used (locations across many files). In addition, premises have unique fully qualified names (e.g., nat.mod_self) but are often used by ambiguous short names (mod_self), relying on Lean to perform name resolution. LeanDojo is capable of recording their full names.
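As a rough illustration of the kind of record this implies (the field names and layout below are assumptions, not LeanDojo's actual data format), a premise entry could bundle its fully qualified name, where it is defined, and where it is used:

```python
from dataclasses import dataclass, field

@dataclass
class PremiseRecord:
    # Illustrative only: names and types are assumptions, not LeanDojo's schema.
    full_name: str                                  # e.g. "nat.mod_self"
    def_path: str                                   # file containing the definition
    def_pos: tuple                                  # (line, column) of the definition
    use_sites: list = field(default_factory=list)   # list of (file, line, column) uses

record = PremiseRecord(
    full_name="nat.mod_self",
    def_path="data/nat/lemma.lean",
    def_pos=(0, 0),                                 # placeholder position
    use_sites=[("some/other/file.lean", 0, 0)],     # placeholder usage location
)
print(record.full_name)
```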
Lean has basic support for exporting dependencies, ASTs, states, and tactics. However, it cannot resolve the premises' full names and locate their definitions. Therefore, we modify Lean to record this information (details in Appendix A.1). The modified Lean is used only for data extraction but not for evaluation, so we do not risk accidentally breaking Lean's logical soundness. | 2306.15626#24 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 25 | # 5.4 Tuning details
Shikra is trained in two stages. In the first stage, we train it on the reorganized VL dataset (Section 5.3.1) for 100,000 steps (around 1.5 epochs); in the second stage, we raise the sampling ratio to 50% on LLaVA-Instruct-150K (Liu et al., 2023a) and our generated RD data (Section 5.3.2). In both stages, we freeze the visual encoder and tune all parameters in the LLM. We adopt AdamW (Loshchilov and Hutter, 2019) as the optimizer and a cosine annealing scheduler (Loshchilov and Hutter, 2017) as the learning rate scheduler, with an initial learning rate of 2e-5 and a global batch size of 64. All training runs on 8 NVIDIA A100 GPUs, taking around 100h for stage one and 20h for stage two.
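A minimal sketch of the optimizer and learning-rate schedule described above (AdamW with cosine annealing from 2e-5, visual encoder frozen); the model attribute names and step count below are placeholders, not Shikra's actual training code.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

def build_stage_one_optimizer(model: torch.nn.Module, total_steps: int = 100_000):
    # Freeze the visual encoder; tune all LLM parameters (attribute name is a placeholder).
    for p in model.visual_encoder.parameters():
        p.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = AdamW(trainable, lr=2e-5)
    # Cosine annealing of the learning rate over the stage-one training horizon.
    scheduler = CosineAnnealingLR(optimizer, T_max=total_steps)
    return optimizer, scheduler
```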
# 6 Experiment and Analysis
# 6.1 Grounding CoT or verbal CoT? | 2306.15195#25 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 25 | methods perform worse in the search scenario compared to the QA datasets. Specifically, SEAL, NCI, and DSI underperformed BM25, while MINDER and DSI (T5-large) only slightly outperformed BM25. This is likely due to the fact that the passages in MSMARCO are sourced from the web, and are therefore of lower quality and typically lack important metadata such as titles. Secondly, we found that LTRGR achieved the best performance and outperformed all baselines significantly. LTRGR surpassed the second-best approach, DSI (scaling up), by 5.7 points in terms of MRR@10, despite DSI using the larger T5-Large backbone compared to BART-Large. Finally, we observed that the learning-to-rank paradigm significantly improves existing generative retrieval methods in the search scenario. Specifically, LTRGR improved MINDER by 10.7 points and 6.9 points in terms of Recall@5 and MRR@10, respectively. These results provide strong evidence of the effectiveness of LTRGR, which only requires an additional training step on MINDER. | 2306.15222#25 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 25 | may negatively affect the language model's performance. We present more benchmark results on the original context window size in Section 3.4.
In Table 3 we report the relationship between perplexity and the number of fine-tuning steps for the LLaMA 7B model extending to 8192 and 16384 context window sizes using Position Interpolation, evaluated on the PG19 dataset. We can see that without fine-tuning (at step 0) the model can exhibit a certain language modeling capability, as indicated by < 20 perplexity for extending to the 8192 context window (in contrast, the direct extrapolation method leads to > 10^3 perplexity). With fine-tuning, we observed that the perplexity improves quickly. At 200 steps the models surpassed the original model's perplexity on the 2048 context window size, indicating that the models are gaining the ability to effectively use sequences longer than the pre-training settings for language modeling. At 1000 steps, we can see the models have improved steadily and achieve a significantly better perplexity. | 2306.15595#25 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 25 | LeanDojo Benchmark. We construct a benchmark for premise selection and theorem proving, named LeanDojo Benchmark. The data is extracted from mathlib,4 Lean's centralized math library covering diverse topics such as analysis, algebra, and geometry.5 LeanDojo Benchmark is one of the largest math-focused theorem proving datasets, consisting of 98,734 theorems from 3,384 Lean files. Unlike existing datasets in Lean [16], LeanDojo Benchmark also contains the definitions of 130,262 premises, including not only theorems but also other definitions that can be used as premises (e.g., gcd in Fig. 2). Furthermore, the dataset has 217,776 tactics, 129,243 of them with at least one premise. The average number of premises is 2.13 among tactics with premises. Appendix B contains additional information on data format, datasheet [90], hosting, and licensing. | 2306.15626#25 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 26 | # 6 Experiment and Analysis
# 6.1 Grounding CoT or verbal CoT?
The process of providing reasoning before giving an answer is called Chain of Thoughts (CoT), which provides good explainability for model judgments. However, CoT often suffers from hallucinations (Zhang et al., 2023b) and frequently does not improve the performance of the final answer. Current MLLMs also suffer from serious visual hallucination (Li et al., 2023c). In this section, we investigate whether CoT with position annotations can reduce hallucinations and improve model performance. In this paper, we refer to this type of CoT as Grounding CoT (GCoT). We train our Shikra-7B (without pre-training) on CLEVR (Johnson et al., 2017) in three settings: 1) only use Question and Answer (Q→A); 2) use Question, CoT, and Answer (Q→CA); 3) use GCoT with Center Point annotation and Answer (Q→CPointA). We record their | 2306.15195#26 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 26 | Ablation Study The LTRGR model is trained by leveraging the MINDER model and minimizing the loss function defined in Eq. 5. This loss function consists of two margin-based losses and one generation loss. To shed light on the role of the learning-to-rank objective and the impact of the margin-based losses, we conducted experiments where we removed one or more terms from the loss function (a shape sketch of such a combined objective follows this list). Specifically, we investigated the following scenarios: • "w/o generation loss": We removed the generation loss term (Lgen) from the loss function, which means that we trained the autoregressive model solely based on the rank loss.
• "w/o rank loss": We removed both margin-based losses (Lrank1 and Lrank2) from the loss function, which means that we trained the autoregressive model solely based on the generation loss, following a common generative retrieval approach.
• "w/o rank loss 1" and "w/o rank loss 2": We removed one of the margin-based losses (Lrank1 or Lrank2) from the loss function, respectively.
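Eq. 5 itself is not reproduced in this excerpt, so the snippet below is only a shape sketch of such an objective: a sequence generation loss over identifier tokens combined with two margin-based (hinge) rank losses over passage scores. The margin value, tensor shapes, and function name are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def combined_rank_and_generation_loss(gen_logits, gen_targets,
                                      pos_scores_1, neg_scores_1,
                                      pos_scores_2, neg_scores_2,
                                      margin: float = 1.0):
    # Generation loss over identifier tokens: logits (B, T, V), targets (B, T).
    l_gen = F.cross_entropy(gen_logits.transpose(1, 2), gen_targets)
    # Two margin-based rank losses over passage scores, each built from a
    # different positive/negative sample-mining strategy.
    l_rank1 = torch.clamp(margin - (pos_scores_1 - neg_scores_1), min=0).mean()
    l_rank2 = torch.clamp(margin - (pos_scores_2 - neg_scores_2), min=0).mean()
    return l_gen + l_rank1 + l_rank2
```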
Our experiments aimed to answer the following questions: Does the performance improvement of the LTRGR model
Natural Questions @5 @20 @100 86.3 76.2 61.3 86.4 78.1 SEAL-LTR 63.7 | 2306.15222#26 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 26 |
| Model Size | Context Window | Method | Eval 2048 | Eval 4096 | Eval 8192 | Eval 16384 | Eval 32768 |
|---|---|---|---|---|---|---|---|
| 7B | 2048 | None | 7.20 | > 10^3 | > 10^3 | > 10^3 | > 10^3 |
| 7B | 8192 | FT | 7.21 | 7.34 | 7.69 | - | - |
| 7B | 8192 | PI | 7.13 | 6.96 | 6.95 | - | - |
| 7B | 16384 | PI | 7.11 | 6.93 | 6.82 | 6.83 | - |
| 7B | 32768 | PI | 7.23 | 7.04 | 6.91 | 6.80 | 6.77 |
| 13B | 2048 | None | 6.59 | - | - | - | - |
| 13B | 8192 | FT | 6.56 | 6.57 | 6.69 | - | - |
| 13B | 8192 | PI | 6.55 | 6.42 | 6.42 | - | - |
| 13B | 16384 | PI | 6.56 | 6.42 | 6.31 | 6.32 | - |
| 13B | 32768 | PI | 6.54 | 6.40 | 6.28 | 6.18 | 6.09 |
| 33B | 2048 | None | 5.82 | - | - | - | - |
| 33B | 8192 | FT | 5.88 | 5.99 | 6.21 | - | - |
| 33B | 8192 | PI | 5.82 | 5.69 | 5.71 | - | - |
| 33B | 16384 | PI | 5.87 | 5.74 | 5.67 | 5.68 | - |
| 65B | 2048 | None | 5.49 | - | - | - | - |
| 65B | 8192 | PI | 5.42 | 5.32 | 5.37 | - | - |

Table 1: Evaluation perplexity on PG19 dataset (Rae et al., | 2306.15595#26 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 26 | src/algebra/quaternion.lean
lemma conj_mul : (a * b).conj = b.conj * a.conj := begin ext; simp; ring_exp end
lemma conj_conj_mul : (a.conj * b).conj = b.conj * a := begin rw [conj_mul, conj_conj] end
lemma conj_mul_conj : (a * b.conj).conj = b * a.conj := begin rw [conj_mul, conj_conj] end
Figure 3: Similar theorems/proofs are common. If splitting them randomly into training/testing, the model can prove testing theorems by memorization. | 2306.15626#26 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 27 | Table 2: Comparing different position representations. We implement Shikra-7B in two different representation forms and train two toy models solely on RefCOCO, RefCOCO+/g, and Visual Genome for controllable comparison. Vocab. means to use extra vocabularies to represent coordinates, like (Chen et al., 2021; Wang et al., 2022b), and Numerical means to directly use numerals in natural language to express coordinates.

| Dataset | Split | Vocab. | Numerical |
|---|---|---|---|
| RefCOCO | val | 81.03 | 81.47 |
| RefCOCO | test-A | 86.94 | 87.40 |
| RefCOCO | test-B | 70.91 | 73.25 |
| RefCOCO+ | val | 72.32 | 74.30 |
| RefCOCO+ | test-A | 81.78 | 83.29 |
| RefCOCO+ | test-B | 59.95 | 63.08 |
| RefCOCOg | val-u | 72.81 | 75.69 |
| RefCOCOg | test-u | 73.78 | 75.52 |

| 2306.15195#27 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 27 | - - - - 65B 8192 PI 5.42 5.32 5.37 - Table 1: Evaluation perplexity on PG19 dataset (Rae et al., 2020). FT: Direct Fine-tuning. PI: Position Interpolation. Model fine-tuned with PI shows progressively lower perplexity with longer context window, showing that PI can leverage long context well, while the perplexity of FT increases over longer window. Note that overall the perplexity is higher compared to Table 2 since PG19 has very different writing styles. | 2306.15595#27 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 27 | Figure 3: Similar theorems/proofs are common. If splitting them randomly into training/testing, the model can prove testing theorems by memorization.
LeanDojo Benchmark has 94,734/2,000/2,000 theorems for training/validation/testing. It features a challenging data split for testing the prover's generalization in more realistic scenarios. Splitting theorems randomly can overestimate the prover's performance, by allowing it to prove many theorems through memorization. In human-written Lean code, a common idiom is to have a block of similar theorems/proofs for slightly different properties of the same math concept. For example, in Fig. 3, the last two theorems not only look similar but have identical proofs. If one of them is in training, the model can easily prove the other one by memorization. This shortcut enables the model to prove seemingly nontrivial theorems, including those requiring premises to prove. | 2306.15626#27 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 28 | performance in Table 1. Using only CoT to train the model (Q→CA) and requiring a reasoning process before the final answer decreases performance compared to the direct answering setting (Q→A). In the Q→CPointA setting, we ask the model to provide CoT along with center points [x_center, y_center] for each mentioned object. Performance improved by 13 points compared to Q→CA and 5.9 points compared to Q→A, indicating that training with positional annotations suppresses visual hallucination. This is a preliminary attempt at GCoT, and it is a promising direction worth exploring.
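To make the Q→CPointA format concrete, a training target of this kind could attach a normalized center point to each object mentioned in the reasoning chain; the serialization below is an assumed illustration, not Shikra's exact format.

```python
def format_gcot_target(chain, final_answer: str) -> str:
    # chain: list of (phrase, (x_center, y_center)) with coordinates normalized to [0, 1].
    steps = [f"{phrase} [{x:.3f},{y:.3f}]" for phrase, (x, y) in chain]
    return " ".join(steps) + f" So the answer is {final_answer}."

print(format_gcot_target(
    [("The small rubber cube", (0.412, 0.637)),
     ("the large metal sphere behind it", (0.655, 0.301))],
    final_answer="yes",
))
```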
# 6.2 Location tokens or just numbers? | 2306.15195#28 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 28 | come from the learning-to-rank objective or from continuous training? Is it necessary to have two margin-based losses? What happens if we train the model only with the rank loss? We present the results of our ablation study in Table 3, which provide the following insights: (1) Removing the rank loss and training the model solely based on the generation loss does not significantly affect the performance. This ob- servation is reasonable since it is equivalent to increasing the training steps of a generative retrieval approach. This result confirms that the learning-to-rank objective is the primary source of performance improvement and validates the effec- tiveness of our proposed method. (2) Removing either Lrank1 or Lrank2 leads to a drop in the performance of LTRGR. On the one hand, having two rank losses allows the model to leverage a larger number of passages and benefits the rank learning. On the other hand, the two rank losses adopt dif- ferent sample mining strategies, ensuring the diversity of the passages in the loss. (3) Removing the generation loss is the only variant underperforming the original MINDER model. During our experiments, we observed that the model tends to fall into local minima and assign smaller scores to all | 2306.15222#28 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 28 | 3.3 MEASURING EFFECTIVE CONTEXT WINDOW SIZE THROUGH PASSKEY RETRIEVAL
We study the effective context window size, i.e., the maximum distance a token can effectively attend to during inference, of our models after extension. To measure this, we follow a synthetic evaluation task of passkey retrieval proposed by Mohtashami & Jaggi (2023). In this task, the models are asked to recover a random passkey hidden in a long document. See Figure 3 for the format of the document.
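Figure 3 itself is not reproduced in this excerpt; the sketch below builds a prompt in the same spirit, with repetitive filler text, a passkey planted a chosen distance from the end, and a final retrieval question. The exact wording of the filler and the passkey sentence is an assumption.

```python
import random

def make_passkey_prompt(n_filler_lines: int, passkey_line: int, passkey: int) -> str:
    # Repetitive filler with a single informative line hidden inside it.
    filler = "The grass is green. The sky is blue. The sun is yellow. Here we go. There and back again."
    lines = [filler] * n_filler_lines
    lines[passkey_line] = f"The pass key is {passkey}. Remember it. {passkey} is the pass key."
    return "\n".join(lines) + "\nWhat is the pass key? The pass key is"

random.seed(0)
prompt = make_passkey_prompt(n_filler_lines=300, passkey_line=30, passkey=random.randint(10000, 99999))
print(prompt.splitlines()[30])  # the hidden passkey line
```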
Given a language model, we estimate the upper and lower bounds of effective context windows as follows. Suppose the random passkey is k tokens away from the end of the input. When a model persistently fails to retrieve the correct passkey value across several independent attempts, it suggests that the effective context window size of the model is less than k. Conversely, if a model consistently succeeds in retrieving the correct passkey value, we deduce that the effective context window size of the model is at least k. | 2306.15595#28 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 28 | To mitigate this issue, besides the random split, we create a challenging data split named novel_premises. It requires testing proofs to use at least one premise that has never been used in training. For example, the last two theorems in Fig. 3 both use the premise conj_mul. If one theorem is in the training set of the novel_premises split, the other one must also be in training.
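A minimal sketch of how the stated requirement can be checked when forming such a split (an illustration of the constraint, not the authors' actual splitting code): every held-out proof must use at least one premise that no training proof uses.

```python
def satisfies_novel_premises(train_theorems, test_theorems, premises_of) -> bool:
    # premises_of(theorem) -> set of premise names appearing in its proof.
    used_in_training = set()
    for thm in train_theorems:
        used_in_training |= premises_of(thm)
    # Each test theorem must rely on at least one premise unseen during training.
    return all(premises_of(thm) - used_in_training for thm in test_theorems)
```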
4 We use the commit 19c869efa56bbb8b500f2724c0b77261edbfa28c released on October 11, 2023. 5 More details, statistics, and visualizations of mathlib can be found at https://leanprover-community.github.io/mathlib_stats.html.
Interacting with Lean. Another important function of LeanDojo is to interact with Lean programmatically. It turns Lean into a gym-like environment [22], in which the prover can observe the proof state, run tactics to change the state, and receive feedback on errors or on proof completion. This environment is indispensable for evaluating/deploying the prover or training it through RL.
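A hypothetical usage sketch of this gym-like loop: observe the initial proof state, run a tactic, and branch on the feedback. The import path, class names, method signatures, and the theorem's qualified name are assumptions modeled on the description above, not a confirmed API.

```python
# Hypothetical sketch; names mirror the paper's description and may differ from the real API.
from lean_dojo import LeanGitRepo, Theorem, Dojo, ProofFinished  # assumed imports

repo = LeanGitRepo("https://github.com/leanprover-community/mathlib", "<commit>")
theorem = Theorem(repo, "src/algebra/quaternion.lean", "quaternion.conj_mul")  # illustrative name

with Dojo(theorem) as (dojo, state):                      # observe the initial proof state
    result = dojo.run_tac(state, "ext; simp; ring_exp")   # run a tactic to change the state
    if isinstance(result, ProofFinished):                 # feedback: the proof is complete
        print("Proved!")
    else:                                                 # feedback: a new state or an error
        print(result)
```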
Below is LeanDojo's main interface for interacting with Lean through tactics. Lean also supports other proof styles not based on tactics. Although we only support tactic-style proofs, they are sufficiently general since any proof can be converted to a tactic-style proof.6 | 2306.15626#28 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 29 | # 6.2 Location tokens or just numbers?
For detect object in autoregressive model, several methods (Chen et al., 2021; Wang et al., 2022b) introduce extra vocabularies (e.g., <bin_0>, · · · , <bin_1000>) to represent coordinates for object detection in spatially discretized images, as de- scribed in Section 2.3. In contrast, Shikra rep- resents coordinates naturally and intuitively, us- ing numbers directly. Which form is better? We train two toy Shikra using two different repre- sentations with REC data, they performance is recorded in Table 2, where using numbers di- rectly achieves better results. Aside from perfor- mance, our simple-designed coordinate numerical representation makes the model more elegant with- out modifying vocabularies for localization tasks. Users can freely control the precision of numerical representation (number of digits after the decimal | 2306.15195#29 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 29 | only variant underperforming the original MINDER model. During our experiments, we observed that the model tends to fall into local minima and assign smaller scores to all pas- sages. This finding suggests the necessity of the generation loss in the learning-to-rank phase. (4) Overall, the current loss function is the best choice for the learning-to-rank phase. We also explore the list-wise rank loss in Section 4.7. | 2306.15222#29 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15626 | 29 | ⢠initialize(theorem): Given the theorem to prove, LeanDojo returns the initial state. A valid state is a string representing current proof goals and local contexts (see the nodes in Fig. 1 Top left). When there are multiple goals, their strings are concatenated.
⢠run_tac(state, tactic): Run a tactic on a given state and return the next state. The returned state will be an error state if the tactic execution is not successful, e.g., due to timeout or inapplicable tactic. If the input state is an error, the result can only be an error.
Building this environment is technically challenging, as Lean is designed for human users, not machines. LeanDojo is the first tool that can interact with Lean reliably. Existing tool [19] is limited: 21.1% of the ground truth proofs are misjudged as incorrect, due to issues with how they construct the proof environment, which distorts the reported performance and produces unreliable feedback when used in reinforcement learning. In contrast, LeanDojo reduces the number of misjudgments to 1.4%. Details are in Appendix A.2.
# 5 ReProver: Retrieval-Augmented Theorem Prover | 2306.15626#29 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 30 | Table 3: Results on standard REC task. Generalist VL model Generalist VL models can directly perform various vision-language tasks, including image captioning, VQA, REC, etc. Specialist models are those speciï¬cally designed for localization tasks (e.g., UNINEXT, Yan et al., 2023 and G-DINO, Liu et al., 2023b), or generalist pretraining models that have undergone multitask localization ï¬netuning (e.g., Yang et al., 2022) or single-task ï¬netuning (e.g., Wang et al., 2022b). We select the three current best performing models (Liu et al., 2023b; Yan et al., 2023; Wang et al., 2023a) as baselines. OFA-L* (Wang et al., 2022b) refers to the OFA-Large checkpoint without ï¬netuning. GRIT refexp is the ablation split (Lu et al., 2022). | 2306.15195#30 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 30 | # In-depth Analysis
Generalization of LTRGR. Our LTRGR builds on the gen- erative retrieval model MINDER and continues to train it using the loss function described in Eq. 5. A natural ques- tion arises: can LTRGR be generalized to other generative retrieval models? To answer this question, we replaced MIN- DER with SEAL as the basic model and performed the same learning-to-rank training. The results, presented in Table 4,
Rank loss Margin loss List-wise loss Natural Questions @5 @20 @100 87.1 80.3 68.8 86.3 78.5 65.4
Table 5: Performance comparison of LTRGR with the margin- based loss and the list-wise loss.
show that the proposed LTRGR framework can also improve the performance of SEAL. Specifically, the hits@5, 20, and 100 metrics improved by 3.6, 1.9, and 0.1 points, respectively. Interestingly, we observed that the improvement on hits@5 was larger than that on hits@100, which may be attributed to the optimization of the top ranking using Lrank1. | 2306.15222#30 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 30 | Model Size Context Window Method Evaluation Context Window Size 8192 2048 4096 16384 32768 7B 7B 2048 8192 None FT 2.77 2.85 - 2.74 - 2.73 - - - - 7B 7B 7B 8192 16384 32768 PI PI PI 2.79 2.79 2.82 2.57 2.57 2.59 2.39 2.37 2.39 - 2.25 2.24 - - 2.48 13B 13B 2048 8192 None FT 2.66 2.71 - 2.56 - 2.50 - - - - 13B 13B 13B 8192 16384 32768 PI PI PI 2.67 2.68 2.68 2.47 2.47 2.46 2.30 2.29 2.28 - 2.18 2.15 - - 2.35 33B 33B 2048 8192 None FT 2.49 2.56 - 2.48 - 2.47 - - - - 33B 33B 8192 16384 PI PI 2.50 2.53 2.32 2.34 2.18 2.18 - 2.07 - - 65B 2048 None 2.42 - - - - 65B 8192 PI 2.43 2.26 2.12 - Table 2: Evaluation perplexity on Arxiv Math Proof-pile dataset | 2306.15595#30 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 30 | # 5 ReProver: Retrieval-Augmented Theorem Prover
We develop the ReProver model that uses retrieval to select premises explicitly. At its core is a retrieval-augmented tactic generator (Fig. 1 Bottom). Given the current proof state, it retrieves a handful of potentially useful premises and generates a tactic conditioning on the concatenation of the state and retrieved premises. When proving theorems, the model generates multiple tactic candidates at each step, which are used in a standard best-first search algorithm to find proofs [16, 18, 19, 28].
Premise Retrieval. Our retriever is based on Dense Passage Retriever [26]. Given a state s as the query and a library of candidate premises P = {pi}N i=1, it retrieves a ranked list of m premises {pâ² i=1 from P. In DPR, s and pi are both raw texts but are embedded in a vector space, and we retrieve the top m premises maximizing the cosine similarity between the state and the premise. | 2306.15626#30 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 31 | Model type Model val RefCOCO test-A test-B val RefCOCO+ RefCOCOg test-u test-A test-B val-u GRIT refexp Generalist VL SOTAs (w/o ï¬netuning) GPV-2 OFA-L* Uniï¬ed-IO OFASys VisionLLM-H Shikra-7B Shikra-13B - 79.96 - - - 87.01 87.83 - 83.67 - 80.10 86.70 90.61 91.11 - 76.39 - - - 80.24 81.81 - 68.29 - - - 81.60 82.89 - 76.00 - - - 87.36 87.79 - 61.75 - - - 72.12 74.41 - 67.57 - - - 82.27 82.64 - 67.58 - - - 82.19 83.16 51.50 61.70 78.60 - - 69.34 69.03 Specialist SOTAs (Specialist/Finetuned) G-DINO-L UNINEXT-H ONE-PEACE 90.56 92.64 92.58 93.19 94.33 94.18 88.24 91.46 89.26 82.75 85.24 88.77 88.95 89.63 92.21 75.92 79.79 | 2306.15195#31 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 31 | List-wise loss. To facilitate generative retrieval learning to rank, we adopt a margin-based loss as the rank loss. By doing so, LTRGR effectively connects generative retrieval with the learning-to-rank paradigm, allowing for various types of rank loss to be applied. To examine the impact of different rank losses, we substitute the original margin-based loss with a list-wise loss known as infoNCE, which is formulated as follows:
eS (@Pp) es(GPp) + > es(GPn)* Lrank = âlog (6)
We randomly selected 19 negative passages from the passage rank list P and presented the results in Table 5. It was ob- served that LTRGR with the infoNCE loss performed worse than the model with the margin-based loss. There are two potential reasons: Firstly, we only trained the model for one epoch due to the increased training cost, which may have resulted in insufficient training. Secondly, the passage scores were not normalized, making them difficult to optimize. The results also indicate that more suitable list-wise learning methods should be developed in generative retrieval. | 2306.15222#31 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15626 | 31 | More formally, we have a function f parameterized by θ for embedding both the state and the premises into a h-dimensional vector space: f (s, θ), f (pi, θ) â Rh. We retrieve premises maximizing f (s, θ)T f (pi, θ)/(â¥f (s, θ)â¥2â¥f (pi, θ)â¥2). We choose f to be a Transformer encoder [2] followed by average pooling: f (·, θ) = AvgPool(Enc(·, θ)).
The retrieval is efficient. The premise embeddings f (pi, θ) can be pre-computed, and we only need one forward pass to compute f (s, θ). We do not rerank the retrieved premises as in Mag- nushammer [49], which is more costly since it requires a separate forward pass for each retrieved premise. | 2306.15626#31 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15222 | 32 | Inference speed. LTRGR simply adds an extra training step to existing generative models, without affecting infer- ence speed. The speed of inference is determined by the underlying generative retrieval model and the beam size. We conducted tests on LTRGR using a beam size of 15 on one V100 GPU with 32GB memory. On the NQ test set, LTRGR based on MINDER took approximately 135 minutes to com- plete the inference process, while LTRGR based on SEAL took only 115 minutes. Notably, SEALâs speed is comparable to that of the typical dense retriever, DPR, as reported in the work (Bevilacqua et al. 2022).
Margin analysis. To assess the impact of margin values on retrieval performance, we manually set margin values ranging from 100 to 500 in Eq. 4. The results are summarized in Figure 2(a). Our findings indicate that LTRGR with a margin of 100 performs worse than other variants, suggesting that a minimum margin value is necessary. As the margin value increases from 200 to 500, performance improves slightly but not significantly. While a larger margin can help the model better differentiate between positive and negative passages, it can also make the learning objective hard to reach.
λ analysis. In the loss function described by Equation 5, we use a weight λ to balance the contribution of the gener- ation loss Lgen and the rank loss Lrank. To determine the | 2306.15222#32 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 32 | Model Size Context Window 0 Number of ï¬ne-tuning steps 800 200 400 600 1000 7B 7B 8192 16384 16.10 112.13 7.12 7.05 7.10 6.93 7.02 6.88 6.99 6.84 6.95 6.83
Table 3: Evaluation perplexity on PG19 dataset (Rae et al., 2020) with respect to the number of ï¬ne-tuning steps using Position Interpolation.
8
where kiyax is defined as the maximum k such that, for all kâ < k, the model has a success rate of at least 20% on kâ.
We can see that models extended via Position Interpolation all successfully attain their desired ex- tension objectives in terms of effective context window sizes, indicating by the effective context window size reaching maximum ky,ax = Lâ, after merely fine-tuning for 200 steps, consistently across both 7B and 33B model sizes and up to 32768 context windows. In contrast, LLaMA models that are extended via direct fine-tuning only saw a minimal increase of the effective context win- dow size kmax from 2048 to 2560, even after fine-tuning for more than 10000 steps, with no clear indication of an acceleration in the increase of window size. | 2306.15595#32 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 32 | Similar to DPR, we train the retriever by minimizing a contrastive loss between positive premises and in-batch negative premises. Specifically, suppose we have a batch of b states. For each state, we sample a positive premise from the ground truth and n negative premises from P.7 They are called âin-batchâ negatives because they are shared by all states in the batchâEvery state is associated with all b · (n + 1) premises; at least 1 of them is positive. Let lij â {0, 1} denote whether a state-premise pair (si, pj) is positive. We minimize the mean squared loss:
b b-(n+1) « - Hsin) Fp8) £0) = 32 Oe fo Tyee F0s- 2M â
6Another common type of proofs is âterm-style proofsâ. Any term-style proof âXâ can always be converted into an equivalent tactic-style proof âexact Xâ, though such conversion may lead to unidiomatic proofs. 7When training the retriever, we ignore proof states followed by tactics without using any premise.
7 | 2306.15626#32 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 33 | Zhu et al. Hu et al. Lu et al. Lu et al.* Shikra 56.10 72.53 82.75 83.35 85.33
Table 5: Comparing pointQA capabilities on the LookTwice-QA (Mani et al., 2020), where the mod- els are asked to answer question based on the input point/box. Pronoun, Superclass (Super cls.), and Class indicate different levels of referential clarity in the question, e.g., âHow many of these [â
/fruits/apples] <obj>?" We use Shikra-13B and Accuracy (%) for eval- uation.
separator) without retraining vocabularies. How- ever, it also has drawbacks. Compared to using ex- tra vocabularies, numerical representation requires more tokens to represent coordinates, leading to in- creased computational costs when predicting dense objects. In this paper, we still prefer numerical representation, but future research can choose the appropriate method based on their pros and cons.
Type Point Box Model Pronoun Super cls. Class Mani et al. Shikra Mani et al. Shikra 56.5 70.0 60.2 70.3 59.1 70.2 59.8 71.4 62.8 71.8 61.4 72.3
# 6.3 Quantitative results on conventional tasks | 2306.15195#33 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 33 | Query What is prime rate in canada Title: Prime Rate in Canada Target passage (represented by three types of Body: a guideline interest rate used by banks on loans for their most creditworthy, best, or prime clients. The prime rate rises and falls with the ebb and flow of the Canadian economy, influenced significantly by the overnight rate, which is set by the Bank of Canada. Pseudo-queries: what is prime rate for loans || prime rate meaning || what is prime rate in canada || correspinding what is prime rate in canada, 342.22 Th t scores. Tne correct) prime Rate History, 300.95 identifiers that what is the prime rate in canada, 292.57 belong tothe | Canada Prime Rate, 270.51 target passage are | prime Rate, 236.16 colored in purple. | prime Rate is now, 232.79 identifiers) an . . prime rate definition canada || what is the prime interest rate in canada || prime rate definition || what is the prime rate || ...... Method Before learning to rank After learning to rank Predicted what is the current prime rate for canada, 387.91 | what is the prime interest rate in canada, 391.98 identifiers and the | What is the current prime rate in canada, 385.90 what is the current prime rate of interest, 306.94 what | 2306.15222#33 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 33 | Model Size Context Window Method 200 400 Fine-tuning steps 600 800 1000 10000 7B 33B 8192 8192 FT FT 1792 1792 2048 2048 2048 1792 2048 2048 2304 2304 2560 - 7B 7B 7B 33B 33B 8192 16384 32768 8192 16384 PI PI PI PI PI 8192 16384 32768 8192 16384 8192 16384 32768 8192 16384 8192 16384 18432 8192 16384 8192 16384 32768 8192 16384 8192 16384 32768 8192 16384 - - - - Table 4: Effective context window sizes after ï¬ne-tuning. FT: Direct ï¬ne-tuning. PI: Position Interpolation.
There is an important info hidden inside a lot of irrelevant text. it and memorize them. there. The grass is green. There and back again. The pass key is 12345. Remember it. The grass is green. There and back again. What is the pass key?
# Find
go.
12345 is the pass key.
The sky is blue. The sun is yellow.
Here we go.
(repeat Y times) The pass key is
Figure 3: Prompt format for passkey retrieval. We use the exact same prompt as proposed by Mohtashami & Jaggi (2023). Here the passkey 12345 is replaced with a random 5-digit numbers during test. | 2306.15595#33 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 33 | 7
Retrieving from Accessible Premises. We incorporate into DPR two insights tailored to premise selection. First, instead of retrieving from all premises in the math library, we restrict to premises accessible to the current theorem. They include premises defined in the same file before the theorem, as well as those imported from other files. We compute accessible premises for each theorem, relying on LeanDojoâs capability in program analysis (Sec. 4). Focusing on accessible premises makes P much smaller. LeanDojo Benchmark contains 130,262 premises in total, but the average number of accessible premises is only 33,160.
In-file Negative Examples. DPRâs performance depends critically on the quality of negative examples [91, 92]. In early experiments, we sampled all n negative premises randomly, and the model often mistakenly retrieved other premises from the same file as the positive one. Therefore, we propose a scheme that samples k in-file negatives and n â k random negatives for training.
Tactic Generation. As in Fig. 1 (Bottom), retrieved premises are concatenated with the state.8 Then an encoder-decoder Transformer, ByT5 [44], takes them as input and generates the tactic. The model is trained to minimize the cross entropy loss w.r.t. human-written tactics. | 2306.15626#33 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 34 | # 6.3 Quantitative results on conventional tasks
Our Shikra excels in Referential Dialogue, facil- itating seamless integration into a wide range of vision-language (VL) tasks, particularly those re- lated to positioning. Here, we present the quantita- tive results for these tasks.
To demonstrate the positioning capability of our model, we examine the REC task, in which mod- els are ask to ground the object described with an expression. As shown in Table 3, we compare our method with generalist VL models that per- form multiple tasks without ï¬netuning. We also compare our method with Specialist SOTAs, including localization specialist models and gener- alist/foundation models that perform speciï¬c ï¬ne- tunes on localization-related tasks. In this setting, we instruct Shikra to provide the coordinates of the objects referred to by the expression. For an exam- ple, we use âIâd like to know the exact coordinates of <expr> in the photo <image>.â, where <expr> represents the expression and <image> represents the input image. More instructions can be found in Appendix 9. The experimental results demon- strate that Shikra achieves promising performance compared to other generalist models. | 2306.15195#34 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 34 | interest rate in canada, 391.98 identifiers and the | What is the current prime rate in canada, 385.90 what is the current prime rate of interest, 306.94 what is the current prime rate in canada, 391.98 prime rates in canada, 391.98 what is the prime rate for canada, 385.90 what is prime rate in canada, 385.90 what is the current prime rate in canada, 385.90 Prime Rate in Canada, 372.01 what is the prime loan, 337.51 prime rate definition, 286.75 | 2306.15222#34 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 34 | # 3.4 BENCHMARKS ON ORIGINAL CONTEXT WINDOW SIZE
We evaluate the models extended by Position Interpolation on several standard benchmark tasks within the original context window size of 2048. The evaluation results are listed in Table 5. From the results, we saw that models extended to 8192 produce comparable results on the original benchmark, which is designed for a much smaller context window, with a degradation of up to 2% on the benchmark tasks, for both 7B and 33B model sizes. Models extended to longer context windows regressed more on the benchmarks, but still in reasonable ranges for most tasks. We also note that the choice of fine-tuning datasets does not seem to lead to a significant difference in the benchmark performances, which may be due to the limited number of fine-tuning steps used in our method. The regression on benchmark tasks is consistent with our observation on perplexity regression in Section 3.2.
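Position Interpolation itself, as described in this paper's abstract, linearly down-scales the input position indices so that a long input fits back into the original position range instead of extrapolating beyond it. Below is a minimal sketch of that rescaling, assuming LLaMA-style RoPE with head dimension 128 and base 10000; it is an illustration, not the authors' implementation.

```python
import torch

def interpolate_positions(seq_len: int, original_ctx: int = 2048) -> torch.Tensor:
    # Linearly down-scale positions so a longer sequence is squeezed into the
    # original [0, original_ctx) range rather than extrapolated past it.
    scale = min(1.0, original_ctx / seq_len)
    return torch.arange(seq_len, dtype=torch.float32) * scale

def rope_angles(positions: torch.Tensor, head_dim: int = 128, base: float = 10000.0) -> torch.Tensor:
    # Standard RoPE frequencies evaluated at the (possibly fractional) positions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    return torch.outer(positions, inv_freq)  # shape: (seq_len, head_dim // 2)

# Example: an 8192-token input mapped back into the 2048-position range.
angles = rope_angles(interpolate_positions(8192, original_ctx=2048))
```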
3.5 LONG DOCUMENT SUMMARIZATION
In this task, we evaluate our models' performance on the long document summarization task. In particular, we consider the GovReport (Huang et al., 2021) dataset, which contains 17457 documents for training and 972 documents for evaluation. Each document comes with a human-generated summary. We truncate all input documents to their first 15000 tokens. | 2306.15595#34 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 34 | Training ReProver takes substantially less compute than prior methods (120 GPU hours vs. more than 1000 hours [16, 17]). All existing LLM-based provers pretrain on datasets specific to math and coding [14â20]. The pretraining is computationally expensive, and the datasets are kept private. In contrast, we choose to avoid domain-specific pretraining and build upon google/byt5-smallâa model checkpoint that is generic, publicly available, and relatively small (299M parameters vs. 837M [16] or 600M [17]). We could see further benefits from domain-specific pretraining, as in Minerva [57], or stronger LLMs like LLaMA [93] or StarCoder [94], but that is beyond our scope. In addition, our model is finetuned on human-written tactics only, without auxiliary data [16] or data collected through online interaction with Lean [17, 19]. These orthogonal directions are valuable but will significantly increase the methodâs complexity and compute requirements.
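For context, here is a minimal sketch of loading the public google/byt5-small checkpoint mentioned above and generating a tactic from a proof state. The state string, prompt format, and generation settings are illustrative assumptions, not ReProver's exact recipe, and an unfinetuned checkpoint will not emit meaningful tactics.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# A Lean proof state as plain text; ByT5 operates directly on bytes.
state = "n : nat\n|- gcd n n = n"
inputs = tokenizer(state, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```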
# 6 Experiments
We evaluate ReProver on LeanDojo Benchmark. It outperforms baselines on premise selection and theorem proving, demonstrating the promise of theorem proving with retrieval-augmented language models. Experimental details and hyperparameters are in Appendix C.1. | 2306.15626#34 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
Correspondingly, to quantitatively evaluate our model's understanding of position inputs, we evaluated our model on two types of PointQA datasets,
Table 6: Comparing generalist models on VQA and Image Captioning. For VQA, we evaluate SOTA generalist models and our Shikra-13B onVQAv2 (Antol et al., 2015) and OK-VQA (Marino et al., 2019) following the normalization rules. Here, we also provide VQAv2val (83.3) and OK-VQA (53.8) results on LVLM-eHub toolbox (Xu et al., 2023) for easy comparison. For Image Captioning, we evaluate them on COCO (Chen et al., 2015) and Flickr30k (Plummer et al., 2015) in CIDEr. We call Flamingo (Alayrac et al., 2022) FM for short. | 2306.15195#35 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 35 | Table 6: Case study on the MSMARCO dataset of the generative retrieval before and after learning to rank. The correctly predicted identifiers that belong to the target passage are colored in purple.
[Figure 3 plot: y-axis "# Positive Passages", x-axis "Ranking position", legend "Before LTR" vs. "After LTR"; the performance gap between the two is concentrated among the top-ranking positions.]
We used generative retrieval models before and after the learning-to-rank training to retrieve the top 100 passages from the MSMARCO dataset. We then counted the number of positive passages in each rank position in the retrieval list. By analyzing the results, we found that the performance improvement after the learning-to-rank training mainly comes from the top positions. LTRGR seems to push the positive passages to top-rank positions in the passage rank list. This vividly reflects the function of the rank loss Lrank, which brings a better passage rank order to the list.
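A small sketch of the bookkeeping behind this analysis (function and variable names are ours, not from the paper): count, over all queries, how many positive passages land at each rank position of the top-100 list.

```python
from collections import Counter

def positives_per_position(rank_lists, relevant_sets, depth=100):
    # rank_lists[q]: ordered passage ids retrieved for query q
    # relevant_sets[q]: set of positive passage ids for query q
    counts = Counter()
    for ranked, positives in zip(rank_lists, relevant_sets):
        for position, pid in enumerate(ranked[:depth], start=1):
            if pid in positives:
                counts[position] += 1
    return counts  # e.g. counts[1] = number of positives ranked first

# Comparing positives_per_position(before_ltr, gold) with
# positives_per_position(after_ltr, gold) reproduces the Figure 3 comparison.
```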
Figure 3: The distribution of the number of retrieved positive passages is plotted against the ranking position on the MSMARCO dataset. The labels "Before LTR" and "After LTR" represent the generative model without and with learning-to-rank training, respectively. | 2306.15222#35 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
We fine-tune the LLaMA models extended with Position Interpolation with a context window of 16384. Note that the rescaling of position indices is still required during this fine-tuning step. We first
7B 2048 None 76.1 78.9 55.7 42.2 69.6
7B 8192 Pile 73.2 78.2 53.8 41.7 69.0
7B 16384 Pile 69.8 77.6 53.3 40.9 67.8
7B 32768 Pile 64.7 77.2 50.1 39.6 66.9
7B 8192 RedPajama 75.5 77.4 54.5 41.5 68.1
33B 2048 None 81.6 80.2 61.1 45.9 76.2
33B 8192 Pile 80.2 80.7 60.2 45.7 75.9
Table 5: Zero-shot performance on a subset of LLaMA Benchmarks. Models extended by Position Interpolation achieve comparable performance to the original models, except for the BoolQ dataset, which may require models to pay close attention to word ordering in a short reference paragraph. | 2306.15595#35 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 35 | Premise Selection. For premise selection, we only use tactics in LeanDojo Benchmark that have at least one premise. The model, based on a ByT5 encoder, uses the state before a tactic as the query to retrieve 100 premises. Then, we calculate standard metrics in information retrieval: R@k (recall for the top k retrieved premises) and MRR (mean reciprocal rank).
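The two reported metrics can be computed as follows; these are the standard definitions, sketched here for reference rather than taken from the paper's evaluation code.

```python
def recall_at_k(retrieved, relevant, k):
    # Fraction of the ground-truth premises found in the top-k retrieved list.
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def mean_reciprocal_rank(all_retrieved, all_relevant):
    # Average over queries of 1 / (rank of the first relevant premise), 0 if none retrieved.
    total = 0.0
    for retrieved, relevant in zip(all_retrieved, all_relevant):
        rr = 0.0
        for rank, premise in enumerate(retrieved, start=1):
            if premise in relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(all_retrieved)
```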
Our first baseline is a classical BM25 retriever [95] without machine learning. Results in Table 1 show that our method outperforms BM25 significantly across the board. However, it exhibits a large performance degradation on the challenging data split (comparing novel_premises to random). This is consistent with the general observation that machine learning can be brittle in the presence of distribution shifts. In addition, we compare with two ablations: one retrieving from all premises (instead of accessible premises only) and the other without in-file negatives. They perform worse than our method, demonstrating the effectiveness of our two improvements upon DPR. | 2306.15626#35 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 36 | Datasets VQAv2val 75.33 VQAv2dev 77.36 VQAv2std 77.51 OK-VQA 47.16 - 56.3 - 50.6 - 51.8 - 44.7 - 51.0 - - 65.2 65.0 - 45.9 - 77.9 - 54.0 65.2 - - 45.0 - - - - Flickr30k COCO 73.9 117.5 67.2 84.3 61.5 79.4 67.1 84.7 - - - 122.3 - - - 114.2
Table 7: Object hallucination benchmark using the POPE evaluation pipeline (Li et al., 2023c). Accuracy denotes the accuracy of predictions. Precision signifies the true positive samples among the predicted positives. Recall indicates the correct identification of all true positive samples. "Yes" represents the probability of the model outputting a positive answer. Except for Shikra-7B, the other results are obtained from Li et al., 2023c. | 2306.15195#36 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 36 | optimal weight values, we conducted a tuning experiment with different λ values, and the results are summarized in Figure 2(b). Our analysis yielded the following insights: 1) Setting the weight to 0 leads to a significant performance gap, which confirms the importance of the generation loss, as discussed in Section 4.6. 2) Varying the weight value from 500 to 200 has little effect on the performance in terms of hits@100, but the performance gradually decreases for hits@5 and hits@20 as the weight of the generation loss in- creases. This suggests that a higher weight of the generation loss can interfere with the function of the rank loss, which typically affects the top-ranking results such as hits@5 and hits@20.
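To make the role of this weight concrete, the sketch below shows one illustrative form of the combined objective: a margin-based rank loss over passage scores plus a weighted generation loss. The margin value and the exact pairing of positive and negative passages are our assumptions, not the paper's exact formulation; the weight of 500 is simply one of the values swept above.

```python
import torch.nn.functional as F

def ltrgr_objective(pos_scores, neg_scores, generation_loss, margin=1.0, lam=500.0):
    # pos_scores / neg_scores: model scores of positive / negative passages drawn
    # from the model's own retrieved rank list; lam weights the generation loss.
    rank_loss = F.relu(margin - (pos_scores - neg_scores)).mean()
    return rank_loss + lam * generation_loss
```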
Effectiveness Analysis of Learning to Rank. To better illustrate how LTRGR works and what causes the performance improvement, we performed a quantitative analysis and a qualitative analysis (case study).
Quantitative analysis. We plotted the distribution of positive passages against their ranking positions in Figure 3(a). | 2306.15222#36 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 36 | Model Evaluation Score Model Context Window ROUGE-1 ROUGE-2 ROUGE-L CoLT5 Base (Ainslie et al., 2023) CoLT5 XL (Ainslie et al., 2023) 16K 16K 58.7 61.3 29.6 32.2 31.4 33.8 LLaMA-7B Extended 16K 60.0 28.0 29.5
# Table 6: ROUGE Score on GovReport Dataset.
format the raw document using the prompt template in Figure 4, and then concatenate the prompt with the ground-truth summary (truncate to 1000 tokens) associated with each document. We fine-tune the model using the next token prediction task with the above setup for 10 epochs. The losses from the input prompt proportion of training examples are excluded during our fine-tuning.
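A minimal sketch of how such a training example can be assembled with the prompt tokens masked out of the loss; the -100 ignore index is the usual PyTorch/Hugging Face convention, and the tokenizer details are assumptions rather than taken from the paper.

```python
def build_training_example(tokenizer, prompt: str, summary: str, max_summary_tokens: int = 1000):
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    summary_ids = tokenizer(summary, add_special_tokens=False)["input_ids"][:max_summary_tokens]
    input_ids = prompt_ids + summary_ids
    # Next-token prediction runs over the whole sequence, but the loss is only
    # taken on the summary portion: prompt positions are masked with -100.
    labels = [-100] * len(prompt_ids) + summary_ids
    return {"input_ids": input_ids, "labels": labels}
```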
We use a generation temperature of 0.5 and top_p = 0.95 as our inference parameters to generate a summarization of each document in the test set. The final output is truncated at 1000 tokens. We used the ROUGE-1/ROUGE-2/ROUGE-L scores (Lin, 2004) as the evaluation metrics to evaluate the models' outputs vs. the ground-truth summaries. | 2306.15595#36 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 36 | Theorem Proving Experimental Setup. Then we evaluate ReProver on theorem proving. The training has two stages: First, we train the retriever and use it to retrieve 100 premises for all proof states in LeanDojo Benchmark. Second, we train the tactic generator, taking as input the concatenation of the state and retrieved premises (truncated to a length limit). During evaluation, the tactic generator is combined with best-first search to prove theorems. We evaluate the Pass@1 metric: The prover is given only one attempt and must find the proof within a wall time limit of 10 minutes. Training takes five days on a single NVIDIA A100 GPU with 80GB memory, and evaluation takes two days on eight V100 GPUs. Please see Appendix C.1 for details.
Baselines. Following prior work [16, 28], we include tidy as a baseline. It is a tactic in mathlib that tries to complete the proof using heuristics (without machine learning). We apply tidy directly
[Footnote 8] We retrieve 100 premises, concatenate them with the state, and truncate the concatenation to a fixed length.
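A sketch of that concatenation step follows; the separator text and the token budget are placeholders, not ReProver's exact input format.

```python
def build_generator_input(tokenizer, state: str, premises: list[str], max_length: int = 2300):
    # Proof state first, then the retrieved premises, truncated to a fixed length.
    context = state + "\n\n" + "\n".join(premises)
    return tokenizer(context, truncation=True, max_length=max_length)["input_ids"]
```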
| 2306.15626#36 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 37 | Datasets Metrics Shikra Random Accuracy (â) Precision (â) Recall (â) F1-Score (â) Yes 86.90 94.40 79.27 86.19 43.26 88.57 84.09 95.13 89.27 56.57 79.67 78.24 82.20 80.17 52.53 50.37 50.19 99.13 66.64 98.77 50.10 50.05 100.00 66.71 99.90 53.97 52.07 99.60 68.39 95.63 Popular Accuracy (â) Precision (â) Recall (â) F1-Score (â) Yes 83.97 87.55 79.20 83.16 45.23 82.77 76.27 95.13 84.66 62.37 69.73 65.86 81.93 73.02 62.20 49.87 49.93 99.27 66.44 99.40 50.00 50.00 100.00 66.67 100.00 50.90 50.46 99.40 66.94 98.57 Adversarial Accuracy (â) Precision (â) Recall (â) F1-Score (â) Yes 83.10 85.60 79.60 82.49 46.50 72.10 65.13 95.13 77.32 73.03 65.17 61.19 | 2306.15195#37 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 37 | Quantitative analysis. We plotted the distribution of posi- tive passages against their ranking positions in Figure 3(a).
Case Study. To qualitatively illustrate the efficacy of the LTRGR framework, we analyzed the prediction results on MSMARCO in Table 6. It is observed that the number of correctly predicted identifiers increases after the learning-to-rank training phase. Besides, for the same predicted identifier, such as "what is prime rate in Canada" in the case, its corresponding score also increases after the learning-to-rank training. This clearly illustrates the effectiveness of the proposed learning-to-rank framework in generative retrieval, which enhances the autoregressive model to predict more correct identifiers with higher corresponding scores. | 2306.15222#37 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 37 | In Table 6 we report our evaluation results. We have also included results from two baselines in existing SCROLLS Leaderboard (Shaham et al., 2022; Ainslie et al., 2023). In general, we have obtained competitive R1 score among other models with minimal tuning of hyper-parameters. This result suggests our models with 16384 context window can effectively handle the long document summarization task.
Read the following article and then summarize it. # .... Document goes here Now summarize the above article. Summary:
Figure 4: Input format for long doc summarization.
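The Figure 4 template amounts to a simple wrapper around the (truncated) document; a one-function sketch, with the wording taken from the template text above:

```python
def summarization_prompt(document: str) -> str:
    # Instruction, then the document, then the request for a summary.
    return (
        "Read the following article and then summarize it.\n"
        + document.strip() + "\n"
        + "Now summarize the above article. Summary:"
    )
```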
# 4 RELATED WORK
Retrieval-augmented LLM. One line of work extends LLMs by augmenting them with retrieval modules which fetch related documents and include the retrieval results in the input context of an LLM (Karpukhin et al., 2020; Guu et al., 2020; Izacard et al., 2022; Jiang et al., 2022; Khattab et al., 2021; Santhanam et al., 2022). Our work is complementary to these works as our extended context window allows more documents to be included in the input. In addition, with an unmodified attention mechanism and model architecture, our method may be more versatile as it can natively handle tasks beyond retrieval-oriented ones, such as long document summarization, few-shot learning, etc.
| 2306.15595#37 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 37 | 8We retrieve 100 premises, concatenate them with the state, and truncate the concatenation to a fixed length.
8
Table 1: Premise selection testing performance. For each method, we train and evaluate two models independently using different data splits (random and novel_premises; see Sec. 4). R@k is the recall for the top k retrieved premises, and MRR is the mean reciprocal rank metric (higher is better). Our retriever outperforms BM25 and ablations. Results for Lean 4 are in Appendix D.
Method random novel_premises R@1 R@10 MRR R@1 R@10 MRR BM25 w/ all premises Ours w/ all premises w/o in-file negatives 6.7 1.9 13.5 11.7 10.8 17.2 11.9 38.4 36.2 33.1 0.15 0.08 0.31 0.27 0.25 5.9 2.1 9.1 7.1 7.9 15.5 12.4 27.6 23.1 25.7 0.14 0.08 0.24 0.20 0.22 | 2306.15626#37 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15222 | 38 | Conclusion In this study, we introduce LTRGR, a novel framework that enhances current generative systems by enabling them to learn to rank passages. LTRGR requires only an additional training step via a passage rank loss and does not impose any additional burden on the inference stage. Importantly, LTRGR bridges the generative retrieval paradigm and the classical learning-to-rank paradigm, providing ample oppor- tunities for further research in this field. Our experiments demonstrate that LTRGR outperforms other generative re- trieval methods on three commonly used datasets. Moving forward, we anticipate that further research that deeply inte- grates these two paradigms will continue to advance genera- tive retrieval in this direction.
Acknowledgments The work described in this paper was supported by Re- search Grants Council of Hong Kong (PolyU/5210919, PolyU/15207821, and PolyU/15207122), National Natural Science Foundation of China (62076212) and PolyU internal grants (ZVQ0). | 2306.15222#38 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 38 | 10
Recurrent Transformers and Memory Transformers. Several works add memory capabilities to Transformers through recurrence, which increases the models' capability of handling very long sequences (Bulatov et al., 2022; Wu et al., 2020; Dai et al., 2019; Wu et al., 2022; Martins et al., 2021; Mu et al., 2023). One limitation of these works is that they only allow attending to a lossy compressed version of past inputs. Mu et al. (2023) suggested that this may prevent models from remembering specific details in the past inputs. In contrast, our work allows attending to all previous tokens, preserving all details without compression, albeit with higher inference costs. Mohtashami & Jaggi (2023) proposed landmark attention which allows full random access to any chunk of the input through introducing landmark tokens. Our work allows full access to the entire input through unmodified attention, which may be useful for tasks such as summarization. | 2306.15595#38 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 38 | to the original theorem and see if it can succeed within the wall time limit. Another baseline uses GPT-4 as the tactic generator. Given a state, it queries GPT-4 to generate 35 tactics in zero-shot. After removing invalid ones, the remaining tactics are combined with best-first search to find proofs. Data contamination is possible: Many proofs had been publicly available on GitHub before GPT-4âs data cutoff date (September 2021). See Appendix C.2 for details.
Unfortunately, it is not feasible to compare with existing LLM-based provers in Lean [16, 17, 19]. None of them are open-source or can be reproduced with reasonable effort. Furthermore, we cannot compare directly with the numbers reported in their papers, due to differences in data, infrastructure, and training procedures (details in Appendix C.3). Many difficulties are due to the private nature of existing methods. By releasing our code and models, we hope to create accessible baselines for future work to build upon.
Table 2: Theorem proving Pass@1 (%) on the testing data of LeanDojo Benchmark. Our ReProver model outperforms tidy, GPT-4, and a baseline that generates tactics directly without retrieval. Results for Lean 4 are in Appendix D.
Method tidy GPT-4 ReProver (ours) w/o retrieval random 23.8 29.0 51.2 47.6 | 2306.15626#38 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 39 | LookTwice-QA of (Mani et al., 2020) and Vi- sual7W (PointQA Setting) of (Zhu et al., 2016). LookTwice-QA asks models to answer questions about the region speciï¬ed by the user, either by center point or box, with the distinction that these questions necessitate comprehending the user- designated area ï¬rst, and then observing the en- tire image to answer. For instance, âHow many of these [Pronoun/Superclass/Class] <obj>?â, where <obj> denotes the coordinates of input point or box and [Pronoun/Superclass/Class] represents lan- guage instructions with different clarity levels (e.g., [â
/fruits/apples]). Visual7W also provides a setting for point QA, where models are given a question and four box options, and should choose one as the answer. Our Shikra achieves the SOTA performance in all these settings. | 2306.15195#39 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
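The Shikra record above describes PointQA-style prompts in which a user-specified point or box is written out as plain-text coordinates inside the question. The sketch below only illustrates that convention, not Shikra's actual preprocessing code; the function names, the <obj> placeholder handling, and the three-decimal normalization are assumptions made for the example.

```python
def normalize_box(box, width, height, precision=3):
    # Map a pixel-space (x1, y1, x2, y2) box into [0, 1] coordinates.
    x1, y1, x2, y2 = box
    return [round(x1 / width, precision), round(y1 / height, precision),
            round(x2 / width, precision), round(y2 / height, precision)]


def format_point_question(question, region, width, height):
    # Splice a user-indicated region, spelled out as plain-text coordinates,
    # into a question template at the <obj> placeholder.
    if len(region) == 2:          # a point (x, y)
        x, y = region
        coords = [round(x / width, 3), round(y / height, 3)]
    else:                         # a box (x1, y1, x2, y2)
        coords = normalize_box(region, width, height)
    coord_text = "[" + ",".join(f"{c:.3f}" for c in coords) + "]"
    return question.replace("<obj>", coord_text)


# Example: a 640x480 image where the user points near the right-center.
print(format_point_question("How many of these fruits <obj>?", (500, 240), 640, 480))
# -> How many of these fruits [0.781,0.500]?
```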
2306.15222 | 39 | References Bevilacqua, M.; Ottaviano, G.; Lewis, P.; Yih, W.-t.; Riedel, S.; and Petroni, F. 2022. Autoregressive search engines: Generating substrings as document identifiers. arXiv preprint arXiv:2204.10628. Burges, C.; Shaked, T.; Renshaw, E.; Lazier, A.; Deeds, M.; Hamilton, N.; and Hullender, G. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, 89–96. Cao, Z.; Qin, T.; Liu, T.-Y.; Tsai, M.-F.; and Li, H. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning, 129–136. Chang, W.-C.; Felix, X. Y.; Chang, Y.-W.; Yang, Y.; and Kumar, S. 2019. Pre-training Tasks for Embedding-based Large-scale Retrieval. In International Conference on Learning Representations. Chen, D.; Fisch, A.; Weston, J.; and Bordes, A. 2017. Reading Wikipedia to Answer Open-Domain | 2306.15222#39 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
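The LTRGR record above (and its abstract) describes adding a rank loss over passage scores to the usual identifier-generation loss. Below is a minimal PyTorch sketch of that idea, assuming a margin-based pairwise formulation and passage scores derived from identifier log-probabilities; both are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F


def margin_rank_loss(pos_scores, neg_scores, margin=1.0):
    # Pairwise margin loss: relevant passages should outscore negatives.
    return F.relu(margin - pos_scores + neg_scores).mean()


def learning_to_rank_step(gen_loss, pos_scores, neg_scores, rank_weight=1.0):
    # Combine the identifier-generation (language modeling) loss with a rank
    # loss on passage-level scores, pushing the model toward the final
    # passage ranking rather than identifier likelihood alone.
    return gen_loss + rank_weight * margin_rank_loss(pos_scores, neg_scores)


# Toy usage: passage scores could be, e.g., summed log-probabilities of each
# passage's identifiers under the autoregressive model.
gen_loss = torch.tensor(2.3)
pos = torch.tensor([-1.2, -0.8])   # scores of relevant passages
neg = torch.tensor([-0.9, -1.5])   # scores of sampled negatives
print(learning_to_rank_step(gen_loss, pos, neg))  # tensor(3.1000)
```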
2306.15595 | 39 | Approximated Multi-head Attention. There is a large body of research that focuses on decreasing the memory and computational complexity of the multi-head attention (MHA) mechanism through approximation or sparsification (Child et al., 2019; Zaheer et al., 2020; Beltagy et al., 2020; Wang et al., 2020; Choromanski et al., 2021; Kitaev et al., 2020; Ren et al., 2021). Although not the focus of this work, as these methods are not used in LLaMA (Touvron et al., 2023), we note that our method is compatible with most of them since our changes are restricted to position encodings, and not attention mechanisms. | 2306.15595#39 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 39 | Method (random split): tidy 23.8, GPT-4 29.0, ReProver (ours) 51.2, w/o retrieval 47.6
Results. Table 2 shows the results on the testing data of LeanDojo Benchmark. ReProver outperforms all baselines on two different data splits, demonstrating the effectiveness of retrieval-augmented theorem proving. GPT-4 performs substantially worse than our method, even though it may have seen the ground truth proofs due to data contamination. The task cannot be solved out of the box by state-of-the-art LLMs, calling for algorithmic innovations to make further progress.
Testing theorems in novel_premises are indeed much more challenging. All methods in Table 2 perform substantially worse on novel_premises than the random split. We argue that performance on challenging splits is more indicative of the prover's capability and should be emphasized in the future development of theorem proving. | 2306.15626#39 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
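The LeanDojo record above attributes ReProver's gains to retrieval over the premises accessible from the current proof state. The sketch below illustrates the generic dense-retrieval step (encode the state and candidate premises, rank by similarity, keep the top-k); the hash-based embed function is a stand-in for a trained encoder and is an assumption for the example, not ReProver's model.

```python
import numpy as np


def embed(texts, dim=64):
    # Stand-in encoder: hashed bag-of-tokens vectors, L2-normalized.
    # A real system would use a trained text encoder here.
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.split():
            out[i, hash(tok) % dim] += 1.0
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-8)


def retrieve_premises(state, accessible_premises, k=2):
    # Rank the premises accessible from the current proof state by cosine
    # similarity to the state and return the top-k for prompt augmentation.
    state_vec = embed([state])[0]
    premise_vecs = embed(accessible_premises)
    scores = premise_vecs @ state_vec
    top = np.argsort(-scores)[:k]
    return [accessible_premises[i] for i in top]


state = "⊢ gcd n n = n"
premises = [
    "theorem gcd_self (n : nat) : gcd n n = n",
    "theorem add_comm (a b : nat) : a + b = b + a",
    "theorem gcd_zero_left (n : nat) : gcd 0 n = n",
]
print(retrieve_premises(state, premises, k=2))
```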
2306.15195 | 40 | Additionally, we assess our model on conventional VL tasks in Table 6, such as VQA and Image Captioning, which do not necessitate coordinates in their input or output. The experimental results show that we achieved promising results on most datasets. We also evaluated the performance of our method in the POPE evaluation pipeline (Li et al., 2023c), and the results are recorded in Table 7. Our method achieves results comparable to InstructBLIP (Dai et al., 2023) and far surpasses recent popular MLLMs. It's worth noting that these task configurations are just some subsets of Referential Dialogue. We hope readers can appreciate the more intriguing capabilities of Shikra in Figure 2 and Appendix C.
# 7 Limitations
Shikra only supports English and is not user-friendly for non-English speakers. Making Shikra multilingual in the future is valuable. Shikra is unsuitable for dense object detection and segmentation tasks. Exploring improved coordinate representations for these tasks is also interesting. Shikra, like most LLMs, may produce harmful and counterfactual responses.
# 8 Conclusion | 2306.15195#40 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 40 | on Learning Representations. Chen, D.; Fisch, A.; Weston, J.; and Bordes, A. 2017. Reading Wikipedia to Answer Open-Domain Questions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 1870–1879. Cossock, D.; and Zhang, T. 2006. Subset ranking using regression. In Learning Theory: 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006. Proceedings 19, 605–619. Springer. Crammer, K.; and Singer, Y. 2001. Pranking with ranking. Advances in neural information processing systems, 14. De Cao, N.; Izacard, G.; Riedel, S.; and Petroni, F. 2020. Autoregressive Entity Retrieval. In International Conference on Learning Representations. Ferragina, P.; and Manzini, G. 2000. Opportunistic data structures with applications. In Proceedings 41st Annual Symposium on Foundations of Computer Science, 390–398. Freund, Y.; Iyer, R.; Schapire, R. E.; and Singer, Y. 2003. An efficient boosting algorithm for combining | 2306.15222#40 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 40 | Length Extrapolation. A recent line of research aims to train Transformer models on short sequences and run inference on longer ones (Press et al., 2022; Sun et al., 2022; Haviv et al., 2022). However, these methods have not been applied in some of the largest language models such as LLaMA (Touvron et al., 2023) or OPT (Zhang et al., 2022). This has prevented them from enabling length extrapolation of many pre-existing pre-trained language models. Our work focuses on extending existing LLMs, which can save substantial pre-training costs. In addition, our method preserves the quality of the original models, even for small context window tasks, since it does not deviate far from existing definitions of position encoding or attention mechanisms. | 2306.15595#40 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 40 | Evaluation on MiniF2F and ProofNet. We run ReProver to prove theorems in MiniF2F [28] and ProofNet [29]. These two datasets are for testing only and do not have training theorems, which makes them challenging since the distribution of theorems is quite different from mathlib used to train ReProver. MiniF2F focuses on math olympiads, and ProofNet focuses on exercises in undergraduate math textbooks. On MiniF2F's test set in Lean, ReProver achieves a Pass@1 of 26.5%, which is competitive with state-of-the-art methods without RL (25.9% in Polu et al. [19]). On ProofNet, our Pass@1 is 13.8%, which is the first reported theorem proving result on this dataset. Further, many theorems do not have ground truth proofs in Lean. Our prover discovers 33 proofs in MiniF2F and 39 proofs in ProofNet that currently do not have Lean proofs. Please see Appendix C.4 for details, examples, and caveats.
# 7 Conclusion | 2306.15626#40 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 41 | # 8 Conclusion
Our study unveiled the critical gap in MLLMs' ability to understand and engage in referential dialogue, an integral aspect of human communication. To address this, we introduced Shikra, a unified, straightforward model designed to comprehend and output spatial coordinates in natural language. Our approach does not necessitate extra vocabularies, position encoders, or external plug-ins, preserving the model's simplicity. We showed that Shikra performs notably well on a variety of conventional vision-language tasks, while offering swathes of exciting applications such as aiding AI assistants in Mixed Reality headsets or facilitating precise communication in online shopping scenarios.
# References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736. | 2306.15195#41 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 41 | 390–398. Freund, Y.; Iyer, R.; Schapire, R. E.; and Singer, Y. 2003. An efficient boosting algorithm for combining preferences. Journal of machine learning research, 4(Nov): 933–969. Joshi, M.; Choi, E.; Weld, D. S.; and Zettlemoyer, L. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1601–1611. Karpukhin, V.; Oguz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W.-t. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the International Conference on Empirical Methods in Natural Language Processing, 6769–6781. ACL. Kwiatkowski, T.; Palomaki, J.; Redfield, O.; Collins, M.; Parikh, A.; Alberti, C.; Epstein, D.; Polosukhin, I.; Devlin, J.; Lee, K.; et al. 2019. Natural Questions: A | 2306.15222#41 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 41 | Interpolation. The most related technique to ours is proposed by Dosovitskiy et al. (2021) in their work on Vision Transformers, where the authors proposed to linearly interpolate learnt position embeddings to support higher resolution, which translates to an increased number of input embeddings, in the fine-tuning stage. The interpolated position embedding weights are used as initialization in the fine-tuning process for the newly added positions. Our work differs from their work in several ways: (1) Instead of interpolating position embeddings, our method interpolates position indices, which is more suitable for RoPE-like position encodings and may require less training since no trainable parameters are added. (2) We report successful results of extending the context window to 32 times while Dosovitskiy et al. (2021) explored up to 4 times. Our results extend theirs in exploring the upper limit of context window extension via interpolation. (3) We evaluated and confirmed the effectiveness of Position Interpolation for extending context windows for language models. | 2306.15595#41 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
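The Position Interpolation record above contrasts interpolating position indices with interpolating learned position embeddings. A minimal sketch of the index-interpolation idea for a RoPE-style encoding follows; the simplified rope_angles helper and the chosen dimensions are assumptions for illustration, not the LLaMA implementation.

```python
import torch


def rope_angles(positions, dim, base=10000.0):
    # Rotary-embedding angles for (possibly fractional) position values.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions, inv_freq)          # (seq_len, dim // 2)


def interpolated_positions(seq_len, original_ctx):
    # Position Interpolation: linearly down-scale position indices so a
    # longer sequence is mapped back inside the original context window.
    scale = min(1.0, original_ctx / seq_len)
    return torch.arange(seq_len).float() * scale


# Extending a model trained with a 2048-token window to 8192 tokens:
pos = interpolated_positions(seq_len=8192, original_ctx=2048)
angles = rope_angles(pos, dim=128)
print(pos[:4], pos[-1])     # 0.00, 0.25, 0.50, 0.75 ... 2047.75
print(angles.shape)         # torch.Size([8192, 64])
```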
2306.15626 | 41 | # 7 Conclusion
We have introduced LeanDojo: an open-source playground for learning-based theorem proving in Lean, consisting of toolkits, models, and benchmarks. It extracts data from Lean and enables the model to interact with Lean programmatically. We have developed ReProver, the first retrieval-augmented LLM for theorem proving. Limitations and future work are discussed in Appendix F.
We have released our code, data, models, and documentation to facilitate future research:
⢠LeanDojoâs codebase for data extraction and interaction with Lean: https://github.
# com/lean-dojo/LeanDojo
LeanDojoâs documentation: https://leandojo.readthedocs.io ⢠Datasets: (1) LeanDojo Benchmark: https://doi.org/10.5281/zenodo.8016385 with DOI 10.5281/zenodo.8016385. (2) LeanDojo Benchmark 4 (Appendix D): https: //doi.org/10.5281/zenodo.8040109 with DOI 10.5281/zenodo.8040109. | 2306.15626#41 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 42 | Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433.
Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. 2023. OpenFlamingo.
Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, and Gal Chechik. 2023. DisCLIP: Open-vocabulary referring expression generation. arXiv preprint arXiv:2305.19108. | 2306.15195#42 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 42 | We believe our results, in conjunction with (Dosovitskiy et al., 2021), provide empirical evidence of the Transformer's remarkable ability to handle significantly longer sequences beyond training. Further, we conjecture that a method similar to theirs is directly applicable in LLMs with learnable position embeddings such as OPT (Zhang et al., 2022), and we plan to investigate this in the future.
# 5 CONCLUSIONS
Position Interpolation can effectively extend LLaMA models' context window to be significantly larger, using minimal fine-tuning. The extended models are fully capable of performing a variety of tasks on the extended context windows, and preserve their original ability relatively well for tasks within the original context window, making them good choices of generic language models for both long and short input prompts. Further, models extended by Position Interpolation can reuse most pre-existing infrastructure and optimization, making this method attractive in many practical applications. We believe that Position Interpolation is a general method that could be applied to other types of position encodings, which can allow extension for more types of LLMs, and we plan to investigate in such directions in the near future.
# ACKNOWLEDGEMENTS
We thank Mike Lewis for his input on evaluation.
# REFERENCES | 2306.15595#42 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 42 | ReProver's code and models: https://github.com/lean-dojo/ReProver • ChatGPT plugin (Appendix E): https://github.com/lean-dojo/LeanDojoChatGPT • LeanDojo Website: https://leandojo.org
# Acknowledgments and Disclosure of Funding
This work is partially supported by Caltech's Center for Autonomous Systems and Technologies. Kaiyu Yang is supported by the Computing, Data, and Society Postdoctoral Fellowship at Caltech. Alex Gu is supported by the National Science Foundation (NSF) Graduate Research Fellowship. Rahul Chalamala and Peiyang Song are supported by the Summer Undergraduate Research Fellowships (SURF) program at Caltech. Anima Anandkumar is partially supported by the Bren endowed chair. We appreciate the valuable feedback from Logan Murphy and members of the Anima AI+Science Lab on an initial version of this paper. We thank Junyan Xu for manually inspecting the proofs generated by our model on ProofNet. We also thank Jeremy Avigad and Mario Carneiro for insightful discussions on supporting Lean 4 in LeanDojo.
# References | 2306.15626#42 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 43 | Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pages 213–229. Springer.
Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, and Bo Xu. 2023. X-LLM: Bootstrapping advanced large language models by treating multi-modalities as foreign languages. arXiv preprint arXiv:2305.04160.
Ting Chen, Saurabh Saxena, Lala Li, David J Fleet, and Geoffrey Hinton. 2021. Pix2seq: A language modeling framework for object detection. arXiv preprint arXiv:2109.10852.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. | 2306.15195#43 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 43 | Question Answering Research. Transactions of the Association for Computational Linguistics, 7: 452–466. Lee, K.; Chang, M.-W.; and Toutanova, K. 2019. Latent Retrieval for Weakly Supervised Open Domain Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 6086–6096. ACL. Li, H. 2011. A short introduction to learning to rank. IEICE TRANSACTIONS on Information and Systems, 94(10): 1854–1862. Li, P.; Wu, Q.; and Burges, C. 2007. Mcrank: Learning to rank using multiple classification and gradient boosting. Advances in neural information processing systems, 20. Li, Y.; Li, W.; and Nie, L. 2022. Dynamic Graph Reasoning for Conversational Open-Domain Question Answering. ACM Transactions on Information Systems, 40(4): 1–24. Li, Y.; Yang, N.; Wang, L.; Wei, F.; and Li, W. 2023a. Generative retrieval for conversational question answering. Information Processing & Management, 60(5): 103475. Li, Y.; Yang, N.; | 2306.15222#43 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 43 | # ACKNOWLEDGEMENTS
We thank Mike Lewis for his input on evaluation.
# REFERENCES
Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, Yury Zemlyanskiy, David Uthus, Mandy Guo, James Lee-Thorp, Yi Tay, Yun-Hsuan Sung, and Sumit Sanghai. CoLT5: Faster long-range transformers with conditional computation, 2023.
Zhangir Azerbayev, Edward Ayers, and Bartosz Piotrowski. Proof-pile, 2022. URL https://github.com/zhangir-azerbayev/proof-pile.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. 2020.
Aydar Bulatov, Yuri Kuratov, and Mikhail S. Burtsev. Recurrent memory transformer. 2022.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. 2019. | 2306.15595#43 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 43 | 10
# References
[1] Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The Lean theorem prover (system description). In International Conference on Automated Deduction (CADE), 2015. 1, 2, 22
[2] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Neural Information Processing Systems (NeurIPS), 2017. 2, 7
[3] Allen Newell and Herbert Simon. The logic theory machineâa complex information processing system. IRE Transactions on information theory, 2(3):61â79, 1956. 1
[4] Kevin Buzzard. The future of mathematics. CRNS-Imperial Lecture, 2019. 1
[5] Xavier Leroy, Sandrine Blazy, Daniel Kästner, Bernhard Schommer, Markus Pister, and Christian Ferdinand. CompCert - a formally verified optimizing compiler. In Embedded Real Time Software and Systems, 2016. 1
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 44 | Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500.
Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. 2023. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378.
Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. 2023. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010.
Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448. | 2306.15195#44 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
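The Shikra abstract above states that all coordinates are read and written as plain natural-language text, with no extra vocabulary or detection head. A minimal sketch of that input/output convention (hypothetical helper names; the exact prompt format used by Shikra may differ) is:

def box_to_text(box, width, height, ndigits=3):
    """Serialize a pixel-space box as normalized [x1, y1, x2, y2] text."""
    x1, y1, x2, y2 = box
    norm = [x1 / width, y1 / height, x2 / width, y2 / height]
    return "[" + ", ".join(f"{v:.{ndigits}f}" for v in norm) + "]"

def text_to_box(text, width, height):
    """Parse the model's textual coordinates back into pixel space."""
    x1, y1, x2, y2 = [float(v) for v in text.strip("[] ").split(",")]
    return (x1 * width, y1 * height, x2 * width, y2 * height)

# Example: referring to a region directly inside the prompt text.
prompt = f"What is the person at {box_to_text((120, 40, 260, 300), 640, 480)} doing?"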
2306.15222 | 44 | ative retrieval for conversational question answering. Information Processing & Management, 60(5): 103475. Li, Y.; Yang, N.; Wang, L.; Wei, F.; and Li, W. 2023b. Multiview Identifiers Enhanced Generative Retrieval. arXiv preprint arXiv:2305.16675. Mao, Y.; He, P.; Liu, X.; Shen, Y.; Gao, J.; Han, J.; and Chen, W. 2021. Generation-Augmented Retrieval for Open-Domain Question Answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 4089–4100. ACL. Nguyen, T.; Rosenberg, M.; Song, X.; Gao, J.; Tiwary, S.; Majumder, R.; and Deng, L. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@ NIPs. Nogueira, R.; and Cho, K. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085. Pradeep, R.; Hui, K.; Gupta, J.; Lelkes, | 2306.15222#44 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
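The LTRGR abstract above adds a learning-to-rank phase on top of a trained generative retriever: passages are scored via their generated identifiers, and a rank loss pushes the score of a relevant passage above that of an irrelevant one. A minimal sketch of such a margin-based rank loss (PyTorch; hypothetical tensor names, and the margin value is an assumption rather than the paper's setting) is:

import torch

def margin_rank_loss(pos_scores, neg_scores, margin=1.0):
    # pos_scores: (B,) ranking scores of passages relevant to each query
    # neg_scores: (B,) ranking scores of sampled irrelevant passages
    # Hinge loss: penalize whenever a negative is not at least `margin`
    # below the positive, optimizing the model toward the final passage ranking.
    return torch.clamp(margin - (pos_scores - neg_scores), min=0).mean()

# In the learning-to-rank phase this term would typically be combined with
# the usual generation (cross-entropy) loss, e.g. loss = gen_loss + rank_loss.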
2306.15595 | 44 | Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. 2019.
Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J. Colwell, and Adrian Weller. Rethinking attention with performers. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, May 2021.
Together Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978–2988, Florence, Italy, 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1285. | 2306.15595#44 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
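The Position Interpolation abstract above describes the core operation: instead of letting position indices run past the pretrained context length, they are linearly down-scaled so that the extended sequence still maps into the original trained range before the rotary embedding is applied. A minimal sketch (illustrative only; parameter names are assumptions, not the authors' code) is:

import torch

def interpolated_position_ids(seq_len, original_ctx_len):
    """Scale position indices so an extended sequence fits the trained range."""
    positions = torch.arange(seq_len, dtype=torch.float32)
    if seq_len <= original_ctx_len:
        return positions
    # Linear down-scaling, e.g. 8192 tokens with a 2048 context -> scale 0.25.
    scale = original_ctx_len / seq_len
    return positions * scale

# These fractional positions are then fed to the usual RoPE computation
# in place of the integer indices.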
2306.15626 | 44 | [6] Talia Ringer, Karl Palmskog, Ilya Sergey, Milos Gligoric, Zachary Tatlock, et al. QED at large: A survey of engineering of formally verified software. Foundations and Trends® in Programming Languages, 2019. 1
[7] Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Jean-Christophe Filliatre, Eduardo Gimenez, Hugo Herbelin, Gerard Huet, Cesar Munoz, Chetan Murthy, et al. The Coq proof assistant reference manual: Version 6.1. PhD thesis, Inria, 1997. 1
[8] Tobias Nipkow, Markus Wenzel, and Lawrence C Paulson. Isabelle/HOL: a proof assistant for higher- order logic. 2002. 1
[9] Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In Interna- tional Conference on Machine Learning (ICML), 2019. 1, 4, 35
[10] William A Howard. The formulae-as-types notion of construction. To HB Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, 1980. 2 | 2306.15626#44 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 45 | Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448.
Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. 2023. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790.
Agrim Gupta, Piotr Dollar, and Ross Girshick. 2019. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5356–5364.
Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2017. Modeling relationships in referential expressions with compositional modular networks. In CVPR, pages 1115–1124. | 2306.15195#45 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 45 | with BERT. arXiv preprint arXiv:1901.04085. Pradeep, R.; Hui, K.; Gupta, J.; Lelkes, A. D.; Zhuang, H.; Lin, J.; Metzler, D.; and Tran, V. Q. 2023. How Does Generative Retrieval Scale to Millions of Passages? arXiv preprint arXiv:2305.11841. Qu, Y.; Ding, Y.; Liu, J.; Liu, K.; Ren, R.; Zhao, W. X.; Dong, D.; Wu, H.; and Wang, H. 2021. RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 5835–5847. Ren, R.; Zhao, W. X.; Liu, J.; Wu, H.; Wen, J.-R.; and Wang, H. 2023. TOME: A Two-stage Approach for Model-based Retrieval. arXiv preprint arXiv:2305.11161. Shrivastava, A.; and Li, P. 2014. Asymmetric LSH | 2306.15222#45 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
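As the abstract above notes, a generative retriever only produces identifier strings, which still have to be converted into the final passage rank list. A minimal sketch of one such conversion (aggregating the scores of generated identifiers over the passages that contain them; hypothetical data structures, not the exact heuristic used by LTRGR or its baselines) is:

from collections import defaultdict

def rank_passages(identifier_scores, identifier_to_passages, top_k=100):
    # identifier_scores: list of (identifier_string, score) pairs from beam search
    # identifier_to_passages: mapping from identifier string to passage ids
    passage_scores = defaultdict(float)
    for ident, score in identifier_scores:
        for pid in identifier_to_passages.get(ident, []):
            # A passage hit by several high-scoring identifiers ranks higher.
            passage_scores[pid] += score
    ranked = sorted(passage_scores.items(), key=lambda x: x[1], reverse=True)
    return [pid for pid, _ in ranked[:top_k]]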
2306.15595 | 45 | Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. | 2306.15595#45 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 45 | [11] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. 2
[12] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023. 2
[13] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. 2
[14] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393, 2020. 2, 4, 8 | 2306.15626#45 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
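The abstract above also says LeanDojo lets a program interact with the Lean proof environment, in the style of a gym environment: the prover repeatedly reads the current goals, proposes a tactic, and observes the resulting state. A minimal sketch of that loop (hypothetical class and method names, not the actual LeanDojo API) is:

def prove(env, theorem, generate_tactics, max_steps=100):
    # env: an interactive proof environment wrapping Lean (hypothetical API)
    state = env.reset(theorem)                 # initial proof state with open goals
    for _ in range(max_steps):
        if state.is_finished():
            return True                        # all goals closed: proof found
        tactic = generate_tactics(state)[0]    # take the model's top suggestion
        state, ok = env.step(tactic)           # run the tactic inside Lean
        if not ok:
            return False                       # tactic error; a real prover would backtrack
    return False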
2306.15195 | 46 | Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. 2023. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045.
Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In CVPR, pages 2901–2910.
Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In EMNLP, pages 787–798.
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. | 2306.15195#46 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
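The REC task mentioned in the abstract above is commonly scored by whether the predicted box overlaps the ground truth with an IoU of at least 0.5. A small reference implementation of that check (standard computation, independent of any particular model) is:

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def rec_correct(pred_box, gt_box, threshold=0.5):
    # A referring-expression prediction counts as correct above the IoU threshold.
    return iou(pred_box, gt_box) >= threshold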
2306.15222 | 46 | Retrieval. arXiv preprint arXiv:2305.11161. Shrivastava, A.; and Li, P. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). Advances in neural information processing systems, 27. Tay, Y.; Tran, V. Q.; Dehghani, M.; Ni, J.; Bahri, D.; Mehta, H.; Qin, Z.; Hui, K.; Zhao, Z.; Gupta, J.; et al. 2022. Transformer memory as a differentiable search index. arXiv preprint arXiv:2202.06991. Wang, L.; Yang, N.; Huang, X.; Jiao, B.; Yang, L.; Jiang, D.; Majumder, R.; and Wei, F. 2022a. Simlm: Pre-training with representation bottleneck for dense passage retrieval. arXiv preprint arXiv:2207.02578. | 2306.15222#46 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Realm: Retrieval-augmented language model pre-training. 2020.
Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. Transformer language models without positional encodings still learn positional information. 2022.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1419–1436, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.112. URL https://aclanthology.org/2021.naacl-main.112.
| 2306.15595#46 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 46 | [15] Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. LISA: Language models of ISAbelle proofs. In Conference on Artificial Intelligence and Theorem Proving (AITP), 2021. 4
[16] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward Ayers, and Stanislas Polu. Proof artifact co-training for theorem proving with language models. In International Conference on Learning Representations (ICLR), 2022. 4, 6, 7, 8, 9, 19, 20, 26, 36
[17] Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. HyperTree proof search for neural theorem proving. In Neural Information Processing Systems (NeurIPS), 2022. 2, 4, 8, 9, 26, 36
[18] Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models and automated theorem provers. In Neural Information Processing Systems (NeurIPS), 2022. 4, 7, 26 | 2306.15626#46 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
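The abstract above highlights that the retriever is trained with accessible premises and hard negative examples. A minimal sketch of the usual contrastive objective for such a setup (InfoNCE over one positive, in-batch negatives, and mined hard negatives; hypothetical tensor names, not the exact loss used by ReProver) is:

import torch
import torch.nn.functional as F

def contrastive_loss(state_embs, pos_embs, hard_neg_embs, temperature=0.05):
    # state_embs: (B, d) proof-state embeddings
    # pos_embs: (B, d) embeddings of one ground-truth premise per state
    # hard_neg_embs: (B, d) embeddings of mined hard negative premises
    candidates = torch.cat([pos_embs, hard_neg_embs], dim=0)   # (2B, d)
    logits = state_embs @ candidates.t() / temperature         # (B, 2B)
    # The positive for state i sits at column i; every other column
    # (other states' positives and all hard negatives) acts as a negative.
    labels = torch.arange(state_embs.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)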
2306.15195 | 47 | Jing Yu Koh, Ruslan Salakhutdinov, and Daniel Fried. 2023. Grounding language models to images for multimodal generation. arXiv preprint arXiv:2301.13823.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 123:32–73.
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. 2023a. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. | 2306.15195#47 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
Wang, Y.; Hou, Y.; Wang, H.; Miao, Z.; Wu, S.; Chen, Q.; Xia, Y.; Chi, C.; Zhao, G.; Liu, Z.; et al. 2022b. A neural corpus indexer for document retrieval. Advances in Neural Information Processing Systems, 35: 25600–25614. Xia, F.; Liu, T.-Y.; Wang, J.; Zhang, W.; and Li, H. 2008. Listwise approach to learning to rank: theory and algorithm. In Proceedings of the 25th international conference on Machine learning, 1192–1199. Xiong, L.; Xiong, C.; Li, Y.; Tang, K.-F.; Liu, J.; Bennett, P. N.; Ahmed, J.; and Overwijk, A. 2020. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In International Conference on Learning Representations. | 2306.15222#47 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 47 |
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Atlas: Few-shot learning with retrieval augmented language models. 2022.
Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. 2022.
kaiokendev. Things i'm learning while training superhot. https://kaiokendev.github.io/til#extending-context-to-8k, 2023.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6769–6781. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.550. | 2306.15595#47 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 47 | [19] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever. Formal mathematics statement curriculum learning. In International Conference on Learning Representa- tions (ICLR), 2023. 2, 3, 4, 7, 8, 9, 18, 19, 20, 26, 36
[20] Emily First, Markus N Rabe, Talia Ringer, and Yuriy Brun. Baldur: Whole-proof generation and repair with large language models. arXiv preprint arXiv:2303.04910, 2023. 8, 35
[21] Haiming Wang, Ye Yuan, Zhengying Liu, Jianhao Shen, Yichun Yin, Jing Xiong, Enze Xie, Han Shi, Yujun Li, Lin Li, et al. DT-Solver: Automated theorem proving with dynamic-tree sampling guided by proof-level value function. In Annual Meeting of the Association for Computational Linguistics (ACL), 2023. 2, 4
[22] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016. 2, 7 | 2306.15626#47 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 48 | Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023c. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355.
Zheng Lin, Zhao Zhang, Lin-Zhuo Chen, Ming-Ming Cheng, and Shao-Ping Lu. 2020. Interactive image segmentation with first click attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13339–13348.
Zheng Lin, Zhao Zhang, Ling-Hao Han, and Shao-Ping Lu. 2022. Multi-mode interactive image segmentation. In Proceedings of the 30th ACM International Conference on Multimedia, pages 905–914.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023a. Visual instruction tuning. arXiv preprint arXiv:2304.08485.
Jingyu Liu, Liang Wang, and Ming-Hsuan Yang. 2017. Referring expression generation and comprehension via attributes. In Proceedings of the IEEE International Conference on Computer Vision, pages 4856–4864. | 2306.15195#48 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
Omar Khattab, Christopher Potts, and Matei Zaharia. Relevance-guided supervision for openqa with colbert. Transactions of the Association for Computational Linguistics, 9:929–944, 2021. doi: 10.1162/tacl_a_00405.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In 8th International Conference on Learning Representations, ICLR 2020. OpenReview.net, April 2020.
SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66–71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012.
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013. | 2306.15595#48 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 48 | [23] Josef Urban. MPTP – motivation, implementation, first experiments. Journal of Automated Reasoning, 33:319–339, 2004. 3, 4
[24] Geoffrey Irving, Christian Szegedy, Alexander A Alemi, Niklas Eén, François Chollet, and Josef Urban. DeepMath – deep sequence models for premise selection. In Neural Information Processing Systems (NeurIPS), 2016. 3, 4
[25] The mathlib Community. The Lean mathematical library. In Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs, CPP 2020, pages 367–381, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450370974. doi: 10.1145/3372885.3373824. URL https://doi.org/10.1145/3372885.3373824. 3
[26] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020. 3, 7, 33 | 2306.15626#48 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 49 | Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. 2023b. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499.
SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022. Unified-io: A unified model for vision, language, and multi-modal tasks. arXiv preprint arXiv:2206.08916. | 2306.15195#49 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
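The Shikra rows in this dump describe a model whose spatial inputs and outputs are written as ordinary natural-language text rather than special tokens. As a minimal illustrative sketch (not the authors' code; the prompt wording, the normalized [x1,y1,x2,y2] format, and the helper names are assumptions), such coordinates can be serialized into and parsed back out of plain strings:

```python
import re

def box_to_text(box, width, height, ndigits=3):
    """Serialize a pixel-space box as a normalized [x1,y1,x2,y2] substring."""
    x1, y1, x2, y2 = box
    norm = (x1 / width, y1 / height, x2 / width, y2 / height)
    return "[" + ",".join(f"{v:.{ndigits}f}" for v in norm) + "]"

def parse_boxes(text):
    """Recover every [x1,y1,x2,y2] substring emitted in a model reply."""
    pattern = r"\[(\d*\.\d+),(\d*\.\d+),(\d*\.\d+),(\d*\.\d+)\]"
    return [tuple(float(v) for v in m) for m in re.findall(pattern, text)]

if __name__ == "__main__":
    # Hypothetical referential-dialogue turn: the user points at a region.
    question = ("What is the person "
                + box_to_text((120, 60, 310, 400), width=640, height=480)
                + " doing?")
    print(question)
    # A hypothetical reply that grounds its answer with further boxes.
    reply = "The person [0.188,0.125,0.484,0.833] is flying a kite [0.512,0.080,0.700,0.300]."
    print(parse_boxes(reply))
```

Keeping coordinates as plain text is what allows one language model to handle REC-style grounding without extra vocabularies or detection modules, which is the design point the summary emphasizes.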
2306.15595 | 49 | Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
Pedro Henrique Martins, Zita Marinho, and André F. T. Martins. ∞-former: Infinite memory transformer. 2021.
Amirkeivan Mohtashami and Martin Jaggi. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300, 2023.
Jesse Mu, Xiang Lisa Li, and Noah Goodman. Learning to compress prompts with gist tokens. 2023. | 2306.15595#49 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
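The summary repeated in the row above explains the core mechanism of Position Interpolation: position indices are linearly down-scaled so that an extended sequence still falls inside the originally trained context window, instead of extrapolating RoPE to positions the model never saw. A minimal sketch of that idea (the simplified rotary-angle helper, the function names, and the 2048-to-8192 example are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotary-embedding angles for (possibly fractional) position indices."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)  # shape: (len(positions), dim // 2)

def interpolated_positions(seq_len, trained_ctx):
    """Linearly down-scale indices 0..seq_len-1 into the trained window."""
    scale = min(1.0, trained_ctx / seq_len)
    return np.arange(seq_len) * scale

if __name__ == "__main__":
    trained_ctx, extended_len, head_dim = 2048, 8192, 128
    pos = interpolated_positions(extended_len, trained_ctx)  # 0, 0.25, 0.5, ...
    print(pos[:4], pos[-1])                  # largest index stays below trained_ctx
    print(rope_angles(pos, head_dim).shape)  # (8192, 64)
```

Because every scaled index stays within the range seen during pre-training, the attention scores remain in a familiar regime, which is the stability argument the abstract makes against direct extrapolation.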
2306.15626 | 49 | [27] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 3, 24, 32
[28] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. MiniF2F: a cross-system benchmark for formal olympiad-level mathematics. In International Conference on Learning Representations (ICLR), 2022. 3, 4, 7, 8, 9, 26, 27
[29] Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W Ayers, Dragomir Radev, and Jeremy Avigad. ProofNet: Autoformalizing and formally proving undergraduate-level mathematics. arXiv preprint arXiv:2302.12433, 2023. 3, 4, 9, 26, 28, 29
[30] Alan JA Robinson and Andrei Voronkov. Handbook of automated reasoning, volume 1. 2001. 4
[31] Laura Kovács and Andrei Voronkov. First-order theorem proving and vampire. In International Conference on Computer Aided Verification (CAV), 2013. 4
[32] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search. arXiv preprint arXiv:1701.06972, 2017. 4 | 2306.15626#49 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
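The LeanDojo row above describes ReProver's retrieval step: embed the current proof state together with the premises accessible at that point, rank the premises by similarity, and hand the top candidates to the tactic generator. A toy dense-retrieval sketch of that retrieve-then-rank pattern (the trigram-hash embedding, the premise strings, and the function names are stand-ins, not LeanDojo's actual retriever):

```python
import numpy as np

def embed(text, dim=256):
    """Toy stand-in for a learned encoder: hash character trigrams into a vector."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(state, accessible_premises, k=2):
    """Rank the premises accessible at this proof state by cosine similarity."""
    q = embed(state)
    scored = [(float(q @ embed(p)), p) for p in accessible_premises]
    return [p for _, p in sorted(scored, reverse=True)[:k]]

if __name__ == "__main__":
    state = "n : nat |- gcd n n = n"
    premises = [  # hypothetical premise statements from a math library
        "theorem gcd_self (n : nat) : gcd n n = n",
        "theorem add_comm (a b : nat) : a + b = b + a",
        "theorem mod_self (n : nat) : n % n = 0",
    ]
    print(retrieve(state, premises))  # gcd_self is expected to rank highly
```

The abstract notes that the real retriever is trained and uses LeanDojo's program analysis both to restrict candidates to accessible premises and to mine hard negatives; the sketch only mirrors the overall shape of premise selection.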
2306.15195 | 50 | Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In CVPR, pages 10437–10446.
Arjun Mani, Nobline Yoo, Will Hinthorn, and Olga Russakovsky. 2020. Incorporating pointing into visual question answering. arXiv preprint arXiv:2011.13681.
Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In CVPR, pages 11–20.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195–3204.
Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. 2023. Embodiedgpt: Vision-language pre-training via embodied chain of thought. arXiv preprint arXiv:2305.15021. | 2306.15195#50 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15595 | 50 | Jesse Mu, Xiang Lisa Li, and Noah Goodman. Learning to compress prompts with gist tokens. 2023.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Curran Associates Inc., Red Hook, NY, USA, 2019.
Ofir Press, Noah Smith, and Mike Lewis. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=R8sQPpGCv0.
Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SylKikSYDH.
| 2306.15595#50 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 50 | [33] James P Bridge, Sean B Holden, and Lawrence C Paulson. Machine learning for first-order theorem proving: learning to select a good heuristic. Journal of Automated Reasoning, 53:141–172, 2014. 4
[34] Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. TacticToe: learning to prove with tactics. Journal of Automated Reasoning, 65:257–286, 2021. 4
[35] Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, and Christian Szegedy. Graph representations for higher-order logic and theorem proving. In AAAI Conference on Artificial Intelligence, 2020. 4
[36] Kshitij Bansal, Christian Szegedy, Markus N Rabe, Sarah M Loos, and Viktor Toman. Learning to reason in large theories without imitation. arXiv preprint arXiv:1905.10501, 2019. 4
[37] Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. TacticZero: Learning to prove theorems from scratch with deep reinforcement learning. In Neural Information Processing Systems (NeurIPS), 2021. 4 | 2306.15626#50 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 51 | OpenAI. 2023. Gpt-4 technical report.
Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649.
Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. Pandagpt: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355.
Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33:7537–7547.
Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. 2019. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9627–9636. | 2306.15195#51 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |