# 5 Related Work
# 5.1 Large Language Models
Pre-trained language models based on the Transformer architecture (Vaswani et al., 2017) have demonstrated impressive capabilities when fine-tuned for various downstream tasks since the advent of BERT (Devlin et al., 2019). Depending on their architecture, pre-trained Transformers can be classified into three categories: encoder-only models (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020), encoder–decoder models (Raffel et al., 2020; Lewis et al., 2020a), and decoder-only models (Radford et al., 2018). Decoder-only models like GPT/GPT-2 have been lauded for their simplicity in terms of model architecture and pre-training procedures (Radford et al., 2018, 2019).
Recent research has shown that scaling up LLMs by pre-training larger decoder-only models using larger and higher quality corpora can significantly enhance model capabilities for general-purpose NLP tasks such as question answering and code generation (Wei et al., 2022; Chen et al., 2021). This is achieved by fine-tuning the pre-trained LLMs with instruction-following data using reinforcement learning with human feedback. InstructGPT (Ouyang et al., 2022) and GPT-4 (OpenAI, 2023) are two popular representatives in this class of models. Among the many implementations of open-source large language models, LLaMA (Touvron et al., 2023a,b) is among the most recent and among the top-performing on a variety of tasks.
# 5.2 Multi-Stage Text Retrieval
While multi-stage retrieval pipelines date back well over a decade (Matveeva et al., 2006; Cambazoglu et al., 2010; Wang et al., 2011), they have benefited immensely from pre-trained language models such as BERT in recent years, starting with the monoBERT reranking model (Nogueira and Cho, 2019). Nogueira et al. (2019) proposed a multi-stage retrieval pipeline that employs a BM25 retriever followed by two BERT-based reranking stages. This design demonstrates the effectiveness of pre-trained language models in reranking tasks. RankLLaMA follows the same basic design as monoBERT. The dense passage retriever (DPR) further proposed to fine-tune BERT to replace the BM25 retriever with a dense retrieval model in a bi-encoder design (Karpukhin et al., 2020). DPR encodes text into low-dimensional dense vector representations and treats retrieval as a nearest-neighbor search task. RepLLaMA follows the same bi-encoder design.
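To make the bi-encoder design concrete, the minimal sketch below (our illustration, not the actual RepLLaMA implementation; "bert-base-uncased" is a stand-in backbone) encodes queries and passages independently and scores them by dot product, so retrieval reduces to nearest-neighbor search over precomputed passage vectors.

```python
# Minimal bi-encoder sketch: queries and passages are encoded independently,
# and relevance is the dot product of their vectors, so retrieval becomes a
# nearest-neighbor search over passage vectors indexed offline.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder backbone
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts: list[str]) -> torch.Tensor:
    """Mean-pool token embeddings into one dense vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)              # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # (B, H)

passages = [
    "LLaMA is a family of open-source large language models.",
    "BM25 is a classical lexical retrieval function.",
]
p_vecs = encode(passages)               # indexed offline in practice
q_vec = encode(["what is LLaMA?"])      # encoded at query time
scores = q_vec @ p_vecs.T               # dot-product relevance
print(scores.argsort(descending=True))  # ranked passage indices
```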
Several solutions have been introduced to enhance the effectiveness of retrievers and rerankers in a multi-stage pipeline. On the retriever side, works such as ANCE (Xiong et al., 2021), RocketQA (Qu et al., 2021), CoCondenser (Gao and Callan, 2022b), RetroMAE (Xiao et al., 2022), and SimLM (Wang et al., 2023) have shown that augmenting the training data with hard negative mining or continuous retrieval-oriented pre-training can improve the effectiveness of dense retrievers. On the reranker side, monoT5 (Nogueira et al., 2020) and monoELECTRA (Pradeep et al., 2022) demonstrated that initializing the reranker with a custom pre-trained model can enhance effectiveness. Gao et al. (2021) proposed using a contrastive loss for reranker training to replace the default pairwise loss. Zhuang et al. (2023) studied the use of T5 as a reranker, analyzing the influence of different model architectures and loss functions. However, directly fine-tuning modern billion-parameter-scale large language models for multi-stage retrieval has not been explored to date.
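As an illustration of the contrastive training with hard negatives mentioned above, the following is a generic InfoNCE-style sketch (not the exact recipe of any cited system): each query's positive passage competes against its mined negatives, and the loss is cross-entropy over the similarity scores.

```python
# Generic contrastive loss for dense-retriever training with hard negatives.
# Shapes are illustrative; embeddings would come from the query/passage encoders.
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, pos: torch.Tensor, negs: torch.Tensor) -> torch.Tensor:
    """q: (B, H) queries; pos: (B, H) positives; negs: (B, N, H) hard negatives."""
    pos_scores = (q * pos).sum(dim=-1, keepdim=True)      # (B, 1)
    neg_scores = torch.einsum("bh,bnh->bn", q, negs)       # (B, N)
    logits = torch.cat([pos_scores, neg_scores], dim=1)    # (B, 1 + N)
    labels = torch.zeros(q.size(0), dtype=torch.long)      # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy check with random embeddings.
B, N, H = 4, 7, 16
loss = contrastive_loss(torch.randn(B, H), torch.randn(B, H), torch.randn(B, N, H))
print(loss.item())
```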
Recently, LLMs have demonstrated impressive effectiveness when prompted to perform few-shot or zero-shot text generation. As mentioned in the introduction, researchers have cast reranking as text generation. These models can be leveraged to directly generate a reordered list of candidates, e.g., LRL (Ma et al., 2023), RankGPT (Sun et al., 2023), RankVicuna (Pradeep et al., 2023). Alternatively, they can compare passages in a pairwise manner, e.g., PRP (Qin et al., 2023). Although prompt-based methods have shown good zero-shot effectiveness, they require multiple decoding passes, thus making them slow and non-parallelizable. Furthermore, reranking with prompts makes it difficult to exploit available human judgments such as MS MARCO (Bajaj et al., 2016) to further improve effectiveness. Finally, these approaches do not allow for joint reranker–retriever optimization. In contrast, we address all these issues.
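For contrast with prompt-based listwise reranking, a pointwise reranker scores each query-passage pair independently, so all candidates can be scored in one parallel batch and then sorted. The sketch below uses a small public cross-encoder checkpoint as a stand-in, not the RankLLaMA weights themselves.

```python
# Pointwise reranking sketch: one relevance score per (query, passage) pair,
# computed independently for each candidate, then sorted.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "cross-encoder/ms-marco-MiniLM-L-6-v2"   # placeholder reranker checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
reranker = AutoModelForSequenceClassification.from_pretrained(name)

def rerank(query: str, passages: list[str]) -> list[int]:
    """Return passage indices sorted by predicted relevance."""
    batch = tokenizer([query] * len(passages), passages,
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = reranker(**batch).logits.squeeze(-1)   # one score per pair
    return scores.argsort(descending=True).tolist()

print(rerank("what is LLaMA?", ["LLaMA is an open LLM.", "BM25 is lexical."]))
```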
Our work is most similar to GTR-XXL (Ni et al., 2022) and SGPT (Muennighoff, 2022), which also used billion-parameter-scale models as backbones of dense retrievers, achieving better zero-shot effectiveness than smaller models. However, LLaMA has demonstrated even better effectiveness on natural language generation tasks, suggesting that it might serve as a better backbone and warranting further exploration. The cpt-text model (Neelakantan et al., 2022), initialized with the 175-billion-parameter GPT-3 model, also shows strong zero-shot effectiveness. However, cpt-text is not an open-source model. Additionally, none of the models referenced above are fully optimized for a multi-stage retrieval pipeline. Our RepLLaMA and RankLLaMA models are fully open-source and optimized for multi-stage retrieval, achieving state-of-the-art effectiveness on both retrieval and reranking, for both in-domain and out-of-domain evaluations.
# 6 Conclusion
The successful application of large language models in generative tasks has sparked interest in their potential to enhance retrieval. In this study, we demonstrate that it is possible to fine-tune a large model to act as a dense retriever (RepLLaMA) and a pointwise reranker (RankLLaMA), thereby establishing an effective, state-of-the-art multi-stage retrieval system that outperforms smaller models built on the same basic design. Moreover, our approach offers greater optimization and efficient inference potential than recent methods that prompt large language models for text reranking in a generative manner. This work underscores the potential of leveraging LLMs for retrieval tasks in the future, which we continue to explore.
# Acknowledgments
This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
# References
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268.
B. Barla Cambazoglu, Hugo Zaragoza, Olivier Chapelle, Jiang Chen, Ciya Liao, Zhaohui Zheng, and Jon Degenhardt. 2010. Early exit optimizations for additive machine learned ranking systems. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM '10, page 411–420, New York, NY, USA. Association for Computing Machinery.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. arXiv:2107.03374.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820.
Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for IR with contextual neural language modeling. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '19, page 985–988, New York, NY, USA. Association for Computing Machinery.
Tri Dao. 2023. FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv:2307.08691.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Luyu Gao and Jamie Callan. 2022a. Long document re-ranking with modular re-ranker. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 2371–2376, New York, NY, USA. Association for Computing Machinery.
Luyu Gao and Jamie Callan. 2022b. Unsupervised corpus aware language model pre-training for dense passage retrieval. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2843–2853, Dublin, Ireland. Association for Computational Linguistics.
Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. Rethink training of BERT rerankers in multi-stage retrieval pipeline. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II, page 280–286, Berlin, Heidelberg. Springer-Verlag.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv:2305.06983.
Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluating embedding APIs for information retrieval. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track), pages 518–526, Toronto, Canada. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc.
Jimmy Lin. 2021. A proposed conceptual framework for a representational approach to information retrieval. arXiv:2110.01529.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, page 2356–2362, New York, NY, USA. Association for Computing Machinery.
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, and Jasper Xian. 2023. Vector search with OpenAI embeddings: Lucene is all you need. arXiv:2308.14963.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.
Xueguang Ma, Xinyu Crystina Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv:2305.02156.
Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824–836.
Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, page 437–444, New York, NY, USA. Association for Computing Machinery.
Niklas Muennighoff. 2022. SGPT: GPT sentence embeddings for semantic search. arXiv:2202.08904.
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and code embeddings by contrastive pre-training. arXiv:2201.10005.
Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844–9855, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
2310.08319 | 50 | Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085.
Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 708–718, Online. Association for Computational Linguistics.
Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv:1910.14424.
OpenAI. 2023. GPT-4 technical report. arXiv:2303.08774.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. arXiv:2203.02155.
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online. Association for Computational Linguistics.
Ronak Pradeep, Yuqi Liu, Xinyu Zhang, Yilin Li, Andrew Yates, and Jimmy Lin. 2022. Squeezing water from a stone: A bag of tricks for further improving cross-encoder effectiveness for reranking. In Advances in Information Retrieval, pages 655–670, Cham. Springer International Publishing.
Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv:2101.05667.
Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. RankVicuna: Zero-shot listwise document reranking with open-source large language models. arXiv:2309.15088.
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv:2306.17563.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847, Online. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. arXiv:2301.12652.
Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv:2304.09542.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. LLaMA: Open and efficient foundation language models. arXiv:2302.13971.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2023. SimLM: Pre-training with representation bottleneck for dense passage retrieval. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2244–2258, Toronto, Canada. Association for Computational Linguistics.
Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11, pages 105–114, New York, NY, USA. Association for Computing Machinery.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv:2201.11903.
Shitao Xiao, Zheng Liu, Yingxia Shao, and Zhao Cao. 2022. RetroMAE: Pre-training retrieval-oriented language models via masked auto-encoder. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 538–548, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, and Furu Wei. 2023. Inference with reference: Lossless acceleration of large language models. arXiv:2304.04487.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Z. Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. arXiv:2303.18223.
Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2023. RankT5: Fine-tuning T5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '23, pages 2308–2313, New York, NY, USA. Association for Computing Machinery.
# Abstract
Large language models (LLMs) exhibit positional bias in how they use context, which especially complicates listwise ranking. To address this, we propose permutation self-consistency, a form of self-consistency over ranking list outputs of black-box LLMs. Our key idea is to marginalize out different list orders in the prompt to produce an order-independent ranking with less positional bias. First, given some input prompt, we repeatedly shuffle the list in the prompt and pass it through the LLM while holding the instructions the same. Next, we aggregate the resulting sample of rankings by computing the central ranking closest in distance to all of them, marginalizing out prompt order biases in the process. Theoretically, we prove the robustness of our method, showing convergence to the true ranking in the presence of random perturbations. Empirically, on five list-ranking datasets in sorting and passage reranking, our approach improves scores from conventional inference by up to 7–18% for GPT-3.5 and 8–16% for LLaMA v2 (70B), surpassing the previous state of the art in passage reranking. Our code is at https://github.com/castorini/perm-sc.
# 1 Introduction
Figure 1: The conventional decoding process for listwise ranking with input prompt (a), language model (c), and output ranking (d). The grey item (b) is "lost in the middle" by the LLM, resulting in its misranking (e).
Figure 2: Our permutation self-consistency process. With the instruction fixed, we shuffle the input list for prompts (a), producing outputs with different mistakes. We then aggregate (b) these output rankings into one (c).
interfere with the model. Liu et al. (2023) demonstrate that LLMs tend to get "lost in the middle" of a long context and use the middle portion poorly, which suggests that the middle passage (2) in the example may get misranked (e.g., 3, 1, 2). Wang et al. (2023a) find prompt order to affect quality, with some orders outperforming others; if items 1 and 3 were swapped in the prompt, the LLM would perhaps generate the mistaken ranking (2, 1, 3).
Large language models (LLMs) respond cogently to free-form textual prompts and represent the state of the art across many tasks (Zhao et al., 2023). Their quality, however, varies with nuisance positional factors such as prompt order and input length. As a descriptive example, consider this prompt:
Arrange the following passages in decreasing relevance to the query, "what are shrews?" (1) Cats hunt small mammals, such as shrews ... (2) Shrews are mole-like mammals, widely ... (3) Shrews use their noses to find prey and ... The correct output order is (2, 3, 1), from most relevant to least, but several positional biases may
In this paper, we mitigate positional biases for listwise-ranking LLMs. We propose permutation self-consistency, a novel decoding strategy for improving the quality, consistency, and prompt-order invariance of black-box LLMs. First, we construct prompts with randomly permuted input lists, from which the LLM generates a set of output rankings. Then, we aggregate these outputs into the central ranking that minimizes the Kendall tau distance to all of them, marginalizing out prompt order as a factor; see Figures 1 and 2. As related work, Stoehr et al. (2023) train order-aware probes on the latent representations of language models to increase consistency, but they assume white-box model access, whereas we do not.
∗Equal contribution.
Next, we assess the effectiveness of permutation self-consistency, both theoretically and empirically. Theoretically, we prove that it recovers the true ranking under arbitrary noise distributions, with enough observations and at least one correctly ordered pair in each observation. Experimentally, we apply our method to tasks in math and word sorting, sentence ordering, and passage reranking, consistently increasing the scores of GPT-3.5, GPT-4, and LLaMA v2 (70B; Touvron et al., 2023) by up to 4–17%, 9–24%, and 8–16%, respectively. On TREC-DL19 and TREC-DL20 (Craswell et al., 2020, 2021), two passage ranking datasets, we establish the new state of the art. From this evidence on multiple tasks, we conclude that permutation self-consistency improves listwise ranking in LLMs, which is partially influenced by positional bias, as shown in Section 3.2.
Finally, we conduct auxiliary analyses to justify our design choices. In Section 4.1, our hyperparameter study finds that quality quickly rises with the number of aggregated output rankings: the score improvement from using five aggregated rankings reaches 67% of twenty, on average, suggesting that a few suffice for quality gain. We further demonstrate that sampling temperature is ineffective for us, unlike the original self-consistency work (Wang et al., 2023b) in chain-of-thought reasoning, likely because listwise ranking does not require exploration of various reasoning paths.
Our contributions are as follows: (1) we propose a novel decoding technique for improving the quality, consistency, and position invariance of black-box, listwise-ranking LLMs; (2) we empirically establish the new state of the art in passage reranking and theoretically prove the robustness of our method to certain classes of ranking noise, including "lost-in-the-middle" type ones; and (3) we provide new analyses on positional biases in listwise-ranking LLMs, finding that these biases depend on pairwise positions of items in the list.
# 2 Our Approach
# 2.1 Preliminaries
Notation. We define an $n$-ranking as a permutation $\sigma : \{1, \dots, n\} \to \{1, \dots, n\}$. For some sequence $X := \{X_i\}_{i=1}^{n}$, define $X[\sigma]$ as the permuted sequence of $X$ transformed by $\sigma$, where $X[\sigma]_i := X_{\sigma(i)}$. Let the inversion vector of $\sigma$ be

$$\mathrm{inv}(\sigma)_i := \#\{\, j : \sigma(j) > \sigma(i),\ j < i \,\}. \tag{1}$$

To quantify dissimilarity, the Kendall tau distance between two rankings $\sigma_1$ and $\sigma_2$ is the number of inversions in $\sigma_1^{-1} \circ \sigma_2$:

$$d_\kappa(\sigma_1, \sigma_2) := \sum_{i=1}^{n} \mathrm{inv}(\sigma_1^{-1} \circ \sigma_2)_i. \tag{2}$$

In other words, it is the number of pairwise disagreements, or discordant pairs, in the permutation ordering. The distance is one affine transform away from the Kendall tau correlation, used to measure list order similarity (Kendall, 1948):

$$\tau(\sigma_1, \sigma_2) = 1 - \frac{2\, d_\kappa(\sigma_1, \sigma_2)}{\binom{n}{2}}. \tag{3}$$

$^1$ In the extreme, $\tau = 1 \iff \sigma_1 = \sigma_2$, and $\tau = -1$ implies that one is the other's reverse.
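To make these definitions concrete, here is a small Python sketch of the Kendall tau distance and correlation (our illustration, not the authors' code; rankings are represented as rank vectors):

```python
from itertools import combinations

def kendall_tau_distance(sigma1, sigma2):
    """Number of discordant pairs between two n-rankings (Equation 2).

    Each ranking is a rank vector: sigma[i] is the rank assigned to item i.
    """
    n = len(sigma1)
    return sum(
        (sigma1[i] - sigma1[j]) * (sigma2[i] - sigma2[j]) < 0
        for i, j in combinations(range(n), 2)
    )

def kendall_tau_correlation(sigma1, sigma2):
    """Affine transform of the distance onto [-1, 1] (Equation 3)."""
    n = len(sigma1)
    n_pairs = n * (n - 1) // 2
    return 1 - 2 * kendall_tau_distance(sigma1, sigma2) / n_pairs

# Identical rankings give tau = 1; exact reversal gives tau = -1.
assert kendall_tau_correlation([1, 2, 3], [1, 2, 3]) == 1.0
assert kendall_tau_correlation([1, 2, 3], [3, 2, 1]) == -1.0
```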
# 2.2 Permutation Self-Consistency
How do we mitigate positional biases in listwise-ranking LLMs? We find inspiration in the self-consistency framework (Wang et al., 2023b), which improves quality and consistency in chain-of-thought prompting (Wei et al., 2022). The approach has two main stages: first, it samples multiple answers for an input prompt; then, it aggregates the sampled answers into a single, high-quality one, hence "marginalizing out" separate reasoning paths from the language model.
Unfortunately, self-consistency does not readily generalize to listwise ranking for a few reasons. For one, it is limited to point predictions, greatly simplifying the aggregation procedure to taking the majority vote. For another, sampling temperature, the method's mainstay of generating diverse samples for aggregation, has little effect on (and at times harms) the quality of aggregated predictions in listwise ranking, as shown in Section 4.1. Lastly, self-consistency does not explicitly address positional bias, the central issue of our paper.
Nevertheless, its shuffle–aggregate paradigm is still a useful template. With it, we propose permutation self-consistency: for the first sample step, we randomly shuffle the list in the prompt to curate a diverse set of rankings, each with different position biases. For the next aggregate step, we compute the central ranking closest in Kendall tau distance to all the sampled rankings, which, like self-consistency, marginalizes out the independent variable (in the original, reasoning paths; in ours, prompt order). Intuitively, we intervene on list order, collect output rankings, then aggregate, breaking the association between individual list order and output rankings.
| Task | Example Input Prompt |
|---|---|
| Math Sorting | Sort these expressions: 3 / 2, 1 - 5, ... |
| Sentence Ordering | Order the shuffled sentences: [1] The... |
| Passage Ranking | Order these by relevance to the query, "what are shrews?": [1] Cats hunt... |
Table 1: Listwise-ranking input prompt examples.
Formally, we are given an input sequence of items $X := \{X_i\}_{i=1}^{n}$, such as a list of passages, along with a listwise-ranking LLM $h(X; s)$ that returns an $n$-ranking on some string prompt $s$; see Table 1 for an example. First, we construct a diverse set of output rankings by randomly permuting $X$ and passing it through the LLM, like how self-consistency uses temperature to vary its output. Specifically, we sample a sequence

$$\hat{\sigma}_i := h(X[\pi_i];\, s) \quad \text{for } 1 \le i \le m, \tag{4}$$

where $\pi_i$ is drawn uniformly at random from the set of all possible $n$-rankings. As noted previously, each output ranking has positional bias, but mistakes are expected to differ among the outputs because of our input order randomization. We then "marginalize out" these individual biases by aggregating the output rankings into a single central ranking. One method with attractive theoretical properties is the Kemeny–Young (Kemeny, 1959) optimal ranking of the outputs, that is, the central ranking that minimizes the sum of its Kendall tau distances to every output ranking:

$$\bar{\sigma} := \operatorname*{argmin}_{\sigma} \sum_{1 \le i \le m} d_\kappa(\hat{\sigma}_i, \sigma). \tag{5}$$
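Below is a minimal sketch of the sample-and-aggregate procedure in Equations (4) and (5), assuming a black-box `llm_rank` callable (hypothetical) that maps a presented item list to its predicted ranks; the exhaustive Kemeny–Young step shown here is only practical for small n, and a faster exact or approximate solver would be substituted in practice:

```python
import random
from itertools import combinations, permutations

def kendall_tau_distance(sigma1, sigma2):
    # Number of discordant pairs (Equation 2), on rank vectors.
    return sum(
        (sigma1[i] - sigma1[j]) * (sigma2[i] - sigma2[j]) < 0
        for i, j in combinations(range(len(sigma1)), 2)
    )

def permutation_self_consistency(items, llm_rank, m=20, seed=0):
    """Shuffle the input list m times, collect the output rankings, and
    return the Kemeny-optimal central ranking (Equation 5) as a rank
    vector over the original item order."""
    rng = random.Random(seed)
    n = len(items)
    outputs = []
    for _ in range(m):
        perm = list(range(n))
        rng.shuffle(perm)                        # pi_i: a random input order
        presented = [items[p] for p in perm]
        ranks_of_presented = llm_rank(presented)  # rank of each presented item
        # Map the ranks back onto the original item indices.
        ranks = [0] * n
        for pos, orig_idx in enumerate(perm):
            ranks[orig_idx] = ranks_of_presented[pos]
        outputs.append(ranks)
    # Exhaustive Kemeny-Young aggregation: only feasible for small n;
    # real implementations use faster exact or approximate solvers.
    best = min(
        permutations(range(1, n + 1)),
        key=lambda cand: sum(kendall_tau_distance(cand, out) for out in outputs),
    )
    return list(best)
```

In practice, `llm_rank` would wrap a listwise prompt like those in Table 1 and parse the model's ordered response into ranks.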
Our approach returns $\bar{\sigma}$ as the prediction for $X$ and terminates. Although this calculation is NP-hard, fast exact and approximate algorithms exist (Conitzer et al., 2006; Ali and Meilă, 2012), many implemented in our codebase.

Passage reranking. The task of passage ranking ranks a set of provided passages in order of relevance to a given query. The use of permutation self-consistency for this case deserves special attention. Due to the LLM input length constraint, predominant LLM-based approaches such as RankGPT (Sun et al., 2023), LRL (Ma et al., 2023), and RankVicuna (Pradeep et al., 2023) stride the LLM across fixed windows of items from the back of the list to the front, rather than output a ranking in a single pass. In this case, we simply apply permutation self-consistency to each window.
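For passage reranking, the per-window application might look like the following sketch, which reuses the `permutation_self_consistency` helper from the previous snippet; the window and stride sizes here are illustrative assumptions, not prescribed values:

```python
def rerank_with_windows(passages, llm_rank, window=20, stride=10, m=20):
    """Slide a fixed-size window from the back of the candidate list to the
    front, applying permutation self-consistency inside each window."""
    passages = list(passages)
    start = max(0, len(passages) - window)
    while True:
        chunk = passages[start:start + window]
        central = permutation_self_consistency(chunk, llm_rank, m=m)
        # Reorder the window so that rank 1 comes first.
        order = sorted(range(len(chunk)), key=lambda i: central[i])
        passages[start:start + window] = [chunk[i] for i in order]
        if start == 0:
            break
        start = max(0, start - stride)
    return passages
```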
# 2.3 Theoretical Guarantees
We now show that for certain kinds of noisy rankings, the Kemeny ranking can recover the true ranking given enough observations. For example, if there always exists some random pair of items that are correctly ranked among randomly ordered observations, we will converge to the true ranking.

Definition 2.1. For two rankings $\sigma_1$ and $\sigma_2$, the concordant subset is a set $S'$ where $\forall i$ and $j \in S'$, $\sigma_1(i) < \sigma_1(j) \wedge \sigma_2(i) < \sigma_2(j)$ or $\sigma_1(i) > \sigma_1(j) \wedge \sigma_2(i) > \sigma_2(j)$.

Proposition 2.1. Let there be a true ranking $\sigma$ and a sequence of noisy rankings $\hat{\sigma} := \{\hat{\sigma}_i\}_{i=1}^{m}$. Suppose each noisy ranking has a uniformly random, nonempty concordant subset $S'$ with $\sigma$, and the remaining rank elements not in $S'$ represent a random permutation. Then the Kemeny–Young ranking $\bar{\sigma}$ of $\hat{\sigma}$ converges in probability to $\sigma$, i.e., it is a consistent estimator.
Proof sketch. Let $A_{ij}$ be the event that the sum of discordant pairs indexed by $i$ and $j$ across each ranking in $\hat{\sigma}$ is greater than the number of concordant ones. $P(A_{ij})$ is upper-bounded by a quantity that vanishes in $m$. A union bound on $P(\bigcup_{i,j} A_{ij})$ shows that the probability of the sum of discordant pairs being greater than that of the concordant pairs vanishes for any pair as $m$ approaches infinity. Thus, the Kemeny-optimal ranking approaches $\sigma$ as $m \to \infty$, concluding our proof.
To extend this result, we demonstrate that, in the presence of any arbitrary distribution of ranking noise (e.g., the hypothetical "lost-in-the-middle" kind), characterized empirically in Section 3.2, our approach yields a consistent estimator for the true ranking, given that at least one possibly nonrandom pair of items is always concordant:
Proposition 2.2. Let there be a true ranking $\sigma$, input ranking $\sigma_{\text{in}}$, and a ranking noise distribution $P(\sigma_{\text{noisy}} \mid \sigma_{\text{in}})$, where $\sigma_{\text{noisy}}$ always has a (possibly nonuniform) nonempty concordant subset $S'$ with $\sigma$. Then the permutation self-consistency procedure is a consistent estimator of $\sigma$ when applied to $\sigma_{\text{in}}$ as the input and an LLM parameterized by $P(\sigma_{\text{noisy}} \mid \sigma_{\text{in}})$.
Proof sketch. Observe that the first shuffling stage of permutation self-consistency transforms the premises into those of Proposition 2.1. Since the next stage of the method involves the same Kemeny–Young ranking as the proposition does, the rest of the proof quickly follows.
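As an informal empirical check of this robustness claim (ours, not the paper's experiment), the following sketch simulates noisy rankings that each keep one random concordant pair with the truth and verifies that the Kemeny ranking of many such observations tends to recover the true ranking as the sample grows:

```python
import random
from itertools import combinations, permutations

def d_kt(a, b):
    # Kendall tau distance between two rank vectors.
    return sum(
        (a[i] - a[j]) * (b[i] - b[j]) < 0
        for i, j in combinations(range(len(a)), 2)
    )

def noisy_observation(true_ranks, rng):
    """Random permutation with one randomly chosen pair forced to agree
    with the true ranking (a crude instance of the assumed noise)."""
    n = len(true_ranks)
    obs = list(range(1, n + 1))
    rng.shuffle(obs)
    i, j = rng.sample(range(n), 2)
    if (obs[i] - obs[j]) * (true_ranks[i] - true_ranks[j]) < 0:
        obs[i], obs[j] = obs[j], obs[i]  # make the chosen pair concordant
    return obs

def kemeny(observations, n):
    # Exhaustive Kemeny-Young aggregation, feasible only for small n.
    return min(
        permutations(range(1, n + 1)),
        key=lambda cand: sum(d_kt(cand, obs) for obs in observations),
    )

rng, n = random.Random(0), 5
true_ranks = list(range(1, n + 1))
for m in (10, 100, 1000):
    sample = [noisy_observation(true_ranks, rng) for _ in range(m)]
    print(m, kemeny(sample, n) == tuple(true_ranks))
```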
1. MathSort: Sort ten arithmetic expressions by value.
Example: Sort the following expressions from smallest to largest: 3 / 5, 2 - 9, 6 * 5, 2 * 1, 3 / 1, 9 * 9, 1 - 9, 9 + 8, 3 / 5, 1 / 9. The output format should be a comma-separated list containing the exact expressions; do not reduce them. Only respond with the results; do not say any word or explain.
2. WordSort: Order ten words alphabetically.
Example: Order these words alphabetically: aaron, roam, aardvark, nexus, [...]. The output format should [...]
3. GSM8KSort: Unscramble sentences from GSM8K.
Example: Order the scrambled sentences logically: - She took 1 hour to walk the first 4 miles [...] - Marissa is hiking a 12-mile trail. - If she wants her average speed to be 4 [...] The output format should have each sentence on a new line. Only respond with the results; do not say any [...]
Table 2: Example prompts for our three sorting tasks.
# 3 Experiments | 2310.07712#14 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
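The three sorting tasks listed above are straightforward to regenerate and score. A hedged sketch of how a MathSort-style example might be built and an output graded with Kendall's tau follows; the prompt wording mirrors Table 2, while `make_mathsort_example` and `tau_score` are illustrative helpers, not the paper's evaluation code.

```python
import random
from scipy.stats import kendalltau

def make_mathsort_example(n=10, seed=0):
    """Build one MathSort-style prompt of n distinct `digit op digit` expressions."""
    rng = random.Random(seed)
    ops = ["+", "-", "*", "/"]
    exprs = []
    while len(exprs) < n:
        # Second operand drawn from 1-9 (an assumption) so that division is defined.
        expr = f"{rng.randint(0, 9)} {rng.choice(ops)} {rng.randint(1, 9)}"
        if expr not in exprs:
            exprs.append(expr)
    prompt = (
        "Sort the following expressions from smallest to largest: "
        + ", ".join(exprs)
        + ". The output format should be a comma-separated list containing the exact "
        "expressions; do not reduce them. Only respond with the results; do not say "
        "any word or explain."
    )
    return exprs, prompt

def tau_score(predicted, gold):
    """Kendall tau correlation between a predicted ordering and the gold ordering."""
    gold_rank = {item: i for i, item in enumerate(gold)}
    tau, _ = kendalltau(range(len(predicted)), [gold_rank[x] for x in predicted])
    return tau

exprs, prompt = make_mathsort_example()
gold = sorted(exprs, key=lambda e: eval(e))  # true order by arithmetic value
predicted = gold[:]                          # stand-in for a (perfect) LLM response
print(round(tau_score(predicted, gold), 3))  # 1.0
```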
2310.07712 | 15 | Table 2: Example prompts for our three sorting tasks.
# 3 Experiments
We conduct experiments on sorting and passage ranking, which constitute two distinct types of problems in listwise ranking.
# 3.1 Sorting Tasks
Setup. We build three functionally distinct datasets called MathSort, WordSort, and GSM8KSort, corresponding to numerical sorting, alphabetical ordering, and sentence arrangement, respectively. For MathSort, the task is to sort ten random mathematical expressions of the form digit op digit, where digit is a single digit and op is one of +, -, *, or /. In WordSort, the goal is to order ten random English words alphabetically. Finally, GSM8KSort is a sentence-unscrambling task over the test set of the GSM8K reasoning dataset (Cobbe et al., 2021). For consistency and tractability, we use 100 examples in each dataset; see Table 2 for prompts. | 2310.07712#15 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 16 | Although less practical than passage ranking, these synthetic sorting datasets have certain advantages. The items are intrinsically comparable, especially in MathSort and WordSort, whose elements have unequivocal order (e.g., "aardvark" must precede "abacus" in WordSort). On the other hand, passage ranking relies on human judgment, where label noise may confound findings. Synthetic construction also enables control of item length: MathSort examples are fixed at three tokens, WordSort at a single word, and GSM8KSort at one sentence.
For our LLMs, we choose the open family of LLaMA v2 models (Touvron et al., 2023) and the | 2310.07712#16 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 17 | For our LLMs, we choose the open family of LLaMA v2 models (Touvron et al., 2023) and the
Method MATHSORT WORDSORT GSM8KSORT Orig. PSC Orig. PSC Orig. PSC LLaMA2-7B 8.7 6.1 LLaMA2-13B 16.7 26.0 65.4 78.8 42.7 LLaMA2-70B 27.9 31.3 74.6 81.0 61.1 64.0 75.2 85.9 88.1 82.1 GPT-3.5 83.5 89.6 89.9 92.0 88.4 GPT-4 24.2 41.3 59.9 21.3 46.8 71.2 88.4 90.5
Table 3: Kendall tau correlation scores on our sorting tasks. Original scores are the median across 20 single runs, and PSC aggregates those 20. Underline indicates improvement from PSC and bold denotes best.
[Figure: distribution of individual-run scores vs. our PSC for MathSort, WordSort, and GSM8KSort under GPT-3.5 and GPT-4; x-axis: Tau Score.] | 2310.07712#17 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 18 | Figure 3: The distribution of sorting task scores from twenty individual runs plotted against our PSC score. Our PSC outperforms the best of any individual run.
closed GPT-3.5 (Turbo, the "0613" version) and GPT-4 from OpenAI, both the state of the art. We apply permutation self-consistency with m = 20 output rankings, resulting in 20 parallel calls to the LLM per example.
Results. We present our main results in Table 3, naming our method "PSC" for short. PSC consistently outperforms conventional inference on all three datasets and five models by an average of 42% in Kendall tau correlation, with gains skewed toward the smaller LLaMA2 variants. Specifically, LLaMA2-7B, 13B, and 70B attain average score increases of 157%, 28%, and 12%, respectively, while GPT-3.5 and GPT-4 improve by 3–18% and 2–7%. We attribute this to the already high quality of the larger 70B and GPT models, which leave less room for improvement. We conclude that PSC improves listwise ranking on sorting tasks, with higher gains on lower-quality models. | 2310.07712#18 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
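The m = 20 shuffle-and-aggregate procedure applied above can be sketched in a few lines. Here, `call_llm_rank` is a hypothetical stand-in for the black-box LLM call, and `mean_rank_aggregate` is a simple Borda-style placeholder used only to keep the sketch self-contained; the paper aggregates with the Kemeny-optimal ranking, as in the earlier sketch.

```python
import random

def permutation_self_consistency(items, call_llm_rank, aggregate, m=20, seed=0):
    """Shuffle the input list m times, collect the model's rankings, and aggregate them."""
    rng = random.Random(seed)
    sampled_rankings = []
    for _ in range(m):
        shuffled = items[:]
        rng.shuffle(shuffled)              # randomize prompt order, instructions unchanged
        sampled_rankings.append(call_llm_rank(shuffled))
    return aggregate(sampled_rankings)     # central ranking over the samples

def mean_rank_aggregate(rankings):
    """Borda-style placeholder: order items by their average position across samples."""
    items = rankings[0]
    avg = {x: sum(r.index(x) for r in rankings) / len(rankings) for x in items}
    return sorted(items, key=avg.get)

def noisy_sorter(shuffled, _rng=random.Random(1)):
    """Toy 'LLM' that sorts numbers but sometimes swaps one adjacent pair."""
    out = sorted(shuffled)
    if _rng.random() < 0.5:
        i = _rng.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

print(permutation_self_consistency([4, 1, 3, 2, 5], noisy_sorter, mean_rank_aggregate))
# Typically recovers [1, 2, 3, 4, 5] once the occasional swaps are averaged out.
```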
2310.07712 | 19 | One foreseeable question is whether any individual runs surpass PSC, which would weaken the case for rank aggregation. To answer this, we plot the distribution of the individual scores against PSC in Figure 3. We observe that PSC reliably beats all individual runs by 1–12%, improving the most on tasks and models with lower baseline quality, such as MathSort and GPT-3.5. These findings bolster the necessity of the aggregation step. | 2310.07712#19 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 20 | First Stage Top-k Method TREC-DL19 TREC-DL20 Original Our PSC Original Our PSC None All All (1) BM25 (2) SPLADE++ ED 50.58 73.08 â â 47.96 71.97 â â Supervised Approaches BM25 100 100 (3) MonoT5 (T5-3B) (4) RankT5 (T5-3B) 71.83 71.22 â â 68.89 69.49 â â Unsupervised Approaches BM25 100 100 100 20 20 100 100 (5) PRP-Best (FLAN-T5-XXL) (6) PRP-Best (FLAN-UL2) (7) RankVicuna (8) Single (GPT-3.5) (9) Single (GPT-4) (10) RankGPT (GPT-3.5) (11) RankGPT (GPT-4) 69.87 72.65 66.83 60.95 (60.96) 60.88 (60.92) 68.00 (68.13) 75.00 (75.59) â â 68.70 61.49 64.88 70.77 75.66 69.85 70.68 65.49 57.64 (57.68) 57.78 (57.89) 62.08 | 2310.07712#20 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 22 | Table 4: nDCG@10 results on TREC-DL19 and TREC-DL20. Scores in parentheses are the maximum across three runs, while those outside are the median. Improvements from PSC are underlined and best per-section scores are bolded. According to the one-tailed signed-rank test, paired differences between the original and PSC are statistically significant at the 99% confidence level (p < 0.01).
# 3.2 Passage Reranking Task
For a more applied case, we evaluate our method on passage reranking. In this task, we are given a query and an initial list of relevant documents from a fast, first-stage retriever. We must then reorder these documents to improve their final relevance. | 2310.07712#22 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 23 | Setup. From the TREC Deep Learning Track, we select the two passage retrieval test sets from TREC-DL19 and TREC-DL20 (Craswell et al., 2020, 2021), both canon in the literature (Pradeep et al., 2023; Qin et al., 2023). These datasets are built on the MS MARCO v1 corpus (Bajaj et al., 2016), which contains 8.8 million passages. As is standard, we rerank the top-100 passages retrieved by the first-stage BM25 (Robertson et al., 2009) or SPLADE++ EnsembleDistill (ED; Formal et al., 2021), reporting nDCG@10 scores for quality. | 2310.07712#23 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 24 | Like the sorting tasks, we pick one open LLM, RankVicuna (Pradeep et al., 2023), fine-tuned from Vicuna-7B (Chiang et al., 2023), and one closed family, GPT-3.5 and GPT-4; all models are the present state of the art. RankVicuna and GPT-3.5 have matching context lengths of 4096, half of GPT-4's 8192. We similarly apply permutation self-consistency with m = 20 runs. Furthermore, for three of our variants named "single," we reduce the top-100 to 20 and discard the windowing strategy used in RankGPT and RankVicuna, described in Section 2.2. This allows us to fit all passages in a
single call and thus remove potentially confounding interactions between the windowing method and permutation self-consistency. | 2310.07712#24 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 25 | single call and thus remove potentially confounding interactions between the windowing method and permutation self-consistency.
For our supervised baselines, we report results from the MonoT5 (Nogueira et al., 2020) and RankT5 (Zhuang et al., 2023) models, based on the T5 language model (Raffel et al., 2020). For the unsupervised baselines, we copy figures from the state-of-the-art pairwise ranking results across the variants in Qin et al. (2023), which we name PRP-Best for short.
Results. We present our results in Table 4. With PSC, we establish four state-of-the-art results: first, a new best in BM25 for DL19 (row 11), edging ahead of the prior record from RankGPT by 0.07 points; second, the same for DL20 (row 11), leading PRP by 0.32 points (row 6); third, the overall top result on DL19 of 76.87 from SPLADE++ (row 13), outperforming the previous by 1.28 (row 11); and fourth, the state of the art of 78.52 on DL20 (row 13), a 3.79-point increase over the previous best from RankVicuna (row 12). | 2310.07712#25 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 26 | Overall, our PSC approach consistently improves ordinary decoding and beats the maximum individual score across three runs (see scores in parentheses), yielding gains on 13 out of 16 model–dataset combinations (see PSC columns in rows 7–14). On average, RankVicuna, GPT-3.5, and GPT-4 see relative score increases of 0.4%, 2%, and 5% with PSC. Mixed results on RankVicuna
[Figure 4 panels: [GPT-3.5] DL19 and [GPT-3.5] DL20; x-axis: position of the second item, πi(b); y-axis: position of the first item, πi(a).] | 2310.07712#26 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 27 | [Figure 4 panels: [GPT-4] DL19 and [GPT-4] DL20; x-axis: position of the second item, πi(b); y-axis: position of the first item, πi(a).]
(a) Single (GPT-3.5) on DL19 and DL20. (b) Single (GPT-4) on DL19 and DL20.
Figure 4: Distribution of "reversions" after reranking. Blues are below the observed dataset average and reds above the average. For two input list positions i ∈ [1, 20] and j ∈ (i, 20], i indexes the rows and j the columns. For example, the cell at (1, 2) is the reversion of the first two input items across the dataset. Note that highly saturated colors indicate over- and under-reversion relative to other pairs in the dataset rather than in the absolute sense. | 2310.07712#27 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 28 | likely result from its inherent robustness to positional bias, instilled by its training process that uses random shuffling as part of data augmentation; thus, the shuffling step from PSC has less effect.
position pair, with πi(a) as the y-axis and πi(b) as the x-axis, whose positions range from 1–20 for each of the top-20 passages. For cross-model comparability, we normalize by dataset.
The choice of the first-stage reranker has a clear impact, with SPLADE++ adding an average of 7.26 points over the corresponding BM25 models. In fact, reranking the top-20 SPLADE items (row 13) in a single call outperforms doing the top-100 (row 14) using a sliding call window. We conjecture that this results from imperfections in the RankGPT windowing algorithm, which show up especially for strong retrievers, where the top-20 already contains many relevant documents. | 2310.07712#28 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 29 | Finally, we note one particularly intriguing phenomenon: in the top-20 single-call setting, GPT-3.5 and GPT-4 have similar baseline quality without PSC (rows 8 and 9, first column in each group), but PSC boosts GPT-4 more than GPT-3.5 (row 9, second columns). As we explore in depth next, this possibly results from GPT-4 being more "equally biased" across the item positions and hence providing PSC more useful rankings for aggregation. | 2310.07712#29 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 30 | Positional bias analysis. We analyze how list order bias varies with the input positions on the "single" GPT models for BM25 (from Table 4, rows 8 and 9), which avoid confounds from RankGPT's window strategy. The design of our analysis is as follows, with notation mirroring Section 2.2: consider the item pair (Xa, Xb) with input list positions (πi(a), πi(b)), where πi(a) < πi(b) for some random permutation πi. If the output positions satisfy σ̂i(a) > σ̂i(b) after reranking, we say the order is reversed, and we call the sum of reversed pairs per data point "reversions." In Figure 4, we visualize the distribution of reversions by input position pair. Under the null hypothesis of no positional bias, the distribution of reversions should be uniform because the input lists are randomly permuted, which severs any association between input order and output ranking. However, Figure 4 contradicts this. Prominently, the center of Figure 4a is redder than the edges, indicating that pairs with both items closer to the middle are reversed | 2310.07712#30 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
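The "reversions" statistic defined in the chunk above is easy to tabulate. A small sketch follows, where the two position maps play the roles of πi and σ̂i; the helper name and the example items are illustrative, not taken from the paper's code.

```python
from itertools import combinations

def count_reversions(input_pos, output_pos):
    """Count item pairs whose relative order flips between input and output positions."""
    items = list(input_pos)
    return sum(
        1
        for a, b in combinations(items, 2)
        if (input_pos[a] - input_pos[b]) * (output_pos[a] - output_pos[b]) < 0
    )

# Example: the reranker moves the last of four input passages to the front,
# reversing its order with each of the three passages ahead of it.
input_pos = {"p1": 0, "p2": 1, "p3": 2, "p4": 3}    # pi_i: positions in the shuffled prompt
output_pos = {"p4": 0, "p1": 1, "p2": 2, "p3": 3}   # sigma_hat_i: positions after reranking
print(count_reversions(input_pos, output_pos))      # 3
```

Figure 4's heatmaps accumulate such counts per input-position pair (πi(a), πi(b)) and normalize them per dataset.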
2310.07712 | 31 | Figure 4 contradicts this. Prominently, the center of Figure 4a is redder than the edges, indicating that pairs with both items closer to the middle are reversed more often by GPT-3.5 than those at the start and the end of input lists. In Figure 4b, bottom areas are also more red than the top, showing that pairs with items at the end of the list are more frequently reversed by GPT-4 than pairs at the start are. | 2310.07712#31 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 32 | Other subtle patterns emerge upon examination. First, in Figure 4a, a dark block appears after column 15, suggesting that GPT-3.5 does not focus well on items past the fifteenth. Second, the colors interleave in a grid pattern across both columns and rows, possibly an artifact of its pretraining. We conclude that different positional biases exist in reranking LLMs, varying by model and dataset.
The analysis also helps to explain our prior experimental results. Comparing Figures 4a and 4b, we observe that GPT-4 generally reverses more pairs than GPT-3.5 and is closer to the optimal number of reversals, thus providing higher quality to the aggregated rankings. This may explain why PSC benefits GPT-4 (single) more than it does GPT-3.5 (single), i.e., row 9 vs. row 8 in Table 4. Similarly, both models tend to reverse more pairs on DL20 than on DL19, and results also indicate that PSC improves DL20 more than it does DL19. | 2310.07712#32 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 33 | [Figure 5 panels: Quality vs. m Rankings (GPT-3.5, GPT-4) and Quality vs. Temp. (GPT-3.5, GPT-4), with curves for WordSort, MathSort, GSM8KSort, TREC-DL19, and TREC-DL20; y-axis: score change relative to the reference setting.]
(a) Quality vs. number of output rankings (ρ = 0.17).
(b) Quality vs. text generation temperature (ρ = -0.078). | 2310.07712#33 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 34 | (a) Quality vs. number of output rankings (ρ = 0.17).
(b) Quality vs. text generation temperature (ρ = -0.078).
Figure 5: Quality across all datasets for various choices of aggregate size and temperature. For output rankings, we use m = 20 as our frame of reference; for temperature, 0.0. In the subfigure captions, ρ denotes Spearman's rho.
# 4 Sensitivity Analyses
In this section, we investigate the importance of each component of permutation self-consistency to justify our modeling choices.
# 4.1 Hyperparameter Studies
Aggregation Method Quality (GPT-3.5) Aggregation Method Quality (GPT-4)
[Figure 6 bars: Single Best, RRF, and Kemeny scores across the Math, Word, GSM8K, DL19, and DL20 tasks; y-axis: Score.] | 2310.07712#34 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 35 | Output rankings. Throughout the paper, we espoused aggregating over m = 20 output rankings, but is more actually better? If, say, five outperforms twenty, we could decrease the number of parallel calls to the model, conceivably saving cost. To answer this question, we sweep the aggregate size between one and twenty across all datasets, plotting the resulting score differences from using the default twenty. We pick GPT-3.5 and GPT-4 as our target models, as they are used in all tasks.
We plot our results in Figure 5a. On both models, we find that output quality rapidly converges to that of using the full twenty, five being 67% as effective on average. The score averages increase monotonically with the number of rankings (ρ = 0.17), with GSM8KSort on GPT-3.5 as an outlier (left subplot), possibly because of output variance; the next study on sampling temperature shows that it is highly sensitive to randomness. We conclude that picking m = 20 output rankings is effective, though returns sharply diminish after 5–10. | 2310.07712#35 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
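The aggregate-size sweep described above amounts to re-aggregating prefixes of the already-sampled rankings and comparing against the full m = 20 aggregate. A hedged sketch follows, where `aggregate` and `score` are placeholders for the Kemeny aggregator and the task metric rather than the paper's code.

```python
def sweep_aggregate_size(sampled_rankings, gold, aggregate, score, max_m=20):
    """Score the aggregate built from only the first m sampled rankings,
    reported as differences from the full-size aggregate (as in Figure 5a)."""
    absolute = {
        m: score(aggregate(sampled_rankings[:m]), gold)
        for m in range(1, max_m + 1)
    }
    return {m: s - absolute[max_m] for m, s in absolute.items()}
```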
2310.07712 | 36 | Sampling temperature. Self-consistency (Wang et al., 2023b) uses temperature as their sampling strategy to produce different outputs to aggregate over, but it is ineffective for us, perhaps because listwise ranking does not admit multiple reasoning paths like chain-of-thought prompting does. To assess this rigorously, we vary the temperature between 0 and 0.75, following the original method's 0.5–0.7 (Wang et al., 2023b). For consistency, we use the same setup from before and fix m = 20.
Figure 6: Scores for the alternative reciprocal rank fusion (RRF) and our Kemeny rank aggregation method.
We plot our results in Figure 5b. Temperature has little effect on the quality (ρ = -0.078), again with GSM8KSort as an outlier, where the extra randomness drastically hurts quality on both models. This sensitivity to randomness is also evident in Figure 3, where GSM8K has the widest interquartile range of the tasks. In conclusion, this evidence grounds our choice of not using temperature.
# 4.2 Rank Aggregation Comparison
Reciprocal rank fusion (RRF; Cormack et al., 2009) is a state-of-the-art alternative to our chosen Kemeny ranking method. It sorts items by the score | 2310.07712#36 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 37 | RRFScore(X_j) := \sum_{i=1}^{m} \frac{1}{k + \hat{\sigma}_i(j)} \qquad (6)
for each item X_j, rankings σ̂_i, and k = 60. RRF had been under our consideration, but we picked Kemeny ranking for its theoretical robustness and empirical effectiveness. Shown in Figure 6, Kemeny beats RRF (p < 0.05) on 8 out of 10 comparisons by a mean of 0.23 points; on average, RRF reaches only 93.5% of the boost that Kemeny does. Its only outperformance on DL19 possibly results from it being suited for information retrieval, its field of origin, but may also be statistical noise. Overall, these results further support our decision to select Kemeny ranking for the aggregation step.
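To make Equation 6 concrete, here is a minimal sketch of RRF aggregation; the item labels and sample rankings below are invented purely for illustration, and k = 60 matches the value used in our comparison.

```python
from collections import defaultdict

def rrf_aggregate(rankings, k=60):
    """Aggregate rankings with reciprocal rank fusion (Eq. 6).

    `rankings` is a list of rankings; each ranking maps item -> rank,
    with rank 1 being the best position.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for item, rank in ranking.items():
            scores[item] += 1.0 / (k + rank)
    # Sort items by descending RRF score to obtain the fused ranking.
    return sorted(scores, key=scores.get, reverse=True)

# Toy example with three sampled rankings over four items.
rankings = [
    {"A": 1, "B": 2, "C": 3, "D": 4},
    {"B": 1, "A": 2, "D": 3, "C": 4},
    {"A": 1, "C": 2, "B": 3, "D": 4},
]
print(rrf_aggregate(rankings))  # ['A', 'B', 'C', 'D']
```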
# 5 Related Work
The holistic direction of our work is in enhancing the ranking ability of large language models. Most closely, contrast-consistent ranking (Stoehr et al., 2023) proposes to train order-enforcing probes on the latent vectors of large language models for improving rank consistency. We differentiate our method by not presuming access to model internals, which is becoming increasingly common with closed-source but academically interesting LLMs such as GPT-4. | 2310.07712#37 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 38 | The specific empirical tasks in this paper have also seen recent progress. For passage ranking using language models, BERT-based (Devlin et al., 2019; Nogueira et al., 2020) and T5-tuned (Zhuang et al., 2023; Raffel et al., 2020) approaches represent the earliest language models for passage ranking. RankGPT (Sun et al., 2023) spearheaded much of the post-ChatGPT work, beating the supervised state of the art with an unsupervised LLM for the first time. Concurrently, LRL (Ma et al., 2023) reached the same conclusions using a similar method on GPT-3. Along a non-listwise direction, PRP (Qin et al., 2023) represents a pairwise method leveraging open-source large language models, as reported in Table 4.
Our secondary sorting tasks for LLMs, while less practical, have had attention as well, mostly in the context of evaluation, with BigBench (Suzgun et al., 2022) providing more than 200 distinct tasks, including one in alphabetical ordering,1 which we enlarge and expand on in WordSort. Stoehr et al. (2023) also constructed synthetic sorting datasets for evaluating listwise ranking, but they are private and hence uncomparable. | 2310.07712#38 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 39 | We are not the first to establish positional biases in LLMs in general. Lu et al. (2022) are among the earliest to relate prompt order to the quality of in-context learning. Recently, Liu et al. (2023) and Wang et al. (2023a) characterized positional bias in the context of list-oriented tasks, such as question answering and response evaluation. However, we are to our knowledge the first to characterize the position biases of passage-ranking LLMs with respect to pairwise item positions.
Lastly, our paper is connected to all the meta-algorithms for improving LLM generation. As a pertinent example, Lu et al. (2022) study prompt order on in-context learning classification tasks,
1https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/word_sorting
proposing an entropy-based statistic over development sets to find performant permutations. Aggarwal et al. (2023) make self-consistency more efficient, halting the procedure when enough samples have been collected. To keep our method in its simplest form, as self-consistency had not been applied to listwise ranking to begin with, we based our design on the original (Wang et al., 2023b).
# 6 Conclusions and Future Work | 2310.07712#39 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 40 | # 6 Conclusions and Future Work
In the present work, we introduce permutation self-consistency, a novel decoding method to improve the ranking ability of black-box LLMs by mitigating potential sensitivities and biases to list item order. We intervene on prompt list order to produce multiple rankings and then return an aggregated statistic as the prediction, which intuitively has less association with the controlled variable, prompt list order. Theoretically, we prove the robustness of our method to arbitrary, fixed noise distributions under certain conditions. Empirically, our method consistently improves upon ordinary decoding on all 15 of our sorting model–dataset combinations and 13 out of 16 of our passage reranking ones. Further analyses indicate the positional biases in the reordering process of input rankings. Finally, our sensitivity analyses justify our design choices of 20 output rankings, zero sampling temperature, and the Kemeny ranking method.
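As a rough sketch of the overall procedure (shuffle the input list, query the black-box ranker, then aggregate with a central Kemeny ranking), the code below assumes a hypothetical rank_fn that returns a permutation of whatever list it is given; the brute-force Kemeny step is only practical for short lists and stands in for the solvers used in practice.

```python
import itertools
import random

def kendall_tau_distance(order_a, order_b):
    """Number of discordant item pairs between two orderings."""
    pos_a = {item: i for i, item in enumerate(order_a)}
    pos_b = {item: i for i, item in enumerate(order_b)}
    return sum(
        1
        for x, y in itertools.combinations(order_a, 2)
        if (pos_a[x] < pos_a[y]) != (pos_b[x] < pos_b[y])
    )

def kemeny_aggregate(rankings):
    """Exact Kemeny ranking by brute force (small lists only)."""
    items = rankings[0]
    return min(
        itertools.permutations(items),
        key=lambda cand: sum(kendall_tau_distance(cand, r) for r in rankings),
    )

def permutation_self_consistency(items, rank_fn, m=20, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(m):
        shuffled = items[:]
        rng.shuffle(shuffled)              # intervene on prompt list order
        samples.append(rank_fn(shuffled))  # black-box listwise ranker
    return kemeny_aggregate(samples)       # central ranking of the samples
```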
In the future, permutation self-consistency can plausibly be applied to any list-oriented task, regardless of whether the underlying LLM is openly available. Examples include using LLMs for evaluation (Wang et al., 2023a) and annotating human-feedback judgments with LLMs. Another future step is to relax or reformulate our method to be differentiable, enabling training-time application in, say, RankVicuna (Pradeep et al., 2023).
# Limitations | 2310.07712#40 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 41 | # Limitations
We share the same limitations as those of the original self-consistency paper (Wang et al., 2023b). We use multiple LLM calls, potentially to a commercial LLM, which would raise financial cost. Thus, practical applications may require careful weighing of quality gain against elevated expense. Nevertheless, a few calls already help, and returns rapidly diminish past 5–10 calls. We note that our method does not in practice increase latency by much, since all calls can be parallelized, and aggregation time does not rise with the number of samples.
# References
Pranjal Aggarwal, Aman Madaan, Yiming Yang, et al. 2023. Let's sample step by step: Adaptive-consistency for efficient reasoning with LLMs. arXiv:2305.11860.
Alnur Ali and Marina Meilă. 2012. Experiments with Kemeny ranking: What works when? Mathematical Social Sciences.
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268. | 2310.07712#41 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 42 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv:2110.14168.
Vincent Conitzer, Andrew Davenport, and Jayant Kalagnanam. 2006. Improved bounds for computing Kemeny rankings. In Proceedings of the 21st National Conference on Artificial Intelligence (Volume 1).
Gordon V. Cormack, Charles Clarke, and Stefan Buettcher. 2009. Reciprocal rank fusion outperforms Condorcet and individual rank learning methods. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. | 2310.07712#42 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 43 | Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv:2109.10086.
John G. Kemeny. 1959. Mathematics without numbers. Daedalus.
Maurice George Kendall. 1948. Rank correlation methods. | 2310.07712#43 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 44 | John G. Kemeny. 1959. Mathematics without numbers. Daedalus.
Maurice George Kendall. 1948. Rank correlation methods.
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. arXiv:2307.03172.
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv:2305.02156.
Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020. | 2310.07712#44 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 45 | Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. RankVicuna: Zero-shot listwise document reranking with open-source large language models. arXiv:2309.15088.
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv:2306.17563.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval. | 2310.07712#45 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 46 | Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval.
Niklas Stoehr, Pengxiang Cheng, Jing Wang, Daniel Preotiuc-Pietro, and Rajarshi Bhowmik. 2023. Unsupervised contrast-consistent ranking with language models. arXiv:2309.06991.
Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv:2304.09542.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, et al. 2022. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv:2210.09261. | 2310.07712#46 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 47 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288.
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023a. Large language models are not fair evaluators. arXiv:2305.17926.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. | 2310.07712#47 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.07712 | 48 | Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv:2303.18223.
Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2023. RankT5: Fine-tuning T5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. | 2310.07712#48 | Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models | Large language models (LLMs) exhibit positional bias in how they use context,
which especially complicates listwise ranking. To address this, we propose
permutation self-consistency, a form of self-consistency over ranking list
outputs of black-box LLMs. Our key idea is to marginalize out different list
orders in the prompt to produce an order-independent ranking with less
positional bias. First, given some input prompt, we repeatedly shuffle the list
in the prompt and pass it through the LLM while holding the instructions the
same. Next, we aggregate the resulting sample of rankings by computing the
central ranking closest in distance to all of them, marginalizing out prompt
order biases in the process. Theoretically, we prove the robustness of our
method, showing convergence to the true ranking in the presence of random
perturbations. Empirically, on five list-ranking datasets in sorting and
passage reranking, our approach improves scores from conventional inference by
up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B), surpassing the previous
state of the art in passage reranking. Our code is at
https://github.com/castorini/perm-sc. | http://arxiv.org/pdf/2310.07712 | Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, Ferhan Ture | cs.CL, cs.LG | First two authors contributed equally; 10 pages, 6 figures | null | cs.CL | 20231011 | 20231011 | [
{
"id": "2305.17926"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1611.09268"
},
{
"id": "2109.10086"
},
{
"id": "2303.18223"
},
{
"id": "2102.07662"
},
{
"id": "2307.03172"
},
{
"id": "2309.06991"
},
{
"id": "2305.02156"
},
{
"id": "2309.15088"
},
{
"id": "2306.17563"
},
{
"id": "2307.09288"
},
{
"id": "2305.11860"
},
{
"id": "2003.07820"
}
] |
2310.06825 | 1 | Abstract
We introduce Mistral 7B, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B – Instruct, that surpasses the Llama 2 13B – Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license. Code: https://github.com/mistralai/mistral-src Webpage: https://mistral.ai/news/announcing-mistral-7b/
# Introduction | 2310.06825#1 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 2 | # Introduction
In the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model performance often necessitates an escalation in model size. However, this scaling tends to increase computational costs and inference latency, thereby raising barriers to deployment in practical, real-world scenarios. In this context, the search for balanced models delivering both high-level performance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that a carefully designed language model can deliver high performance while maintaining an efficient inference. Mistral 7B outperforms the previous best 13B model (Llama 2, [26]) across all tested benchmarks, and surpasses the best 34B model (LLaMa 34B, [25]) in mathematics and code generation. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [20], without sacrificing performance on non-code related benchmarks.
Mistral 7B leverages grouped-query attention (GQA) [1], and sliding window attention (SWA) [6, 3]. GQA significantly accelerates the inference speed, and also reduces the memory requirement during decoding, allowing for higher batch sizes hence higher throughput, a crucial factor for real-time applications. In addition, SWA is designed to handle longer sequences more effectively at a reduced computational cost, thereby alleviating a common limitation in LLMs. These attention mechanisms collectively contribute to the enhanced performance and efficiency of Mistral 7B. | 2310.06825#2 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 3 | Mistral 7B is released under the Apache 2.0 license. This release is accompanied by a reference implementation1 facilitating easy deployment either locally or on cloud platforms such as AWS, GCP, or Azure using the vLLM [17] inference server and SkyPilot 2. Integration with Hugging Face 3 is also streamlined for easier integration. Moreover, Mistral 7B is crafted for ease of fine-tuning across a myriad of tasks. As a demonstration of its adaptability and superior performance, we present a chat model fine-tuned from Mistral 7B that significantly outperforms the Llama 2 13B – Chat model.
Mistral 7B takes a significant step in balancing the goals of getting high performance while keeping large language models efficient. Through our work, our aim is to help the community create more affordable, efficient, and high-performing language models that can be used in a wide range of real-world applications.
# 2 Architectural details
[Figure 1 illustration: vanilla attention vs. sliding window attention (window size W) vs. effective context length, over the example sentence "The cat sat on the ..."] | 2310.06825#3 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 4 | # 2 Architectural details
[Figure 1 illustration: vanilla attention vs. sliding window attention (window size W) vs. effective context length, over the example sentence "The cat sat on the ..."]
Figure 1: Sliding Window Attention. The number of operations in vanilla attention is quadratic in the sequence length, and the memory increases linearly with the number of tokens. At inference time, this incurs higher latency and smaller throughput due to reduced cache availability. To alleviate this issue, we use sliding window attention: each token can attend to at most W tokens from the previous layer (here, W = 3). Note that tokens outside the sliding window still influence next word prediction. At each attention layer, information can move forward by W tokens. Hence, after k attention layers, information can move forward by up to k × W tokens.
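As a minimal illustration of the attention pattern in Figure 1 (not the optimized FlashAttention/xFormers kernels mentioned later), the sketch below builds a boolean sliding-window causal mask in which query position i may attend to key positions j with i - W < j <= i.

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """mask[i][j] is True iff query position i may attend to key position j."""
    return [
        [(i - window < j <= i) for j in range(seq_len)]
        for i in range(seq_len)
    ]

# With W = 3 (as in Figure 1), position 4 attends to positions 2, 3, 4 only.
mask = sliding_window_mask(seq_len=6, window=3)
print([j for j, ok in enumerate(mask[4]) if ok])  # [2, 3, 4]
```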
Mistral 7B is based on a transformer architecture [27]. The main parameters of the architecture are summarized in Table 1. Compared to Llama, it introduces a few changes that we summarize below.
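As a quick cross-check of the Table 1 values below, the following snippet collects them in a plain Python dictionary (purely illustrative, not a configuration file shipped with the model) and derives two quantities discussed in this section: the grouped-query attention sharing factor and the theoretical sliding-window attention span.

```python
mistral_7b = {
    "dim": 4096,
    "n_layers": 32,
    "head_dim": 128,
    "hidden_dim": 14336,
    "n_heads": 32,
    "n_kv_heads": 8,
    "window_size": 4096,
    "context_len": 8192,
    "vocab_size": 32000,
}

# GQA: every group of 4 query heads shares one key/value head.
gqa_group = mistral_7b["n_heads"] // mistral_7b["n_kv_heads"]   # 4

# SWA: information can flow up to W tokens per layer, so after all layers
# the theoretical attention span is roughly W * n_layers.
span = mistral_7b["window_size"] * mistral_7b["n_layers"]       # 131072 (~131K tokens)
```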
dim: 4096
n_layers: 32
head_dim: 128
hidden_dim: 14336
n_heads: 32
n_kv_heads: 8
window_size: 4096
context_len: 8192
vocab_size: 32000
Table 1: Model architecture. | 2310.06825#4 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 5 | Sliding Window Attention. SWA exploits the stacked layers of a transformer to attend to information beyond the window size W. The hidden state in position i of layer k, h_i, attends to all hidden states from the previous layer with positions between i - W and i. Recursively, h_i can access tokens from the input layer at a distance of up to W × k tokens, as illustrated in Figure 1. At the last layer, using a window size of W = 4096, we have a theoretical attention span of approximately 131K tokens. In practice, for a sequence length of 16K and W = 4096, changes made to FlashAttention [11] and xFormers [18] yield a 2x speed improvement over a vanilla attention baseline.
dim: 4096
n_layers: 32
head_dim: 128
hidden_dim: 14336
n_heads: 32
n_kv_heads: 8
window_size: 4096
context_len: 8192
vocab_size: 32000
Table 1: Model architecture. | 2310.06825#5 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 6 | Rolling Buffer Cache. A fixed attention span means that we can limit our cache size using a rolling buffer cache. The cache has a fixed size of W , and the keys and values for the timestep i are stored in position i mod W of the cache. As a result, when the position i is larger than W , past values in the cache are overwritten, and the size of the cache stops increasing. We provide an illustration in Figure 2 for W = 3. On a sequence length of 32k tokens, this reduces the cache memory usage by 8x, without impacting the model quality.
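A minimal sketch of the rolling buffer idea, ignoring the real tensor layout and treating keys and values as opaque objects, might look as follows: the entry for timestep i simply overwrites slot i mod W once more than W timesteps have been seen.

```python
class RollingKVCache:
    """Fixed-size key/value cache: the entry for timestep i lives at i mod W."""

    def __init__(self, window: int):
        self.window = window
        self.keys = [None] * window
        self.values = [None] * window
        self.next_pos = 0  # number of timesteps seen so far

    def append(self, key, value):
        slot = self.next_pos % self.window  # overwrite once past W entries
        self.keys[slot] = key
        self.values[slot] = value
        self.next_pos += 1

    def current(self):
        """Return the cached (key, value) pairs, oldest first."""
        n = min(self.next_pos, self.window)
        start = self.next_pos - n
        order = [(start + t) % self.window for t in range(n)]
        return [(self.keys[s], self.values[s]) for s in order]
```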
1https://github.com/mistralai/mistral-src 2https://github.com/skypilot-org/skypilot 3https://huggingface.co/mistralai
[Figure 2 illustration: rolling buffer cache at timesteps i, i+1, and i+2 for the sequences "This is an example of ...", "Mistral is a good ...", and "The cat sat on the mat ..."]
Figure 2: Rolling buffer cache. The cache has a fixed size of W = 4. Keys and values for position i are stored in position i mod W of the cache. When the position i is larger than W, past values in the cache are overwritten, and the size of the cache stops increasing. The hidden states corresponding to the latest generated tokens are colored in orange. | 2310.06825#6 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 7 | Pre-fill and Chunking. When generating a sequence, we need to predict tokens one-by-one, as each token is conditioned on the previous ones. However, the prompt is known in advance, and we can pre-fill the (k, v) cache with the prompt. If the prompt is very large, we can chunk it into smaller pieces, and pre-fill the cache with each chunk. For this purpose, we can select the window size as our chunk size. For each chunk, we thus need to compute the attention over the cache and over the chunk. Figure 3 shows how the attention mask works over both the cache and the chunk.
[Figure 3 illustration: the current chunk "the dog go to" of the sequence "The cat sat on the mat and saw the dog go to", with the past tokens, the cache, and the current chunk marked]
Figure 3: Pre-fill and chunking. During pre-fill of the cache, long sequences are chunked to limit memory usage. We process a sequence in three chunks, âThe cat sat onâ, âthe mat and sawâ, âthe dog go toâ. The figure shows what happens for the third chunk (âthe dog go toâ): it attends itself using a causal mask (rightmost block), attends the cache using a sliding window (center block), and does not attend to past tokens as they are outside of the sliding window (left block).
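A rough sketch of the chunked pre-fill loop described above is shown below; process_chunk is a placeholder for one forward pass that attends over the cache and the current chunk and then writes the chunk's keys and values into the rolling cache.

```python
def split_into_chunks(prompt_tokens, window: int):
    """Split the known prompt into window-sized chunks for cache pre-fill."""
    return [
        prompt_tokens[start:start + window]
        for start in range(0, len(prompt_tokens), window)
    ]

def prefill(prompt_tokens, window: int, process_chunk):
    """Feed the prompt chunk by chunk before generating any new tokens."""
    for chunk in split_into_chunks(prompt_tokens, window):
        process_chunk(chunk)  # stand-in for one forward pass over cache + chunk

# Example: a 10-token prompt with window size 4 is processed in chunks
# of sizes 4, 4, and 2 before the first new token is generated.
prefill(list(range(10)), window=4, process_chunk=print)
```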
# 3 Results | 2310.06825#7 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | http://arxiv.org/pdf/2310.06825 | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | cs.CL, cs.AI, cs.LG | Models and code are available at
https://mistral.ai/news/announcing-mistral-7b/ | null | cs.CL | 20231010 | 20231010 | [
{
"id": "2302.13971"
},
{
"id": "1905.07830"
},
{
"id": "2305.13245"
},
{
"id": "1803.05457"
},
{
"id": "2103.03874"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "1808.07036"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2307.09288"
},
{
"id": "1811.00937"
},
{
"id": "1904.09728"
},
{
"id": "1905.10044"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "1904.10509"
},
{
"id": "2004.05150"
},
{
"id": "2308.12950"
},
{
"id": "1705.03551"
}
] |
2310.06825 | 8 | # 3 Results
We compare Mistral 7B to Llama, and re-run all benchmarks with our own evaluation pipeline for fair comparison. We measure performance on a wide variety of tasks, categorized as follows:
• Commonsense Reasoning (0-shot): Hellaswag [28], Winogrande [21], PIQA [4], SIQA [22], OpenbookQA [19], ARC-Easy, ARC-Challenge [9], CommonsenseQA [24]
• World Knowledge (5-shot): NaturalQuestions [16], TriviaQA [15]
• Reading Comprehension (0-shot): BoolQ [8], QuAC [7]
• Math: GSM8K [10] (8-shot) with maj@8 and MATH [13] (4-shot) with maj@4
• Code: Humaneval [5] (0-shot) and MBPP [2] (3-shot)
• Popular aggregated results: MMLU [12] (5-shot), BBH [23] (3-shot), and AGI Eval [29] (3-5-shot, English multiple-choice questions only) | 2310.06825#8 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
Detailed results for Mistral 7B, Llama 2 7B/13B, and Code-Llama 7B are reported in Table 2. Figure 4 compares the performance of Mistral 7B with Llama 2 7B/13B and Llama 1 34B (see footnote 4) in different categories. Mistral 7B surpasses Llama 2 13B across all metrics, and outperforms Llama 1 34B on most benchmarks. In particular, Mistral 7B displays superior performance in code, mathematics, and reasoning benchmarks.
Footnote 4: Since Llama 2 34B was not open-sourced, we report results for Llama 1 34B.
[Figure 4: Accuracy (%) of Mistral 7B, Llama 2 7B/13B, and Llama 1 34B across benchmark groups: MMLU, Knowledge, Reasoning, Comprehension, AGI Eval, Math, BBH, and Code.]
| Model | Modality | MMLU | HellaSwag | WinoG | PIQA | Arc-e | Arc-c | NQ | TriviaQA | HumanEval | MBPP | MATH | GSM8K |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA 2 7B | Pretrained | 44.4% | 77.1% | 69.5% | 77.9% | 68.7% | 43.2% | 24.7% | 63.8% | 11.6% | 26.1% | 3.9% | 16.0% |
| LLaMA 2 13B | Pretrained | 55.6% | 80.7% | 72.9% | 80.8% | 75.2% | 48.8% | 29.0% | 69.6% | 18.9% | 35.4% | 6.0% | 34.3% |
| Code-Llama 7B | Finetuned | 36.9% | 62.9% | 62.3% | 72.8% | 59.4% | 34.5% | 11.0% | 34.9% | 31.1% | 52.5% | 5.2% | 20.8% |
| Mistral 7B | Pretrained | 60.1% | 81.3% | 75.3% | 83.0% | 80.0% | 55.5% | 28.8% | 69.9% | 30.5% | 47.5% | 13.1% | 52.2% |
Table 2: Comparison of Mistral 7B with Llama. Mistral 7B outperforms Llama 2 13B on all metrics, and approaches the code performance of Code-Llama 7B without sacrificing performance on non-code benchmarks.
Size and Efficiency. We computed "equivalent model sizes" of the Llama 2 family, aiming to understand Mistral 7B's efficiency in the cost-performance spectrum (see Figure 5). When evaluated on reasoning, comprehension, and STEM reasoning (specifically MMLU), Mistral 7B mirrored performance that one might expect from a Llama 2 model with more than 3x its size. On the knowledge benchmarks, Mistral 7B's performance achieves a lower compression rate of 1.9x, which is likely due to its limited parameter count that restricts the amount of knowledge it can store.

Evaluation Differences. On some benchmarks, there are differences between our evaluation protocol and the one reported in the Llama 2 paper: 1) on MBPP, we use the hand-verified subset; 2) on TriviaQA, we do not provide Wikipedia contexts.
# 4 Instruction Finetuning
To evaluate the generalization capabilities of Mistral 7B, we fine-tuned it on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized: Mistral 7B – Instruct is a simple and preliminary demonstration that the base model can easily be fine-tuned to achieve good performance. In Table 3, we observe that the resulting model, Mistral 7B – Instruct, exhibits superior performance compared to all 7B models on MT-Bench, and is comparable to 13B – Chat models. An independent human evaluation was conducted on https://llmboxing.com/leaderboard.
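As a rough illustration of this fine-tuning setup, the sketch below loads a public instruction dataset from the Hugging Face Hub and renders each example into an instruction-following training string. The dataset name, field names, and template are hypothetical placeholders; the paper does not specify which public datasets were used.

```python
from datasets import load_dataset

# Hypothetical public instruction dataset and field names, for illustration only.
dataset = load_dataset("some-public/instruction-dataset", split="train")

def to_training_example(example: dict) -> dict:
    # Render one (instruction, response) pair into a single training string
    # using an [INST] ... [/INST] style template.
    prompt = f"[INST] {example['instruction']} [/INST]"
    return {"text": prompt + " " + example["response"]}

formatted = dataset.map(to_training_example)
print(formatted[0]["text"][:200])
```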
| Model | Chatbot Arena ELO Rating | MT Bench |
|---|---|---|
| WizardLM 13B v1.2 | 1047 | 7.2 |
| Mistral 7B Instruct | 1031 | 6.84 +/- 0.07 |
| Llama 2 13B Chat | 1012 | 6.65 |
| Vicuna 13B | 1041 | 6.57 |
| Llama 2 7B Chat | 985 | 6.27 |
| Vicuna 7B | 997 | 6.17 |
| Alpaca 13B | 914 | 4.53 |
Table 3: Comparison of Chat models. Mistral 7B – Instruct outperforms all 7B models on MT-Bench, and is comparable to 13B – Chat models.
In this evaluation, participants were provided with a set of questions along with anonymous responses from two models and were asked to select their preferred response, as illustrated in Figure 6. As of October 6, 2023, the outputs generated by Mistral 7B were preferred 5020 times, compared to 4143 times for Llama 2 13B.
[Figure 5 panels: accuracy versus model size (billion parameters) for Llama 2 and Mistral 7B, with annotated effective Llama 2 sizes such as 23B (3.3x) and 38B (5.4x); caption below.]
Figure 5: Results on MMLU, commonsense reasoning, world knowledge and reading comprehension for Mistral 7B and Llama 2 (7B/13B/70B). Mistral 7B largely outperforms Llama 2 13B on all evaluations, except on knowledge benchmarks, where it is on par (this is likely due to its limited parameter count, which limits the amount of knowledge it can compress).
# 5 Adding guardrails for front-facing applications
The ability to enforce guardrails when it comes to AI generation is important for front-facing applications. In this section, we highlight how to leverage system prompting to optionally enforce output constraints on top of our models. Additionally, we showcase the ability of Mistral 7B to perform fine-grained content moderation, which can be useful to enforce quality content in applications.
# 5.1 System prompt to enforce guardrails
We introduce a system prompt (see below) to guide the model to generate answers within specified guardrails, similar to the work done with Llama 2. Using this prompt allows the user to move along the Pareto front of model utility / guardrails enforcement, as indicated in Table 4.
Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity.
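A minimal sketch of how this system prompt can be applied at inference time, assuming the [INST] ... [/INST] instruction format of Mistral 7B – Instruct with the system text prepended to the first user turn; the exact template handling in a given serving stack may differ.

```python
SYSTEM_PROMPT = (
    "Always assist with care, respect, and truth. Respond with utmost utility yet "
    "securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure "
    "replies promote fairness and positivity."
)

def build_guarded_prompt(user_message: str) -> str:
    # Prepend the guardrail system prompt to the first user turn inside the
    # instruction tags before sending the string to the model.
    return f"<s>[INST] {SYSTEM_PROMPT}\n\n{user_message} [/INST]"

print(build_guarded_prompt("How to kill a linux process"))
```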
We use a set of 175 unsafe prompts for evaluating safety. With the recommended system prompt, the model properly declines to answer 100% of the harmful questions.
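The evaluation loop behind this number can be organized as in the short sketch below; the unsafe prompt set, the `generate` callable, and the `is_refusal` heuristic are placeholders and not part of any released code.

```python
def refusal_rate(generate, is_refusal, unsafe_prompts) -> float:
    """Fraction of unsafe prompts that the guarded model declines to answer.

    `generate` maps a prompt to the model's reply (with the system prompt applied),
    and `is_refusal` decides whether that reply is a refusal.
    """
    declined = sum(1 for prompt in unsafe_prompts if is_refusal(generate(prompt)))
    return declined / len(unsafe_prompts)
```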
As an illustration, we provide in Table 5 the answers of both Mistral 7B – Instruct and Llama 2 13B – Chat to the question "How to kill a linux process" with system prompts activated. We observe that Mistral 7B provides a correct response while Llama 2 declines to answer. Note that on this specific question, both models answer correctly when system prompts are deactivated.
| Guardrails | MT Bench |
|---|---|
| No system prompt | 6.84 ± 0.07 |
| Llama 2 system prompt | 6.38 ± 0.07 |
| Mistral system prompt | 6.58 ± 0.05 |
Table 4: System prompts. Mean official MT Bench score over 10 iterations with standard deviation for Mistral 7B – Instruct. For reference, Llama 2 13B – Chat reports official results of 6.65.
# 5.2 Content moderation with self-reflection
Mistral 7B – Instruct can be used as a content moderator: the model itself is able to accurately classify a user prompt or its generated answer as being either acceptable or falling into one of the following categories: illegal activities such as terrorism, child abuse or fraud; hateful, harassing or violent content such as discrimination, self-harm or bullying; unqualified advice, for instance in legal, medical or financial domains.
| Model | Answer |
|---|---|
| Mistral 7B – Instruct with Mistral system prompt | |
| Llama 2 13B – Chat with Llama 2 system prompt | As a helpful and respectful assistant, I cannot provide information on how to kill a Linux process. It is not appropriate or ethical to intentionally harm or terminate processes, as it can cause damage to the system and potentially result in data loss. Additionally, it is important to respect the autonomy and security of other users' processes, as they may be critical to their work or personal activities. Instead, I suggest that you explore other options to address any issues or concerns you may have. |
Table 5: Comparison between Mistral and Llama system prompts. Mistral provides the right answer whereas Llama 2 declines to answer the question "How to kill a linux process".
To do so, we designed a self-reflection prompt that makes Mistral 7B classify a prompt or a generated answer. We evaluated self-reflection on our manually curated and balanced dataset of adversarial and standard prompts and got a precision of 99.4% for a recall of 95.6% (considering acceptable prompts as positives).
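The snippet below sketches how such a self-reflection pass might be wired up: the model is asked to label a candidate text as acceptable or as one of the disallowed categories, and precision/recall are computed with acceptable prompts treated as the positive class. The prompt wording, the `generate` callable, and the label parsing are hypothetical; only the evaluation convention follows the text above.

```python
CATEGORIES = ["acceptable", "illegal", "hateful", "unqualified advice"]

# Hypothetical self-reflection prompt; the actual wording is not published here.
SELF_REFLECTION_TEMPLATE = (
    "You are a strict content moderator. Classify the following text into exactly "
    "one category among {categories}.\n\nText: {text}\n\nCategory:"
)

def moderate(generate, text: str) -> str:
    """Classify `text` with a model call; `generate` is any prompt -> string function."""
    prompt = SELF_REFLECTION_TEMPLATE.format(categories=", ".join(CATEGORIES), text=text)
    prediction = generate(prompt).strip().lower()
    return prediction if prediction in CATEGORIES else "review"  # unparsed outputs flagged for review

def precision_recall(y_true, y_pred, positive="acceptable"):
    # Precision/recall with acceptable prompts treated as the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```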
The use cases are vast, from moderating comments on social media or forums to brand monitoring on the internet. In particular, the end user can then select which categories to effectively filter based on their particular use case.
# 6 Conclusion
Our work on Mistral 7B demonstrates that language models may compress knowledge more than was previously thought. This opens up interesting perspectives: the field has so far put the emphasis on scaling laws in two dimensions (directly associating model capabilities to training cost, as in [14]); the problem is rather three-dimensional (model capabilities, training cost, inference cost), and much remains to be explored to obtain the best performance with the smallest possible model.
# Acknowledgements
We are grateful to CoreWeave for their 24/7 help in marshalling our cluster. We thank the CINECA/EuroHPC team, and in particular the operators of Leonardo, for their resources and help. We thank the maintainers of FlashAttention, vLLM, xFormers, and Skypilot for their precious assistance in implementing new features and integrating their solutions into ours. A huge thanks to Tri Dao and Daniel Haziza for helping to include Mistral-related changes in FlashAttention and xFormers on a tight schedule. We thank the teams of Hugging Face, AWS, GCP, and Azure ML for their intense help in making our model compatible everywhere.
[Figure 6 screenshot from llmboxing.com: for the question "Can you recommend a foundational and accessible book about quantum physics?", Mistral 7B – Instruct recommends "The Quantum Universe: Everything that Can Happen Does Happen" by Brian Cox and Jeff Forshaw, while Llama-2-13b-chat recommends "The Feynman Lectures on Physics" by Richard Feynman; the round winner is mistral-7b-instruct-v0.1.]
Figure 6: Human evaluation of Mistral 7B – Instruct vs Llama 2 13B – Chat. An example of human evaluation from llmboxing.com. The question asks for recommendations of books on quantum physics. Llama 2 13B – Chat recommends a general physics book, while Mistral 7B – Instruct recommends a more relevant book on quantum physics and describes its contents in more detail.
# References
[1] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245, 2023.
[2] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[3] Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
[4] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
[5] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[6] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
[7] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. Quac: Question answering in context. arXiv preprint arXiv:1808.07036, 2018.
[8] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.
[9] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[11] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022.
[12] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
[14] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Thomas Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karén Simonyan, Erich Elsen, Oriol Vinyals, Jack Rae, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In Advances in Neural Information Processing Systems, volume 35, 2022.
[15] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
[16] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.
[17] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
[18] Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza. xformers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022.
[19] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
[20] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
[21] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
[22] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019.
[23] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.
[24] Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A ques- tion answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018.
[25] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.