---
license: mit
---

This repository contains a fastText pretraining data filter targeting the LAMBADA task, as described in the paper [Improving Pretraining Data Using Perplexity Correlations](https://arxiv.org/abs/2409.05816). The filter selects high-quality pretraining data based on correlations between LLM perplexity and downstream benchmark performance.

Code: https://github.com/TristanThrush/perplexity-correlations
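To illustrate the underlying idea, here is a minimal toy sketch (with entirely hypothetical data, not the paper's actual estimator): across a set of existing LLMs, measure each model's log-loss on each candidate data domain, correlate those losses with a downstream benchmark score, and keep the domains where lower loss most strongly predicts better performance. Labels derived this way could then supervise a fastText quality classifier like the one in this repository.

```python
import numpy as np

# Hypothetical setup: 8 existing models evaluated on 5 candidate domains.
rng = np.random.default_rng(0)
n_models, n_domains = 8, 5

# benchmark_score[i]: downstream benchmark accuracy of model i (made up).
benchmark_score = np.linspace(0.2, 0.8, n_models)

# log_loss[i, j]: model i's log-loss on domain j. Domain 0 is constructed to
# be perfectly anti-correlated with the benchmark; the rest are noise.
log_loss = rng.normal(3.0, 0.1, size=(n_models, n_domains))
log_loss[:, 0] = 4.0 - 2.0 * benchmark_score  # lower loss -> higher score

# Pearson correlation between each domain's log-loss and the benchmark score.
centered_loss = log_loss - log_loss.mean(axis=0)
centered_score = benchmark_score - benchmark_score.mean()
corr = (centered_loss * centered_score[:, None]).sum(axis=0) / (
    np.linalg.norm(centered_loss, axis=0) * np.linalg.norm(centered_score)
)

# Rank domains by correlation; the most negative entries are the domains
# where lower perplexity most strongly predicts better downstream scores.
ranked = np.argsort(corr)
print(ranked[0])  # domain 0, by construction
```

This sketch uses plain Pearson correlation for clarity; the paper develops a more careful estimator, and the released artifact here is the resulting fastText filter rather than this selection code.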