# Description of the Dataset
This release contains the complete data sequence used to train CrystalCoder. It covers the three pre-training stages and combines data from two prior sources, the [SlimPajama dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata), for a total of approximately 1300 billion tokens. These tokens are distributed across the three stages with distinct mixing weights.
## Stage 1
In this initial stage, half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is used, amounting to approximately 345 billion tokens.
## Stage 2
In the second stage, the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is used, along with two epochs of [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). For the StarCoder data, we apply [FIM augmentation](https://arxiv.org/abs/2207.14255) with an FIM rate of 0.9. The total token count for this stage is 0.5 * 690 + 2 * 291 = 927 billion tokens.
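For illustration, the snippet below is a minimal sketch of document-level FIM augmentation in prefix-suffix-middle (PSM) order. It is not the exact CrystalCoder preprocessing pipeline: the sentinel strings and the character-level split are simplifying assumptions, since real pipelines typically operate on token sequences with dedicated tokenizer sentinel tokens.

```python
import random

# Assumed sentinel strings for illustration; actual pipelines use tokenizer-specific sentinels.
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"


def apply_fim(document: str, fim_rate: float = 0.9) -> str:
    """With probability `fim_rate`, rearrange a document into PSM order
    for fill-in-the-middle training; otherwise return it unchanged."""
    if random.random() >= fim_rate:
        return document
    # Pick two cut points that split the document into prefix / middle / suffix.
    lo, hi = sorted(random.randrange(len(document) + 1) for _ in range(2))
    prefix, middle, suffix = document[:lo], document[lo:hi], document[hi:]
    # PSM order: the model conditions on prefix and suffix, then predicts the middle.
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"


print(apply_fim("def add(a, b):\n    return a + b\n", fim_rate=0.9))
```

With an FIM rate of 0.9, roughly nine out of ten documents are rearranged this way while the rest are left in their original left-to-right order.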
## Stage 3
The third stage reuses the Python and web-related data (HTML, CSS, and JavaScript) from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). This data is trained on for three epochs with FIM applied at a rate of 0.3, totaling about 100 billion tokens. In addition, a small portion of the SlimPajama dataset, excluding its GitHub part, is also reused, contributing around 10 billion tokens.
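As a hypothetical illustration of pulling just these subsets, the sketch below streams the per-language directories of the StarCoder data with the `datasets` library. The directory names are assumptions and should be verified against the repository layout; access may also require accepting the dataset's terms on the Hub.

```python
from datasets import load_dataset

# Assumed per-language directory names; check them against bigcode/starcoderdata before use.
stage3_languages = ["python", "html", "css", "javascript"]

# Stream each subset so nothing has to be downloaded in full up front.
subsets = {
    lang: load_dataset(
        "bigcode/starcoderdata",
        data_dir=lang,
        split="train",
        streaming=True,
    )
    for lang in stage3_languages
}

# Peek at one record from the Python subset.
print(next(iter(subsets["python"])))
```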
# Primary Usage
This dataset is the training corpus of CrystalCoder and supports reproduction of its pre-training.
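A minimal loading sketch for reproduction is shown below; the repository id is a placeholder and should be replaced with the actual Hub path of this release.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub path of this release.
DATASET_REPO = "<org>/<crystalcoder-training-data>"

# Stream the corpus so inspection does not require a full download.
stream = load_dataset(DATASET_REPO, split="train", streaming=True)
for i, example in enumerate(stream):
    print(example)
    if i == 2:
        break
```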
# License