---
title: README
emoji: 👍
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---
# HuggingFaceTB
This is the home for smol models (SmolLM & SmolVLM) and high-quality pre-training datasets. We released:
- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a version of the FineWeb dataset filtered for educational content, paper available [here](https://huggingface.co/papers/2406.17557).
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and 30M samples. It contains synthetic textbooks, blog posts, and stories generated by Mixtral. Blog post available [here](https://huggingface.co/blog/cosmopedia).
- [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of SmolLM: **Cosmopedia v0.2**, **FineWeb-Edu dedup** and **Python-Edu**. Blog post available [here](https://huggingface.co/blog/smollm).
- [SmolLM2 models](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): a series of strong small models in three sizes: 135M, 360M, and 1.7B parameters (see the quickstart sketch after this list).
- [SmolVLM2](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct): a family of small **Video and Vision** models in three sizes: 2.2B, 500M, and 256M. Blog post available [here](https://huggingface.co/blog/smolvlm2).
- [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath): the best public math pretraining dataset, with 50B tokens of mathematical and problem-solving data.
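
The SmolLM2 checkpoints load with the standard `transformers` API. Below is a minimal quickstart sketch, assuming `transformers` and `torch` are installed; it uses the 1.7B instruct checkpoint from the collection linked above.

```python
# Minimal quickstart sketch for SmolLM2, assuming `transformers` and `torch`
# are installed. Uses the 1.7B instruct checkpoint from the collection above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Build a prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "What is gravity?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The smaller 135M and 360M checkpoints follow the same naming pattern and can be swapped in by changing the `checkpoint` string.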
**News 🗞️**
- **FineMath**: we released [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath), the best public math pretraining dataset, with 50B tokens of mathematical and problem-solving data.
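
To peek at FineMath without downloading all 50B tokens, the `datasets` library's streaming mode works well. A minimal sketch, assuming `datasets` is installed; the subset name `finemath-4plus` is an assumption here, so check the dataset card for the subsets actually available.

```python
# Minimal streaming sketch, assuming the `datasets` library is installed.
# The subset name "finemath-4plus" is an assumption -- check the dataset
# card for the subsets that are actually available.
from datasets import load_dataset

ds = load_dataset(
    "HuggingFaceTB/finemath",
    "finemath-4plus",  # assumed subset name; see the dataset card
    split="train",
    streaming=True,    # stream samples instead of downloading everything
)

# Inspect the first few samples.
for i, sample in enumerate(ds):
    print(sample["text"][:200])
    if i >= 2:
        break
```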
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/RvHjdlRT5gGQt5mJuhXH9.png" width="900"/>
</div>