---
license: other
pipeline_tag: text-generation
tags:
- cortex.cpp
---
## Overview
OLMo-2 is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset, with all code, checkpoints, logs (coming soon), and associated training details made openly available.

OLMo-2 13B Instruct November 2024 is a post-trained variant of the OLMo-2 13B model: it was supervised fine-tuned on an OLMo-specific variant of the Tülu 3 dataset, then further trained with Direct Preference Optimization (DPO) and Reinforcement Learning with Verifiable Rewards (RLVR). This post-training yields state-of-the-art performance on chat and on benchmarks such as MATH, GSM8K, and IFEval.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Olmo-2-7b](https://huggingface.co/cortexso/olmo-2/tree/7b) | `cortex run olmo-2:7b` |
| 2 | [Olmo-2-13b](https://huggingface.co/cortexso/olmo-2/tree/13b) | `cortex run olmo-2:13b` |
| 3 | [Olmo-2-32b](https://huggingface.co/cortexso/olmo-2/tree/32b) | `cortex run olmo-2:32b` |
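For example, to download and then chat with the 13B variant from the terminal (a minimal sketch, assuming Cortex is installed and your version supports `cortex pull`; the tags follow the table above):
```bash
# Download the 13B variant listed above (skip if already downloaded)
cortex pull olmo-2:13b

# Start an interactive chat session with it
cortex run olmo-2:13b
```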
## Use it with Jan (UI)
1. Install **Jan** using [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, enter the model ID:
```bash
cortexhub/olmo-2
```
## Use it with Cortex (CLI)
1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:
```bash
cortex run olmo-2
```
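Once the model is running, Cortex also exposes a local, OpenAI-compatible HTTP API that other tools can call. A minimal sketch, assuming the server is listening on Cortex's default address (`127.0.0.1:39281`; adjust the host, port, and model tag to match your setup):
```bash
# Send a chat completion request to the local Cortex server
# (address and model tag below are assumptions; match them to your setup)
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "olmo-2",
    "messages": [
      {"role": "user", "content": "Summarize what OLMo-2 is in one sentence."}
    ]
  }'
```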
## Credits
- **Author:** allenai
- **Converter:** [Homebrew](https://homebrew.ltd/)
- **Original License:** [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)
- **Paper:** [arXiv:2501.00656](https://arxiv.org/abs/2501.00656)