---
license: llama2
datasets:
- nvidia/OpenMathInstruct-1
language:
- en
library_name: nemo
tags:
- nvidia
- code
- math
---


# OpenMath-CodeLlama-7b-Python

OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks
executed by a Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1),
a math instruction tuning dataset with 1.8M problem-solution pairs generated using the permissively licensed
[Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model.
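
Concretely, a solution in this style interleaves short natural-language reasoning with a Python block that an external interpreter executes, and the captured output is fed back into the generation. The exact delimiters the models were trained with come from OpenMathInstruct-1 and the NeMo-Skills inference code; the `<llm-code>` / `<llm-code-output>` tags below follow that dataset's convention and are shown only as an illustrative sketch.

```
A bag holds 5 red and 7 blue marbles. What fraction of the marbles are red?

The fraction is the number of red marbles divided by the total count.
<llm-code>
from fractions import Fraction
red, blue = 5, 7
print(Fraction(red, red + blue))
</llm-code>
<llm-code-output>
5/12
</llm-code-output>
So the answer is \boxed{\frac{5}{12}}.
```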

<table border="1">
  <tr>
    <td></td>
    <td colspan="2" style="text-align: center;">greedy</td>
    <td colspan="2" style="text-align: center;">majority@50</td>
  </tr>
  <tr>
    <td style="text-align: center;">model</td>
    <td style="text-align: center;">GSM8K</td>
    <td style="text-align: center;">MATH</td>
    <td style="text-align: center;">GSM8K</td>
    <td style="text-align: center;">MATH</td>
  </tr>
  <tr>
    <td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td>
    <td style="text-align: center;">75.9</td>
    <td style="text-align: center;">43.6</td>
    <td style="text-align: center;">84.8</td>
    <td style="text-align: center;">55.6</td>
  </tr>
  <tr>
    <td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td>
    <td style="text-align: center;">80.2</td>
    <td style="text-align: center;">44.5</td>
    <td style="text-align: center;">86.9</td>
    <td style="text-align: center;">57.2</td>
  </tr>
  <tr>
    <td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td>
    <td style="text-align: center;">78.8</td>
    <td style="text-align: center;">45.5</td>
    <td style="text-align: center;">86.8</td>
    <td style="text-align: center;">57.6</td>
  </tr>
  <tr>
    <td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td>
    <td style="text-align: center;">80.7</td>
    <td style="text-align: center;">48.3</td>
    <td style="text-align: center;">88.0</td>
    <td style="text-align: center;">60.2</td>
  </tr>
  <tr>
    <td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td>
    <td style="text-align: center;"><b>84.7</b></td>
    <td style="text-align: center;">46.3</td>
    <td style="text-align: center;">90.1</td>
    <td style="text-align: center;">58.3</td>
  </tr>
  <tr>
    <td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td>
    <td style="text-align: center;">84.6</td>
    <td style="text-align: center;"><b>50.7</b></td>
    <td style="text-align: center;"><b>90.8</b></td>
    <td style="text-align: center;"><b>60.4</b></td>
  </tr>
</table>

The pipeline we used to produce these models is fully open-sourced!

- [Code](https://github.com/Kipok/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1)

## How to use the models?

You can [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands!
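
If you prefer to run the model directly with Hugging Face `transformers` rather than the NeMo / NeMo-Skills pipeline, you can use the HF-converted checkpoint linked in the table above. The snippet below is a minimal sketch: the prompt format is illustrative only, and full tool use additionally requires executing the generated code blocks and feeding their output back to the model, which the NeMo-Skills inference scripts handle for you.

```python
# Minimal sketch: greedy generation with the HF-converted checkpoint.
# Note: this repository hosts the NeMo checkpoint; the snippet assumes the
# companion nvidia/OpenMath-CodeLlama-7b-Python-hf weights instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenMath-CodeLlama-7b-Python-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; see the inference docs above for the exact template.
prompt = (
    "Solve the following math problem, using Python code where helpful.\n\n"
    "What is 15% of 240?\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```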

## Reproducing our results

We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results.

## Improving your own models

If you want to improve your own models or to learn more about our pipeline, read through the relevant docs below.

- [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills)
    - [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md)
    - [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md)
    - [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md)

In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/),
an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere.
It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models,
offering enterprises an easy, cost-effective, and fast way to adopt generative AI.

## Contact

E-Mail Igor Gitman at [email protected]

## Citation

If you find our work useful, please consider citing us!

TODO

## License

The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/).