---
license: llama3.1
language:
- en
base_model:
- nvidia/OpenMath2-Llama3.1-8B
pipeline_tag: text-generation
tags:
- math
- nvidia
- llama
---

## GGUF quantized version of OpenMath2-Llama3.1-8B

Original project [source](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B) (base model)

Available quantizations, from smallest to largest:

- Q2_K (not great)
- Q3_K_S (acceptable)
- Q3_K_M (acceptable; good for CPU-only inference)
- Q3_K_L (acceptable)
- Q4_K_S (okay)
- Q4_K_M (recommended; good balance of size and quality)
- Q5_K_S (good)
- Q5_K_M (good in general)
- Q6_K (also good; pick this one over Q5_K_M if you want better results)
- Q8_0 (very good, but needs a reasonable amount of RAM; otherwise expect a long wait)
- f16 (close to the original HF model; choose this or the HF model itself if you have a capable machine)
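
To fetch a specific quant programmatically, a minimal sketch with [huggingface_hub](https://pypi.org/project/huggingface-hub/) is below; the `repo_id` and `filename` are placeholders (assumptions), so replace them with this repository's actual id and the file you picked from the list above.

```python
# a minimal sketch, assuming huggingface_hub is installed (pip install huggingface_hub)
from huggingface_hub import hf_hub_download

# repo_id and filename are placeholders -- replace with this repo's id
# and the quant file you chose from the list above
model_path = hf_hub_download(
    repo_id="user/OpenMath2-Llama3.1-8B-GGUF",
    filename="openmath2-llama3.1-8b-q4_k_m.gguf",
)
print(model_path)  # local path to the downloaded GGUF file
```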

### How to run it

Use any connector that can interact with GGUF files, e.g., [gguf-connector](https://pypi.org/project/gguf-connector/).
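
Alternatively, below is a minimal sketch using [llama-cpp-python](https://pypi.org/project/llama-cpp-python/), a common GGUF runner named here as one option rather than this project's own tooling; the model filename is a placeholder, so point it at whichever quant you downloaded.

```python
# a minimal sketch using llama-cpp-python (pip install llama-cpp-python);
# one option among many gguf runners, not the only way to load this model
from llama_cpp import Llama

# the model path below is a placeholder -- use the quant file you downloaded
llm = Llama(model_path="openmath2-llama3.1-8b-q4_k_m.gguf", n_ctx=4096)

output = llm(
    "Solve for x: 2x + 3 = 11. Show your steps.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```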

<style>
    .image-container {
        display: flex;
        justify-content: center;
        align-items: center;
        gap: 20px;
    }
    .image-container img {
        width: 350px;
        height: auto;
    }
</style>

<div class="image-container">
    <img src="https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B/resolve/main/scaling_plot.jpg" title="Performance of Llama-3.1-8B-Instruct as it is trained on increasing proportions of OpenMathInstruct-2">
    <img src="https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B/resolve/main/math_level_comp.jpg" title="Comparison of OpenMath2-Llama3.1-8B vs. Llama-3.1-8B-Instruct across MATH levels">
</div>

The charts above are from the base model (NVIDIA's page).