Update README.md
README.md CHANGED
@@ -5,6 +5,8 @@ library_name: transformers
 license: apache-2.0
 ---
 
+### AWQ Quantization of Arcee-Maestro-7B-Preview
+
 **Arcee-Maestro-7B-Preview (7B)** is Arcee's first reasoning model trained with reinforcement learning. It is based on the Qwen2.5-7B DeepSeek-R1 distillation **DeepSeek-R1-Distill-Qwen-7B** with further GRPO training. Though this is just a preview of our upcoming work, it already shows promising improvements to mathematical and coding abilities across a range of tasks.
 
 ### Model Details