---
library_name: transformers
license: mit
datasets:
- 5CD-AI/LLaVA-CoT-o1-Instruct
language:
- vi
- en
- zh
pipeline_tag: image-text-to-text
---

Multimodal LLM x Reasoning Model 👀 🧠 🔍

After more than six months since creating the [5CD-AI/LLaVA-CoT-o1-Instruct](https://huggingface.co/datasets/5CD-AI/LLaVA-CoT-o1-Instruct) dataset—one of Hugging Face’s most liked datasets of 2024 🎉—we have just completed the "base" version of the Vintern Reasoning Model!

- This model can perform long and complex reasoning based on images, breaking down each reasoning step into multiple sub-steps while keeping hallucinations under control.
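
As an illustration of consuming such multi-step answers, here is a small helper that splits a response into its sub-steps. The `Step N:` format is an assumption made for this sketch; the actual output format of Vintern-3B-R-beta may differ.

```python
import re

def split_steps(answer: str) -> list[str]:
    """Split a 'Step 1: ... Step 2: ...' style answer into sub-steps.

    The 'Step N:' markers are hypothetical; adapt the pattern to the
    model's real output format.
    """
    # Zero-width lookahead keeps each marker attached to its own step.
    parts = re.split(r"(?=Step \d+:)", answer)
    return [p.strip() for p in parts if p.strip()]

answer = (
    "Step 1: Read the chart title. "
    "Step 2: Compare the two bars. "
    "Step 3: The left bar is taller."
)
steps = split_steps(answer)  # three sub-steps
```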

🛠️ We also successfully implemented the GRPO algorithm on the Vintern multimodal model!
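
As a minimal sketch of the idea behind GRPO (Group Relative Policy Optimization): several responses are sampled per prompt, and each response's advantage is its reward normalized by the group's mean and standard deviation, so no separate value model is needed. This is a generic illustration of that normalization, not code from the Vintern training pipeline; the reward values are made up.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward against its own group's mean and std."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    # eps guards against a group where all rewards are identical.
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four sampled responses to one image-question pair,
# scored by some reward function (e.g. answer correctness + format).
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because advantages are relative within each group, reward scales for different tasks (OCR, VQA, reasoning) do not need to be calibrated against one another, which helps when balancing multiple tasks during training.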

- Despite the difficulty of balancing multiple tasks alongside reasoning, Vintern-3B-R-beta has outperformed all previous versions across various benchmarks!

When should you choose [Vintern-1B-v3_5](https://huggingface.co/5CD-AI/Vintern-1B-v3_5) vs Vintern-3B-R-beta? 🤔

- **Vintern-1B-v3_5**: Fast ⚡ and good for Vietnamese OCR with simple text formatting. 📝 Highly reliable. ✅
- **Vintern-3B-R-beta**: Better for complex questions and complex structured document images. 🔍📚 OCR performance on blurred or unclear text may be slightly reduced, because our training focused on reasoning. 🔍🤖

🚀 The next step? Training and enhancing its reasoning ability with Reinforcement Learning!

## Reference

[1] Z. Chen et al., "Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling," arXiv preprint arXiv:2412.05271, 2024.