---
license: mit
---

# QWEN2.5-32B-2600s-FP8: Advanced Multilingual Translation Model

## Overview

**Imran1/QWEN2.5-32B-Translation** is a fine-tuned version of Qwen 2.5 32B, optimized for multilingual translation across **16 different languages**. The fine-tuning targets translation accuracy and fluency, making the model competitive with larger 72B-class models on translation tasks.
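
For a quick start, here is a minimal inference sketch using the `transformers` library. The chat-style prompting and the standard Qwen chat template are assumptions; the card does not publish an official prompt format, and the target language below is arbitrary.

```python
# Minimal inference sketch (assumes: pip install transformers accelerate,
# the checkpoint loads with stock transformers, and the standard Qwen
# chat template applies -- none of this is documented on the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Imran1/QWEN2.5-32B-Translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {
        "role": "user",
        "content": "Translate the following English sentence into German: "
                   "'The weather is nice today.'",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```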

## Fine-Tuning Process

### Data Collection

To improve the model's understanding and translation capabilities, we curated and synthesized a large dataset consisting of:
- High-quality multilingual conversational datasets.
- Real-world dialogues spanning general, business, and technical domains.
- Translated datasets covering diverse linguistic structures and idiomatic expressions.
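
For illustration, a single training record might look like the chat-style sample below. This schema is purely an assumption; the card does not publish the dataset format.

```python
# Hypothetical training record in a chat-style schema (illustrative only;
# the actual dataset format used for fine-tuning is not published).
sample = {
    "messages": [
        {
            "role": "user",
            "content": "Translate the following English sentence into French: "
                       "'Our quarterly revenue grew by 12 percent.'",
        },
        {
            "role": "assistant",
            "content": "Notre chiffre d'affaires trimestriel a augmenté de 12 %.",
        },
    ]
}
```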

### Multilingual Enhancement

To strengthen the model's multilingual performance, we applied:
- **Translation Expansion**: The collected dataset was translated into **16 different languages** to ensure robust multilingual performance.
- **Benchmarking Against High-Tier Models**: Outputs were benchmarked against state-of-the-art systems, including **Gemini** and other top-ranking models with high BLEU and COMET scores, to refine translation quality (see the scoring sketch after this list).
- **Reinforcement Learning with Human Feedback (RLHF)**: Translation outputs were evaluated and iteratively improved based on feedback from native speakers and linguistic experts.
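
As a point of reference, corpus-level BLEU can be computed with the `sacrebleu` library as sketched below. This is a generic scoring example of the kind of metric mentioned above, not the evaluation harness actually used for this model; the sentences are made up.

```python
# Generic BLEU scoring sketch (pip install sacrebleu). Illustrative only;
# not the evaluation pipeline used for this model.
import sacrebleu

hypotheses = [
    "The weather is nice today.",
    "Our quarterly revenue grew by 12 percent.",
]
# One reference stream: the i-th entry is the reference for the i-th hypothesis.
references = [[
    "The weather is good today.",
    "Our quarterly revenue increased by 12 percent.",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```

COMET scoring works similarly through the `unbabel-comet` package, which additionally takes the source sentences as input.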

### Training and Optimization

- **Base Model**: Qwen 2.5 32B FP8
- **Fine-Tuning Framework**: LoRA + QLoRA for efficient training (a configuration sketch follows this list)
- **Batch Size**: Optimized for multi-GPU environments
- **Precision**: FP8 for efficient computation without sacrificing performance
- **Training Iterations**: Over 2600 steps on **multi-H100 GPUs**
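
For orientation, the sketch below shows a minimal LoRA adapter setup with the Hugging Face `peft` library. The rank, alpha, dropout, and target modules are illustrative assumptions, not the hyperparameters actually used; a QLoRA variant would additionally load the base model quantized in 4-bit via `BitsAndBytesConfig`.

```python
# Minimal LoRA setup sketch (pip install peft transformers accelerate).
# All hyperparameters below are illustrative assumptions; the card does
# not publish the values used for this fine-tune.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B", device_map="auto")

lora_config = LoraConfig(
    r=16,            # low-rank dimension (assumed)
    lora_alpha=32,   # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```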
|