---
license: mit
---

# Imran1/QWEN2.5-32B-Translation: Advanced Multilingual Translation Model

## Overview
**Imran1/QWEN2.5-32B-Translation** is a fine-tuned version of Qwen 2.5 32B, optimized for multilingual translation across **16 languages**. The fine-tuning substantially improves translation accuracy and fluency, making the model competitive with larger models such as Qwen 2.5 72B.
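
Below is a minimal usage sketch, assuming the model follows the standard Qwen 2.5 chat template in `transformers`; the system prompt and the translation request wording are illustrative assumptions, not a documented prompt format.

```python
# Minimal usage sketch (assumes the standard Qwen 2.5 chat template;
# the prompt wording below is illustrative, not a documented format).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Imran1/QWEN2.5-32B-Translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a translation assistant."},
    {"role": "user", "content": "Translate into French: The weather is nice today."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```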

## Fine-Tuning Process
### Data Collection
To improve the model's understanding and translation capabilities, we curated and synthesized a large dataset consisting of:
- High-quality multilingual conversational datasets (an illustrative record format is sketched after this list).
- Real-world dialogues spanning general, business, and technical domains.
- Translated datasets covering diverse linguistic structures and idiomatic expressions.
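
The card does not document the actual dataset schema; the snippet below is a purely hypothetical example of what one conversational translation record might look like in the common chat-message format.

```python
# Hypothetical training record in chat-message form.
# The real schema is not documented in this card; this is illustrative only.
record = {
    "messages": [
        {"role": "system", "content": "Translate from English to Spanish."},
        {"role": "user", "content": "Our quarterly revenue grew by 12%."},
        {"role": "assistant", "content": "Nuestros ingresos trimestrales crecieron un 12%."},
    ]
}
```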

### Multilingual Enhancement
To advance its translation capabilities, we leveraged:
- **Translation Expansion**: The collected dataset was translated into **16 different languages** to ensure robust multilingual performance.
- **Benchmarking Against High-Tier Models**: Outputs were compared against state-of-the-art systems, including **Gemini** and other top-ranking models with high BLEU and COMET scores, to refine translation quality (see the evaluation sketch after this list).
- **Reinforcement Learning with Human Feedback (RLHF)**: Translation outputs were evaluated and iteratively improved based on feedback from native speakers and linguistic experts.
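
As a sketch of how such scoring can be done, the snippet below computes BLEU with `sacrebleu` and COMET with Unbabel's `wmt22-comet-da` checkpoint. The package choices and checkpoint id are assumptions; the card does not name its exact evaluation setup, and the example sentences are placeholders.

```python
# Evaluation sketch: score candidate translations with BLEU (sacrebleu)
# and COMET (Unbabel/wmt22-comet-da). Package names and the checkpoint id
# are assumptions; the card does not document its exact evaluation setup.
import sacrebleu
from comet import download_model, load_from_checkpoint

sources    = ["The weather is nice today."]   # source sentences
hypotheses = ["Il fait beau aujourd'hui."]    # model outputs
references = ["Il fait beau aujourd'hui."]    # human references

# BLEU over the corpus (sacrebleu expects a list of reference streams).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")

# COMET: a learned metric that also conditions on the source sentence.
comet = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [
    {"src": s, "mt": h, "ref": r}
    for s, h, r in zip(sources, hypotheses, references)
]
print("COMET:", comet.predict(data, batch_size=8, gpus=0).system_score)
```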

### Training and Optimization
- **Base Model**: Qwen 2.5 32B FP8
- **Fine-Tuning Framework**: LoRA + QLoRA for efficient training (a configuration sketch follows this list)
- **Batch Size**: Optimized for multi-GPU environments
- **Precision**: FP8 for efficient computation without sacrificing performance
- **Training Iterations**: Over 2600 steps on **multi-H100 GPUs**
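
The following is a minimal configuration sketch consistent with the LoRA + QLoRA setup named above, using `transformers`, `peft`, and `bitsandbytes`. The base repo id, target modules, and every hyperparameter are illustrative assumptions, not the values actually used to train this checkpoint.

```python
# LoRA + QLoRA configuration sketch with transformers + peft + bitsandbytes.
# All hyperparameters and the base repo id below are illustrative
# assumptions, not the values used to train this checkpoint.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B-Instruct",            # assumed base; the card says Qwen 2.5 32B
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters are trained
```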