---
language:
- en
- id
tags:
- qwen
- code
- merged
- optimized
pipeline_tag: text-generation
license: apache-2.0
---
# Qwen2.5 Coder 1.5B Instruct Merged (Optimized)
This is an optimized merged version of the fine-tuned Qwen2.5 Coder model. It combines:
- Base model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- Fine-tuned adapter: iamgiven/Qwen2.5-Coder-1.5B-Instruct-cpp-lora
The merged weights are stored in float16 precision with efficient serialization to reduce disk and memory footprint; an illustrative sketch of the merge workflow is shown below.
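A merge like this is typically produced by loading the base model, applying the LoRA adapter with PEFT, and folding the adapter weights into the base. The sketch below is an assumed workflow, not the exact script used to build this repository; the output directory name is a placeholder.

```python
# Illustrative merge sketch (assumed workflow, not the exact script used here)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype=torch.float16,
)
adapter = PeftModel.from_pretrained(base, "iamgiven/Qwen2.5-Coder-1.5B-Instruct-cpp-lora")

merged = adapter.merge_and_unload()  # fold the LoRA weights into the base model
merged.save_pretrained("qwen2.5-coder-1.5b-cpp-merged")  # hypothetical output directory

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
tokenizer.save_pretrained("qwen2.5-coder-1.5b-cpp-merged")
```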
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo_id = "{full_repo_name}"  # placeholder: replace with this repository's id

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,  # load in float16 to match the merged weights
)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
```
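Once the model and tokenizer are loaded, text can be generated with the chat template shipped with Qwen2.5-Coder-Instruct. The prompt and generation settings below are illustrative only.

```python
messages = [
    {"role": "user", "content": "Write a C++ function that reverses a std::string."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```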