---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- tr
pipeline_tag: text-generation
tags:
- phi
- nlp
- instruction-tuning
- turkish
- chat
- conversational
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: "Internet'i nasıl açıklayabilirim?"
library_name: transformers
---

# Phi-4 Turkish Instruction-Tuned Model

This model is a fine-tuned version of Microsoft's **Phi-4** model for Turkish instruction-following tasks. It was trained on a **55,000-sample Turkish instruction dataset**, making it well-suited for generating helpful and coherent responses in Turkish.

## Model Summary

|                         |                                               |
|-------------------------|-----------------------------------------------|
| **Developers**          | Baran Bingöl (Hugging Face: [barandinho](https://huggingface.co/barandinho)) |
| **Base Model**          | [microsoft/phi-4](https://huggingface.co/microsoft/phi-4)                              |
| **Architecture**        | 14B parameters, dense decoder-only Transformer|
| **Training Data**       | 55K Turkish instruction samples              |
| **Context Length**      | 16K tokens                                   |
| **License**             | MIT ([License Link](https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE)) |

## Intended Use

### Primary Use Cases
- Turkish conversational AI systems  
- Chatbots and virtual assistants  
- Educational tools for Turkish users  
- General-purpose text generation in Turkish  

### Out-of-Scope Use Cases
- High-risk domains (medical, legal, financial advice) without proper evaluation  
- Use in sensitive or safety-critical systems without safeguards  

## Usage

### Input Formats

Given the nature of the training data, the model is best suited to prompts in the following chat format:

```text
<|im_start|>system<|im_sep|>
Sen yardımsever bir yapay zekasın.<|im_end|>
<|im_start|>user<|im_sep|>
Kuantum hesaplama neden önemlidir?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
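Rather than assembling these special tokens by hand, you can let the tokenizer render the prompt for you. A minimal sketch, assuming the fine-tuned tokenizer inherits the base model's chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("barandinho/phi4-turkish-instruct")

messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zekasın."},
    {"role": "user", "content": "Kuantum hesaplama neden önemlidir?"},
]

# Render the conversation with the chat template and append the
# generation prompt (`<|im_start|>assistant<|im_sep|>`) so the model
# continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should match the format shown above
```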

### With `transformers`

The code below uses 4-bit quantization (INT4) via `bitsandbytes` so the model runs with a much lower memory footprint, which is especially useful in environments with limited GPU memory such as Google Colab. Keep in mind that the initial model download takes some time.

Check [this notebook](https://colab.research.google.com/drive/113RNVTKEx-q7Lg_2V8a7HA-dJIEJiYXI?usp=sharing) for an interactive example of using the model.

```python
import os

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

model_name = "barandinho/phi4-turkish-instruct"

# 4-bit quantization with nested (double) quantization to reduce memory
# usage; compute in float16 to match the model dtype below.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Weights that do not fit in GPU memory are offloaded to this folder.
os.makedirs("offload", exist_ok=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=quant_config,
    offload_folder="offload",
)

messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zekasın."},
    {"role": "user", "content": "Kuantum hesaplama neden önemlidir, basit terimlerle açıklayabilir misin?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding gives deterministic output; `temperature` is omitted
# because it is ignored when `do_sample` is False.
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
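With `do_sample=False` the pipeline decodes greedily, so responses are reproducible. For more varied Turkish output, enable sampling instead, e.g. `{"max_new_tokens": 500, "return_full_text": False, "do_sample": True, "temperature": 0.7}`, which matches the temperature suggested in the widget configuration above.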