---
base_model: inceptionai/jais-adapted-7b-chat
language:
- ar
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- linagora/Tunisian_Derja_Dataset
library_name: transformers
---
## Model Overview

Labess-7b-chat is an open, instruction-tuned model for Tunisian Derja. It is a continually pre-trained version of jais-adapted-7b-chat on the Tunisian_Derja_Dataset.
## Uploaded model

- **Developed by:** Linagora
- **License:** apache-2.0
- **Fine-tuned from model:** inceptionai/jais-adapted-7b-chat
## Usage
Below are a few code snippets to help you get started with the model quickly. First, install the Unsloth library with:

```sh
pip install unsloth
```
### First, load the model
```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 128 # choose any; Unsloth supports RoPE scaling internally
dtype = None # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = True # use 4-bit quantization to reduce memory usage; can be False
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "linagora/Labess-7b-chat",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,    
)
```
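
If you prefer not to depend on Unsloth, the same checkpoint should also load with plain Transformers. This is a minimal sketch, assuming the repository ships standard Llama-architecture weights (as the jais-adapted-7b-chat base does):

```python
# A minimal sketch with plain Transformers (assumes standard Llama weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "linagora/Labess-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use torch.bfloat16 on Ampere+ GPUs
    device_map="auto",          # place weights on the available GPU(s)
)
```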

### Second, try the model
```python
# The Arabic system prompt says: "You may answer only in the Tunisian dialect.
# Complete the conversation below between [|Human|] and [|AI|]:"
prompt_ar = " يمكنك الإجابة باللهجة التونسية فقط.\n\nأكمل المحادثة أدناه بين [|Human|] و [|AI|]:\n### Input: [|Human|] {Question}\n### Response: [|AI|]"
device = "cuda" if torch.cuda.is_available() else "cpu"
FastLanguageModel.for_inference(model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
def get_response(text, tokenizer=tokenizer, model=model):
    tokenized = tokenizer(text, return_tensors="pt")
    input_ids, attention_mask = tokenized['input_ids'].to(device), tokenized['attention_mask'].to(device)
    generate_ids = model.generate(
        input_ids,
        attention_mask=attention_mask,
        top_p=0.9,
        temperature=0.3,
        max_new_tokens=128,  # reply budget, independent of the prompt length
        min_new_tokens=4,
        repetition_penalty=1.2,
        do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
    )
    response = tokenizer.batch_decode(
        generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
    )[0]
    # The prompt ends with "### Response: [|AI|]"; split on it to keep only the reply.
    response = response.split("### Response: [|AI|]")[-1].strip()
    return response

ques = "آش نقصدو كي نقولو لاباس"  # "What do we mean when we say 'labess'?"
text = prompt_ar.format_map({'Question': ques})
print(get_response(text))
```
- Example response (in Tunisian Derja):  لا باس معناها اللي الشخص موشي في مشكلة ولا مش مرتاح من الموضوع كيفاش نجم نعاونك باش تفهمو خير كان عندك تفاصيل أكثر على الوضعية والا السؤال متاعك تحب نساعدك بشوية سؤال آخر توة نهارك زين شكرا برشا عالمساعدة متاعيمحبت نقلب حاجة أخرى برك الله يباركفي هالمحادثة استعمل
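
For interactive use, you may prefer to stream tokens as they are generated instead of waiting for the full reply. Below is a short sketch using Transformers' `TextStreamer`, reusing `model`, `tokenizer`, `text`, and `device` from the snippets above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated (skip_prompt hides the input).
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
inputs = tokenizer(text, return_tensors="pt").to(device)
_ = model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.2,
    pad_token_id=tokenizer.pad_token_id,
)
```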
## Citations
When using **Labess-7b-chat**, please cite:

```bibtex
@misc{linagora2025LLM-tn,
  author = {Wajdi Ghezaiel and Jean-Pierre Lorré},
  title = {Labess-7b-chat: Tunisian Derja LLM},
  year = {2025},
  month = {January},
  url = {https://huggingface.co/datasets/Wajdi1976/Labess-7b-chat}
}

```
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)