Update README.md
README.md
CHANGED
@@ -34,7 +34,7 @@ max_seq_length = 600 # Choose any! We auto support RoPE Scaling internally!
 dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
 load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
 model, tokenizer = FastLanguageModel.from_pretrained(
-    model_name = "Wajdi1976/
+    model_name = "Wajdi1976/Labess",
     max_seq_length = max_seq_length,
     dtype = dtype,
     load_in_4bit = load_in_4bit,

@@ -75,6 +75,19 @@ print(get_response(text))
 ```
 Response: بالطبع نجم نجاوب على سؤالك، واللهجة التونسية هي نوع من اللهجات اللي تتكلم في بلاد اسمها تونس، ولكن كيما برشا لهجات مختلفة كيما الإنجليزية أو الإسبانية، الناس في العالم يتكلموا لغات متنوعة. أما اللهجة التونسية هي الطريقة الخاصة بالكلام للناس في البلاد هاذيك، يعني كان تسألني سؤال باللهجة التونسية نحب نعاونك باش نكون ...
 (English: "Of course I can answer your question. The Tunisian dialect is one of the dialects spoken in a country called Tunisia, but just as there are many different dialects, like English or Spanish, people around the world speak a variety of languages. The Tunisian dialect is the particular way people in that country talk, so if you ask me a question in Tunisian dialect I would be glad to help you ...")
 
+## Citations
+
+When using the **Tunisian Derja Dataset**, please cite:
+
+```bibtex
+@model{linagora2025LLM-tn,
+  author = {Wajdi Ghezaiel and Jean-Pierre Lorré},
+  title = {Labess: Tunisian Derja LLM},
+  year = {2025},
+  month = {January},
+  url = {https://huggingface.co/datasets/Wajdi1976/Labess}
+}
+```
+
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
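The usage snippet in the diff calls a `get_response` helper whose prompt construction is not shown in this excerpt. A minimal sketch of how such a helper might format an instruction before generation — the Alpaca-style template and the `build_prompt` name are assumptions for illustration, not part of the model card:

```python
# Hypothetical sketch: format a user instruction into a single prompt string.
# The Alpaca-style template below is an assumption; the exact template used
# by the model card's get_response helper is not shown in this diff.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap the instruction in the template; the model continues after '### Response:'."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

# Example: a question in Tunisian Derja ("What is the Tunisian dialect?")
prompt = build_prompt("شنية اللهجة التونسية؟")
print(prompt.endswith("### Response:\n"))  # -> True
```

The formatted string would then be tokenized and passed to `model.generate`, with the text after the `### Response:` marker decoded as the answer.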