---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
- ur
---


# Model Card for Alif 1.0 8B Instruct

**Alif 1.0 8B Instruct** is an open-source model with advanced multilingual reasoning capabilities. It uses human-refined multilingual synthetic data paired with reasoning to enhance cultural nuance and reasoning capabilities in English and Urdu.

- **Developed by:** large-traversaal
- **License:** apache-2.0
- **Base model:** unsloth/Meta-Llama-3.1-8B
- **Model:** Alif-1.0-8B-Instruct
- **Model Size:** 8 billion parameters

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.


### How to Use Alif 1.0 8B Instruct

Install the `transformers`, `bitsandbytes`, and `accelerate` libraries (e.g. `pip install transformers bitsandbytes accelerate`; `accelerate` is needed for `device_map="auto"`), then load Alif 1.0 8B Instruct as follows:

```python
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    pipeline,
)

model_id = "large-traversaal/Alif-1.0-8B-Instruct"

# 4-bit quantization configuration
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4"
)

# Load tokenizer and model in 4-bit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto"
)

# Create a text generation pipeline (the model was already placed on
# devices by device_map="auto", so no device argument is needed here)
chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Function to chat (the returned text includes the prompt, since the
# pipeline's return_full_text option defaults to True)
def chat(message):
    response = chatbot(message, max_new_tokens=100, do_sample=True, temperature=0.3)
    return response[0]["generated_text"]

# Example chat
user_input = "شہر کراچی کی کیا اہمیت ہے؟"
bot_response = chat(user_input)

print(bot_response)

```
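If you want tokens printed as they are generated instead of returned in one block, the `TextStreamer` class from `transformers` can be attached to `generate`. A minimal sketch, reusing the `model` and `tokenizer` loaded above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt hides the echoed input
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer("شہر کراچی کی کیا اہمیت ہے؟", return_tensors="pt").to(model.device)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=100, do_sample=True, temperature=0.3)
```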

You can also try out this model using [TextStreamer](https://colab.research.google.com/drive/1mEPynC__uN2tKDvDho3f6MpcKW-GMiAh?usp=sharing) or [Gradio](https://colab.research.google.com/drive/1DUwlYBOMUd7FZaI631-y6y8fTNiy0pqt?usp=sharing) in Colab. It is also available in GGUF format in a range of quantization levels for Ollama, LM Studio, Jan, and llama.cpp.
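For the GGUF builds, a minimal local-inference sketch using the `llama-cpp-python` bindings (the filename below is hypothetical; substitute whichever quantization you actually download):

```python
from llama_cpp import Llama

# Hypothetical filename: use the quantization you downloaded (e.g. Q4_K_M)
llm = Llama(model_path="./Alif-1.0-8B-Instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm("شہر کراچی کی کیا اہمیت ہے؟", max_tokens=100, temperature=0.3)
print(out["choices"][0]["text"])
```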


## Model Details

**Input**: The model accepts text input only.

**Output**: The model generates text output only.

**Model Architecture**: Alif 1.0 8B Instruct is an auto-regressive language model that uses an optimized transformer architecture. Post-training includes continued pretraining and supervised fine-tuning.

For more details about how the model was trained, check out [our blogpost](https://blog.traversaal.ai/announcing-alif-1-0-our-first-urdu-llm-outperforming-other-open-source-llms/).


### Evaluation
We evaluated Alif 1.0 8B Instruct against Gemma 2 9B, Llama 3.1 8B, Mistral Nemo 12B, Qwen 2.5 7B, and Cohere Aya Expanse 8B on a human-annotated Urdu evaluation dataset, with scores determined using GPT-4o as a judge.

<img src="result1.jpg" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<img src="result2.jpg" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

### Model Card Contact

For errors or additional questions about details in this model card, contact: [email protected]