---
language_model: true
license: apache-2.0
tags:
- text-generation
- language-modeling
- Multilingual
- pytorch
- transformers
datasets:
- wikimedia/wikipedia
metrics:
- cross_entropy_loss
language:
- ary
---

# Darija-GPT: Small Multilingual Language Model (Darija Arabic)

## Model Description

This is a small multilingual language model based on a Transformer architecture (GPT-like). It is trained from scratch on a subset of Wikipedia data in **ary** (Moroccan Darija Arabic) for demonstration and experimentation.

### Architecture

- Transformer-based language model (decoder-only).
- Reduced model dimensions (`n_embd=768`, `n_head=12`, `n_layer=12`) for faster training and a smaller model size, making it suitable for resource-constrained environments (see the configuration sketch below).
- Uses a Byte-Pair Encoding (BPE) tokenizer trained on the same Wikipedia data.

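For reference, these dimensions map onto a standard GPT-2-style configuration. The sketch below is illustrative only: the vocabulary size and context length are assumptions, not values documented in this card.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Sketch of the stated architecture; vocab_size and n_positions are
# illustrative assumptions, not values documented in this model card.
config = GPT2Config(
    vocab_size=32000,   # assumed BPE vocabulary size
    n_positions=512,    # assumed context length
    n_embd=768,         # embedding dimension stated above
    n_head=12,          # attention heads stated above
    n_layer=12,         # decoder layers stated above
)
model = GPT2LMHeadModel(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```
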
### Training Data

- Trained on a Wikipedia subset in the following language:
  - ary
- The dataset is prepared and encoded to be efficient for training smaller models (see the loading sketch below).

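The corresponding subset can be pulled directly from the `wikimedia/wikipedia` dataset on Hugging Face. A minimal sketch, assuming a dated snapshot config name (the exact snapshot used for training is not documented here):

```python
from datasets import load_dataset

# Moroccan Darija (ary) Wikipedia subset; the "20231101" snapshot date is an
# assumed config name, not one documented in this model card.
wiki_ary = load_dataset("wikimedia/wikipedia", "20231101.ary", split="train")
print(wiki_ary[0]["text"][:200])
```
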
### Limitations

- **Small Model:** Parameter count is limited to approximately 30 million, resulting in reduced capacity compared to larger models.
- **Limited Training Data:** Trained on a subset of Wikipedia, which is relatively small compared to the massive datasets used for state-of-the-art models.
- **Not State-of-the-Art:** Performance is not expected to be cutting-edge due to size and data limitations.
- **Potential Biases:** May exhibit biases from the Wikipedia training data and may not generalize perfectly to all Darija dialects or real-world text.

## Intended Use

- Primarily for **research and educational purposes**.
- Demonstrating **language modeling in ary**.
- As a **starting point** for further experimentation in low-resource NLP, model compression, or fine-tuning on specific Darija tasks (a fine-tuning sketch follows this list).
- For **non-commercial use** only.

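As one concrete direction, the model can be fine-tuned on additional Darija text with the `transformers` `Trainer`. This is a minimal sketch under stated assumptions: the corpus, snapshot name, output directory, and hyperparameters are illustrative, and the pad-token fallback assumes the tokenizer defines an EOS token.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Duino/Darija-GPT")
model = AutoModelForCausalLM.from_pretrained("Duino/Darija-GPT")

# GPT-style tokenizers often lack a pad token; reuse EOS if so (assumption).
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Illustrative corpus: a small slice of the ary Wikipedia subset.
raw = load_dataset("wikimedia/wikipedia", "20231101.ary", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="darija-gpt-finetuned",  # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=1,
)
trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```
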
## How to Use

You can use this model with the `transformers` library from Hugging Face. Make sure you have `transformers` installed (`pip install transformers`).

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Duino/Darija-GPT")
model = AutoModelForCausalLM.from_pretrained("Duino/Darija-GPT")

prompt_text = "هذا نموذج لغوي صغير"  # Example Darija/Arabic prompt: "This is a small language model"
input_ids = tokenizer.encode(prompt_text, return_tensors="pt").to(model.device)

# Generate text (adjust max_new_tokens, temperature, top_p as needed)
output = model.generate(input_ids, max_new_tokens=50, do_sample=True, temperature=0.9, top_p=0.9)

generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print("Prompt:", prompt_text)
print("Generated text:", generated_text)
```

## Training Plot

![Training Plot](plots/training_plot.png)

This plot shows the training and validation loss curves over epochs.

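Since cross-entropy loss is the reported metric, it can be recomputed on any held-out Darija text. A minimal sketch (the evaluation string below is purely illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Duino/Darija-GPT")
model = AutoModelForCausalLM.from_pretrained("Duino/Darija-GPT")
model.eval()

# Any held-out Darija text can be used here; this string is illustrative.
text = "هذا نموذج لغوي صغير"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    outputs = model(**inputs, labels=inputs["input_ids"])

print("Cross-entropy loss:", outputs.loss.item())
print("Perplexity:", torch.exp(outputs.loss).item())
```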