sagorsarker committed · Commit 8f0f5fa · verified · 1 Parent(s): f88b157

Update README.md
Files changed (1)
  1. README.md +100 -39
README.md CHANGED
@@ -1,61 +1,122 @@
  ---
  library_name: transformers
- license: llama3.2
- base_model: meta-llama/Llama-3.2-3B
  tags:
  - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: llama-3.2-3B-4096-sample-1-33GB
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # llama-3.2-3B-4096-sample-1-33GB

- This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the sample_1 dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 4e-05
- - train_batch_size: 7
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 448
- - total_eval_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.01
- - num_epochs: 1.0

- ### Training results

- ### Framework versions

- - Transformers 4.44.2
- - Pytorch 2.4.1+cu121
- - Datasets 2.21.0
- - Tokenizers 0.19.1
  ---
+ language:
+ - bn
  library_name: transformers
+ pipeline_tag: text-generation
  tags:
+ - hishab
+ - titulm
+ - pytorch
+ - llama
+ - llama-3
  - llama-factory
+ license: llama3.2
+ base_model:
+ - meta-llama/Llama-3.2-3B
  ---

+ ## Model Information
+
+ This model is a continually pretrained version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B), trained on extensive Bangla datasets. The primary goal of the continual pretraining was to enhance the model's ability to generate high-quality Bangla text. By extending pretraining specifically on Bangla data, the model achieves stronger performance on Bangla language understanding benchmarks and Bangla text generation.
+
+ **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture.
+
+ | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
+ | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
+ | Llama 3.2 (text only) | Hishab curated Bangla text corpus | 3B (3.21B) | Monolingual Text (Bangla) | Monolingual Text (Bangla) | 4096 | Yes | Yes | 8.5B tokens | |
+
+ **Supported Languages:** Bengali (primary) and English (secondary)
+
+ **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
+
+ **Model Release Date:** October 24, 2024
+
+ **Status:** This is a static model trained on an offline dataset. Future versions may be released to improve the model's capabilities.
+
+ **License:** We use a license similar to Llama 3.2's. Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
+
+ ## How to use
+ - Use with transformers
+
+ Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
+
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_id = "hishab/titulm-llama-3.2-3b-v1.1"
+
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+
+ pipe("আমাদের দেশের নাম")
+ ```
+
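The same checkpoint can also be loaded through the Auto classes and generate(), as mentioned above, for finer control over decoding. The snippet below is a minimal sketch of that route; the generation settings (max_new_tokens, temperature, top_p) are illustrative defaults rather than values documented for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hishab/titulm-llama-3.2-3b-v1.1"

# Load the tokenizer and the model weights in bfloat16 across available devices
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Encode a Bangla prompt and move it to the model's device
inputs = tokenizer("আমাদের দেশের নাম", return_tensors="pt").to(model.device)

# Generate a continuation; sampling settings here are illustrative, not prescribed
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```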
+ ## Hardware and Software
+
+ **Training Factors:** We used the [llama-factory](https://github.com/hiyouga/LLaMA-Factory) training library, a cloud GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on cloud infrastructure.
+
+ ## Training Data
+
+ **Overview:** We collected a large raw Bangla text dataset from a wide variety of sources. The collected data includes a mix of web documents, books, translated text, transliterated text, transcribed text, code-mixed text, conversations, and open-source raw data. The dataset was cleaned and filtered using several filtering criteria to ensure data quality; a minimal sketch of this kind of rule-based filtering is shown after the source list below. The collected data amounts to roughly 268 GB, and the model was trained on 37B tokens in total.
+
+ Data sources summary:
+ - Web documents: extracted, cleaned, and filtered Common Crawl data
+ - Books: extracted, cleaned, and filtered book data
+ - Transcribed text: transcribed Bangla audio data using our in-house Bangla ASR model
+ - Translation data: translated English data to Bangla using a Bangla-English translation LLM we trained
+ - Code-mixed data: generated code-mixed data using a Bangla-English code-mixed LLM we trained
+ - Transliteration data: generated transliterated data using a Bangla-English transliteration LLM we trained
+ - Synthetic data: generated synthetic data using a Bangla LLM
+ - Others: scraped data from selected websites, open-source data, and other sources
+
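The exact cleaning pipeline is not detailed in this card; the snippet below is only a minimal sketch of the kind of rule-based filtering described above. The minimum length and Bangla-character ratio are assumed, illustrative thresholds, not the values used in training.

```python
import re

# Unicode range covering Bangla script characters
BANGLA_CHARS = re.compile(r"[\u0980-\u09FF]")

def keep_document(text: str, min_chars: int = 200, min_bangla_ratio: float = 0.5) -> bool:
    """Illustrative quality filter: keep documents that are long enough and
    consist mostly of Bangla script. Thresholds are assumptions, not the
    settings used for this model's training data."""
    stripped = text.strip()
    if len(stripped) < min_chars:
        return False
    letters = [c for c in stripped if not c.isspace()]
    if not letters:
        return False
    bangla_ratio = sum(bool(BANGLA_CHARS.match(c)) for c in letters) / len(letters)
    return bangla_ratio >= min_bangla_ratio

# Example: filter an iterable of raw documents before tokenization
raw_docs = ["আমাদের দেশের নাম বাংলাদেশ। " * 20, "short english snippet"]
clean_docs = [doc for doc in raw_docs if keep_document(doc)]
print(len(clean_docs))  # -> 1
```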
+ ## Benchmarks - Bangla Text
+
+ In this section, we report results for the __titulm-llama-3.2-3b-v1.1__ model on standard automatic benchmarks. For all of these evaluations, we used the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) evaluation library.
+
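The exact harness invocation is not documented in this card; the snippet below is a minimal sketch using the harness's Python API (as exposed in lm-eval 0.4.x). The task names are standard English harness tasks used as placeholders, since the Bangla datasets listed below would need their own task configurations, and argument names may differ across harness versions.

```python
import lm_eval

# Run a zero-shot evaluation on a couple of standard harness tasks;
# num_fewshot=5 would correspond to the 5-shot rows reported below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=hishab/titulm-llama-3.2-3b-v1.1,dtype=bfloat16",
    tasks=["piqa", "boolq"],
    num_fewshot=0,
    batch_size=8,
)

# Print the metric dictionary produced for each task
for task, metrics in results["results"].items():
    print(task, metrics)
```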
 
+ ### Evaluation Datasets
+ We evaluated our pretrained models on both Bangla and English benchmark datasets. Although the model is trained on Bangla data, its English capability is also evaluated on English benchmark datasets. The evaluation datasets are as follows:
+
+ #### Bangla Benchmark datasets
+ We evaluated the models on the following datasets:
+ - Bangla MMLU: A private multiple-choice question dataset developed by Hishab, curated from various sources.
+ - [CommonsenseQa Bangla](https://huggingface.co/datasets/hishab/commonsenseqa-bn): A Bangla translation of the CommonsenseQA dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
+ - [OpenbookQA Bangla](https://huggingface.co/datasets/hishab/openbookqa-bn): A Bangla translation of the OpenbookQA dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
+ - [Piqa Bangla](https://huggingface.co/datasets/hishab/piqa-bn): A Bangla translation of the Piqa dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
+ - [BoolQ Bangla](https://huggingface.co/datasets/hishab/boolq_bn): The dataset contains 15,942 examples, with each entry consisting of a triplet: (question, passage, answer). The questions are naturally occurring, generated in unprompted and unconstrained settings. Input passages were sourced from Bangla Wikipedia, Banglapedia, and news articles, and GPT-4 was used to generate the corresponding yes/no questions and answers.
+
+ #### English Benchmark datasets
+ - [MMLU](https://huggingface.co/datasets/cais/mmlu): A massive multitask test consisting of multiple-choice questions from various branches of knowledge.
+ - [CommonsenseQA](https://huggingface.co/datasets/tau/commonsense_qa): A multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers.
+ - [OpenbookQA](https://huggingface.co/datasets/allenai/openbookqa): OpenBookQA aims to promote research in advanced question answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in.
+ - [Piqa](https://huggingface.co/datasets/ybisk/piqa): The PIQA dataset focuses on physical commonsense reasoning, challenging AI to handle everyday situations requiring practical knowledge and unconventional solutions. Inspired by instructables.com, it aims to enhance AI's ability to understand and reason about physical interactions.
+ - [BoolQ](https://huggingface.co/datasets/google/boolq): A question answering dataset for yes/no questions containing 15,942 examples. The questions are naturally occurring, generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks.
+
+ ### Evaluation Results
+
+ #### Evaluation on Bangla Benchmark datasets
+
+ | Model | Shots | Bangla MMLU | BoolQ BN | Commonsense QA BN | OpenBook QA BN | PIQA BN |
+ |--------------------------------------|--------|-------------|----------|-------------------|----------------|---------|
+ | llama-3.2-3b                         | 0-shot | **0.36**    | 0.55     | 0.26              | 0.31           | 0.56    |
+ |                                      | 5-shot | 0.38        | -        | 0.29              | 0.32           | 0.58    |
+ | hishab/titulm-llama-3.2-3b-v1.1      | 0-shot | 0.35        | **0.66** | **0.31**          | **0.37**       | **0.62**|
+ |                                      | 5-shot | **0.40**    | -        | **0.40**          | **0.37**       | **0.63**|
+
+ #### Evaluation on English Benchmark datasets
+
+ ### Instruction Tuned Models
+
+ ### Intended Use
+ - Bangla text generation
+ - Bangla language understanding tasks
+ - Bangla instruction fine-tuning tasks