amezasor committed on
Commit 0545dfe · verified · 1 Parent(s): acb9675

update after review

Files changed (1)
  1. README.md +34 -36
README.md CHANGED
@@ -203,39 +203,40 @@ model-index:
  value:
  verified: false
  ---

  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->

  # Granite-3.0-1B-A400M-Instruct

- ## Model Summary
- **Granite-3.0-1B-A400M-Instruct** is a lightweight and open-source 1B parameter model fine tuned from *Granite-3.0-1B-A400M-Base* on a combination of open-source and proprietary instruction data with a **permissively licensed**. This language model is designed to excel in instruction following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, funcion-calling, and more.

  - **Developers:** IBM Research
- - **GitHub Repository:** [ibm-granite/granite-language-models](https://github.com/ibm-granite/granite-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- - **Paper:** [Granite Language Models](https://) <!-- TO DO: Update github repo link when it is ready -->
  - **Release Date**: October 21st, 2024
- - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).

- ## Supported Languages
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)

- ## Usage
- ### Intended use
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

- ### Capabilities
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
- * Code related
- * Function-calling
  * Multilingual dialog use cases

- ### Generation
- This is a simple example of how to use **Granite-3.0-1B-A400M-Instruct** model.

  Install the following libraries:

@@ -272,13 +273,8 @@ output = tokenizer.batch_decode(output)
  print(output)
  ```

- <!-- TO DO: function-calling-example
- -->
-
- <!-- ['<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>1. IBM Research - Almaden, San Jose, California<|end_of_text|>'] -->
-
- ## Model Architeture
- **Granite-3.0-1B-A400M-Instruct** is based on a decoder-only sparse Mixture of Experts(MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :--------| :--------| :-------- |:-------- |
@@ -298,21 +294,23 @@ print(output)
  | # Active Parameters | 2.5B | 8.1B | **400M** | 800M |
  | # Training tokens | 12T | 12T | **10T** | 10T |

- <!-- TO DO: To be completed once the paper is ready, we may changed title to Supervised Finetuning -->
- ## Training Data
- Granite Language Instruct models are trained on a collection of publicly available datasets with non-restrictive license, as well as an IBM collection of synthetic datasets. We annotated and filtered these datasets to only include high-quality instances from each of them in our final mixture. This dataset selection is representative of the following domains:

- * English datasets: [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub), [OASST-OctoPack](https://huggingface.co/datasets/bigcode/oasst-octopack), [Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater), [SoftAge-Multiturn](https://huggingface.co/datasets/SoftAge-AI/multi-turn_dataset), [Glaive-RAG-v1](https://huggingface.co/datasets/glaiveai/RAG-v1), [EvolKit-20k](https://huggingface.co/datasets/arcee-ai/EvolKit-20k), [Magpie-Phi3-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Phi3-Pro-300K-Filtered).
- * Multilingual datasets: [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) and IBM Synthetic datasets (e.g., Blue Multilingual, Daring Anteater Translated).
- * Code datasets: [Glaive Code Assistant V3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [SQL Create Context Instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), and [Self-OSS-Instruct-SC2](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Single and multi-turn IBM synthetic datasets, including a set of datasets generated via the evol-instruct method.
- * Math: [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [StackMathQA](https://huggingface.co/datasets/math-ai/StackMathQA), and [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- * Tools: [xlam-function-calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [Glaive Function Calling V2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [Hermes Function Calling V1](https://huggingface.co/datasets/NousResearch/hermes-function-calling-v1), and IBM Synthetic API data.
- * Safety: [SimpleSafetyTests](https://huggingface.co/datasets/Bertievidgen/SimpleSafetyTests), [HarmBench Behaviors](https://github.com/centerforaisafety/HarmBench/blob/main/data/behavior_datasets/harmbench_behaviors_text_all.csv), [Strong Reject](https://github.com/alexandrasouly/strongreject/blob/main/strongreject_dataset/strongreject_dataset.csv), [AdvBench](https://huggingface.co/datasets/walledai/AdvBench), [MistralGuard](https://huggingface.co/datasets/natolambert/xstest-v2-copy), [Do-Not-Answer](https://huggingface.co/datasets/LibrAI/do-not-answer), and IBM Synthetic data for safety.

- ## Infrastructure
- We train the Granite Language models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
-
- <!-- TO DO: Check multilingual statement once the paper is ready -->
- ## Ethical Considerations and Limitations
- Granite instruct models are primarily finetuned using instruction-response pairs mostly in English, but also in German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). As this model has been exposed to multilingual data, it can handle multilingual dialog use cases with a limited performance in non-English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to *[Granite-3.0-1B-A400M-Base](https://huggingface.co/ibm-granite/granite-3.0-1b-a400m-base)* model card.
 
  value:
  verified: false
  ---
+
  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->
+ ![image/png](granite-3_0-language-models_Group_1.png)

  # Granite-3.0-1B-A400M-Instruct

+ **Model Summary:**
+ Granite-3.0-1B-A400M-Instruct is a 1B parameter model finetuned from *Granite-3.0-1B-A400M-Base* using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.

  - **Developers:** IBM Research
+ - **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
+ - **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf)
  - **Release Date**: October 21st, 2024
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

+ **Supported Languages:**
+ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages.

+ **Intended use:**
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

+ *Capabilities*
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
+ * Code related tasks
+ * Function-calling tasks
  * Multilingual dialog use cases

+ **Generation:**
+ This is a simple example of how to use the Granite-3.0-1B-A400M-Instruct model.

  Install the following libraries:

  print(output)
  ```
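The chat template is the key step of the snippet above. As a minimal sketch, assuming the checkpoint id `ibm-granite/granite-3.0-1b-a400m-instruct` and a made-up user question, the rendered prompt can be inspected before generation like this:

```python
from transformers import AutoTokenizer

# Assumption: the instruct checkpoint published alongside this model card.
model_path = "ibm-granite/granite-3.0-1b-a400m-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Hypothetical single-turn conversation.
chat = [
    {"role": "user", "content": "Please list one IBM Research laboratory located in the United States."},
]

# Render the chat template to a string (instead of token ids) to inspect the
# role markers (<|start_of_role|> ... <|end_of_role|> ... <|end_of_text|>)
# that the instruct model expects before generation.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```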
+ **Model Architecture:**
+ Granite-3.0-1B-A400M-Instruct is based on a decoder-only sparse Mixture of Experts (MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :--------| :--------| :-------- |:-------- |

  | # Active Parameters | 2.5B | 8.1B | **400M** | 800M |
  | # Training tokens | 12T | 12T | **10T** | 10T |
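To make the components named above (routing tokens across fine-grained experts with a load-balancing auxiliary loss) more concrete, here is a short illustrative sketch. It is not the Granite implementation: the dimensions, number of experts, and the Switch-Transformer-style form of the loss are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

# Illustrative only: generic top-k token routing with a load-balancing loss.
# All sizes are hypothetical and unrelated to Granite's configuration.
hidden_size, num_experts, top_k = 64, 8, 2
router = torch.nn.Linear(hidden_size, num_experts, bias=False)

tokens = torch.randn(16, hidden_size)        # a batch of 16 token embeddings
probs = F.softmax(router(tokens), dim=-1)    # router probabilities, shape (16, num_experts)

# Each token is routed to its top-k experts; in a dropless scheme no token is
# discarded. In a full MoE layer, topk_probs would weight the expert outputs.
topk_probs, topk_idx = probs.topk(top_k, dim=-1)

# Fraction of routing assignments received by each expert ...
tokens_per_expert = F.one_hot(topk_idx, num_experts).float().sum(dim=(0, 1)) / (tokens.shape[0] * top_k)
# ... and the mean router probability per expert.
mean_router_prob = probs.mean(dim=0)

# Load-balancing loss: smallest when assignments and probabilities are spread evenly.
load_balancing_loss = num_experts * torch.sum(tokens_per_expert * mean_router_prob)
print(load_balancing_loss)
```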
+ **Training Data:**
+ Overall, our SFT data is largely composed of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) very small amounts of human-curated data. Please refer to the [Granite 3.0 Language Models technical report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf) for more details on the individual categories and datasets.

+ **Infrastructure:**
+ We train Granite 3.0 Language Models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

+ **Ethical Considerations and Limitations:**
+ Granite 3.0 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also on multilingual data covering eleven other languages. Although this model can handle multilingual dialog use cases, its performance might not match that on English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.
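As a concrete illustration of the few-shot suggestion above, worked examples can simply be prepended to the conversation before the actual request. This is a sketch only, assuming the same checkpoint id as in the Generation example and using made-up translation pairs:

```python
from transformers import AutoTokenizer

# Assumption: same checkpoint id as in the Generation example above.
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.0-1b-a400m-instruct")

# Hypothetical few-shot prompt for a non-English task: two worked examples
# precede the actual request so the model can follow the demonstrated pattern.
few_shot_chat = [
    {"role": "user", "content": "Translate to German: Good morning."},
    {"role": "assistant", "content": "Guten Morgen."},
    {"role": "user", "content": "Translate to German: Thank you very much."},
    {"role": "assistant", "content": "Vielen Dank."},
    {"role": "user", "content": "Translate to German: See you tomorrow."},
]
prompt = tokenizer.apply_chat_template(few_shot_chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```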
 
 
 
 
+ <!-- ## Citation
+ ```
+ @misc{granite-models,
+ author = {author 1, author2, ...},
+ title = {},
+ journal = {},
+ volume = {},
+ year = {2024},
+ url = {https://arxiv.org/abs/0000.00000},
+ }
+ ``` -->