doberst committed · Commit 25bd469 · 1 Parent(s): ca1b358

Update README.md

Files changed (1): README.md (+13 -3)
README.md CHANGED

@@ -20,7 +20,7 @@ without using any advanced quantization optimizations.
 - **Developed by:** llmware
 - **Model type:** GPTNeoX instruct-trained decoder
 - **Language(s) (NLP):** English
-- **License:** Apache 2.0
+- **License:** CC BY-SA-4.0 [https://creativecommons.org/licenses/by-sa/4.0/]
 - **Finetuned from model:** stabilityai/stablelm-3b-4e1t
 
 
@@ -63,8 +63,8 @@ Any model can provide inaccurate or incomplete information, and should be used i
 The fastest way to get started with BLING is through direct import in transformers:
 
     from transformers import AutoTokenizer, AutoModelForCausalLM
-    tokenizer = AutoTokenizer.from_pretrained("bling-red-pajamas-3b-0.1")
-    model = AutoModelForCausalLM.from_pretrained("bling-red-pajamas-3b-0.1")
+    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
+    model = AutoModelForCausalLM.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
 
 
 The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
@@ -81,6 +81,16 @@ To get the best results, package "my_prompt" as follows:
 my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
 
 
+## Citations
+
+This model is fine-tuned on the StableLM-3B-4E1T model from StabilityAI. For more information about this base model, please see the citation below:
+
+@misc{StableLM-3B-4E1T,
+url={[https://huggingface.co/stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t)},
+title={StableLM 3B 4E1T},
+author={Tow, Jonathan and Bellagente, Marco and Mahan, Dakota and Riquelme, Carlos}
+}
+
 
 ## Model Card Contact