  - **Finetuned from model:** [unsloth/meta-llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-bnb-4bit)
To load this model with Unsloth:

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Msp/mira-1.0",
    max_seq_length = 4096,
    dtype = None,        # auto-detect the best dtype for the GPU
    load_in_4bit = True, # 4-bit quantization to reduce memory use
    # token = "hf..",    # uncomment for gated or private models
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "your name is mira",  # instruction
            "whats your name",    # input
            "",                   # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
```
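The Alpaca template is an ordinary Python format string with three slots (instruction, input, response), so the prompt the model will see can be inspected without a GPU or the model weights. A minimal check, reusing the same template text:

```python
# Sanity-check the Alpaca prompt template without loading the model.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

prompt = alpaca_prompt.format(
    "your name is mira",  # instruction
    "whats your name",    # input
    "",                   # response slot left empty so the model completes it
)
print(prompt)
```

The response section ends the prompt, so generation continues directly after `### Response:`.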
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)