Update README.md
README.md
Google/Gemma has shared some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

hf_model_repo = "Geerath/Google_gemma_web_questions"

# Quantization config (`bnb_config` was referenced but not defined in the
# original snippet; a 4-bit NF4 setup is assumed here -- adjust as needed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Get the tokenizer
tokenizer = AutoTokenizer.from_pretrained(hf_model_repo)

# Load the model
model = AutoModelForCausalLM.from_pretrained(hf_model_repo,
                                             quantization_config=bnb_config,
                                             device_map="auto")

prompt = ["Question: Tell me something about IISc\n\nAnswer:\n"]

# Generate a response
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids,
                         max_new_tokens=200,
                         do_sample=True,
                         temperature=0.2)

result = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]

# Keep only the text from the "Question:" marker onwards
result = "Question:" + result.split("Question:")[1]

# Print the result
print(f"Generated response:\n{result}")
```
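The `split`-based cleanup near the end of the snippet drops anything before the first `Question:` marker, so the printed result always starts at the prompt. On a plain string the trimming behaves like this (a minimal sketch; the decoded text is hypothetical, standing in for whatever `batch_decode` returns):

```python
# Hypothetical decoded string: the prompt echoed back followed by the
# model's answer, possibly with stray leading text before the marker
decoded = "  \nQuestion: Tell me something about IISc\n\nAnswer:\nIISc is a research institute."

# Keep only the text from the first "Question:" marker onwards
trimmed = "Question:" + decoded.split("Question:")[1]

print(trimmed)
```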
#### Fine-tuning the model

You can find fine-tuning scripts and a notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of the [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt them to this model, simply change the model id to `google/gemma-7b-it`.