Update README.md
README.md CHANGED
@@ -12,10 +12,14 @@ base_model:
 - mlabonne/AlphaMonarch-7B
 - automerger/OgnoExperiment27-7B
 - Kukedlc/Jupiter-k-7B-slerp
+license: apache-2.0
 ---
 
 # NeuralShiva-7B-DT
 
+
+
+
 NeuralShiva-7B-DT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
 * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
@@ -103,4 +107,4 @@ messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in
 prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
-```
+```
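LazyMergekit drives [mergekit](https://github.com/arcee-ai/mergekit) from a YAML recipe. The actual configuration for NeuralShiva-7B-DT is not included in this diff, but a merge of the two models listed in the card might be specified roughly as follows. This is an illustrative sketch only: the merge method, layer ranges, and interpolation weights are assumptions, not the model's real recipe.

```yaml
# Hypothetical mergekit recipe -- the real NeuralShiva-7B-DT config is not in this diff.
slices:
  - sources:
      - model: automerger/YamShadow-7B    # first parent listed in the card
        layer_range: [0, 32]
      - model: mlabonne/AlphaMonarch-7B   # second parent listed in the card
        layer_range: [0, 32]
merge_method: slerp                       # assumed; the "DT" suffix may indicate another method
base_model: automerger/YamShadow-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]        # per-layer interpolation for attention weights
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]        # per-layer interpolation for MLP weights
    - value: 0.5                          # default blend for all other tensors
dtype: bfloat16
```

With a recipe like this saved as `config.yaml`, the mergekit CLI (`mergekit-yaml config.yaml ./merged --copy-tokenizer`) would produce the merged checkpoint; LazyMergekit wraps these steps, plus the Hub upload, in a Colab notebook.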