jeiku committed on
Commit 6a68ae4 · verified · Parent: 5c82cbd

Update README.md

Files changed (1)
  1. README.md +5 -40
README.md CHANGED
@@ -1,47 +1,12 @@
  ---
  tags:
- - merge
- - mergekit
- - lazymergekit
  ---
 
- # Very_Berry_Qwen2_7B
-
- Very_Berry_Qwen2_7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-
- ## 🧩 Configuration
-
- ```yaml
- models:
- - model: jeiku/qwen2base+ResplendentAI/Qwen_Sissification_LoRA_128
- - model: jeiku/qwen2base+ResplendentAI/Qwen_Soul_LoRA_128
- - model: jeiku/qwen2base+ResplendentAI/Qwen_jeiku_LoRA_128
- merge_method: model_stock
- base_model: jeiku/qwen2base
- dtype: bfloat16
- ```
 
- ## 💻 Usage
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = "jeiku/Very_Berry_Qwen2_7B"
- messages = [{"role": "user", "content": "What is a large language model?"}]
 
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
 
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
- ```
 
  ---
  tags:
+ - not-for-all-audiences
+ license: apache-2.0
  ---
 
+ # Very_Berry_Qwen2_7B
 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/1J817kx3zZccf5yvQYiGM.png)
 
+ It do the stuff.
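For reference, the configuration removed in this commit is a standard mergekit `model_stock` spec: three copies of the base model, each with a different LoRA applied (the `base+LoRA` syntax), merged against `jeiku/qwen2base`. A minimal sketch that parses the removed config and inspects its structure (assuming PyYAML is installed; the config text is reproduced verbatim from the diff above):

```python
import yaml

# The mergekit config removed in this commit, reproduced verbatim.
CONFIG = """
models:
  - model: jeiku/qwen2base+ResplendentAI/Qwen_Sissification_LoRA_128
  - model: jeiku/qwen2base+ResplendentAI/Qwen_Soul_LoRA_128
  - model: jeiku/qwen2base+ResplendentAI/Qwen_jeiku_LoRA_128
merge_method: model_stock
base_model: jeiku/qwen2base
dtype: bfloat16
"""

spec = yaml.safe_load(CONFIG)
print(spec["merge_method"])   # model_stock
print(len(spec["models"]))    # 3 (one entry per base+LoRA combination)
```

Each `base+LoRA` entry tells mergekit to apply the named LoRA adapter on top of the base model before merging, so `model_stock` averages three differently-adapted variants of the same base.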