Suparious committed · verified
Commit aa6c68a · 1 Parent(s): fecce8d

add model card

Files changed (1):
  1. README.md +124 -0
README.md CHANGED
@@ -1,5 +1,129 @@
  ---
+ library_name: transformers
+ tags:
+ - medical
+ - science
+ - biology
+ - chemistry
+ - not-for-all-audiences
+ - quantized
+ - 4-bit
+ - AWQ
+ - text-generation
+ - autotrain_compatible
+ - endpoints_compatible
+ - chatml
  license: apache-2.0
+ datasets:
+ - Locutusque/hercules-v4.0
+ language:
+ - en
+ model_creator: Locutusque
+ model_name: SlimHercules-4.0-Mistral-7B-v0.2
+ base_model: alpindale/Mistral-7B-v0.2-hf
+ model_type: mistral
+ pipeline_tag: text-generation
+ inference: false
+ prompt_template: '<|im_start|>system
+
+ {system_message}<|im_end|>
+
+ <|im_start|>user
+
+ {prompt}<|im_end|>
+
+ <|im_start|>assistant
+
+ '
+ quantized_by: Suparious
  ---
+ # Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 AWQ

  **UPLOAD IN PROGRESS**
+
+ - Model creator: [Locutusque](https://huggingface.co/Locutusque)
+ - Original model: [SlimHercules-4.0-Mistral-7B-v0.2](https://huggingface.co/Locutusque/SlimHercules-4.0-Mistral-7B-v0.2)
+
+ ![image/png](https://tse3.mm.bing.net/th/id/OIG1.vnrl3xpEcypR3McLW63q?pid=ImgGn)
+
+ ## Model Summary
+
+ SlimHercules-4.0-Mistral-7B-v0.2 is a fine-tuned language model derived from Mistralai/Mistral-7B-v0.2. It is specifically designed to excel at instruction following, function calling, and conversational interactions across various scientific and technical domains. The fine-tuning dataset, hercules-v4.0, expands upon the diverse capabilities of OpenHermes-2.5 with contributions from numerous curated datasets. This fine-tuning has equipped the model with enhanced abilities in:
+
+ - Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
+ - Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
+ - Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
+
+ This model differs from its predecessors in that the dataset was shrunk rather than shuffled, so that every constituent dataset could be incorporated without performance loss. In theory, this should give it much better performance than its predecessors.
+
+ ## How to use
+
+ ### Install the necessary packages
+
+ ```bash
+ pip install --upgrade autoawq autoawq-kernels
+ ```
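+
+ Both packages target CUDA: `autoawq` provides the loader and `autoawq-kernels` the prebuilt AWQ GPU kernels, so a CUDA-enabled PyTorch build and an NVIDIA GPU are assumed.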
+
+ ### Example Python code
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer, TextStreamer
+
+ model_path = "solidrust/SlimHercules-4.0-Mistral-7B-v0.2-AWQ"
+ system_message = "You are Hercules, incarnated as a powerful AI."
+
+ # Load the quantized model and its tokenizer
+ model = AutoAWQForCausalLM.from_quantized(model_path,
+                                           fuse_layers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                           trust_remote_code=True)
+ streamer = TextStreamer(tokenizer,
+                         skip_prompt=True,
+                         skip_special_tokens=True)
+
+ # Convert prompt to tokens using the ChatML template
+ prompt_template = """\
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant"""
+
+ prompt = "You're standing on the surface of the Earth. "\
+          "You walk one mile south, one mile west and one mile north. "\
+          "You end up exactly where you started. Where are you?"
+
+ tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
+                    return_tensors='pt').input_ids.cuda()
+
+ # Generate output, streaming tokens as they are produced
+ generation_output = model.generate(tokens,
+                                    streamer=streamer,
+                                    max_new_tokens=512)
+ ```
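+
+ One design note on the example above: `fuse_layers=True` asks AutoAWQ to fuse supported attention and MLP modules into faster fused kernels, which usually speeds up generation; it can be set to `False` if fusion causes issues for a particular model.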
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
+
+ AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.
+
+ It is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later, from any code or client that supports Transformers (see the sketch after this list)
+ - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
+
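+ As a minimal sketch of the plain-Transformers path (assuming `transformers>=4.35.0` and `autoawq` are installed and a CUDA GPU is available; the prompt string is illustrative only):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_path = "solidrust/SlimHercules-4.0-Mistral-7B-v0.2-AWQ"
+
+ # Transformers >= 4.35.0 detects the AWQ quantization config in the
+ # checkpoint and loads the weights via the installed autoawq kernels.
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
+
+ inputs = tokenizer("What is AWQ quantization?", return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+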
+ ## Prompt template: ChatML
+
+ ```plaintext
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
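+
+ If the bundled tokenizer ships a ChatML chat template (an assumption worth verifying, since not every quantized checkpoint does), the same prompt can be built with `tokenizer.apply_chat_template` instead of hand-formatting the string:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("solidrust/SlimHercules-4.0-Mistral-7B-v0.2-AWQ")
+
+ messages = [
+     {"role": "system", "content": "You are Hercules, incarnated as a powerful AI."},
+     {"role": "user", "content": "Explain AWQ quantization in one paragraph."},
+ ]
+
+ # Renders the ChatML template and appends the assistant header so the
+ # model continues as the assistant.
+ prompt = tokenizer.apply_chat_template(messages,
+                                        tokenize=False,
+                                        add_generation_prompt=True)
+ print(prompt)
+ ```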