rajabmondal committed on
Commit 0de3f13 · verified · 1 Parent(s): 62e4544

Update README.md

Files changed (1)
  1. README.md +63 -62
README.md CHANGED
@@ -122,6 +122,69 @@ Under Download Model, you can enter the model repo: infosys/NT-Java-1.1B-GGUF an
 
 Then click Download.
 
 
 ## How to use with Ollama
 
@@ -193,68 +256,6 @@ public class HelloWorld {\n public static void main(String[] args) {
 Your browser should open automatically and display a chat interface. (If it doesn't, just open your browser and point it at http://localhost:8080)
 
 
- ### On the command line, including multiple files at once
-
- I recommend using the `huggingface-hub` Python library:
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- Then you can download any individual model file to the current directory, at high speed, with a command like this:
-
- ```shell
- huggingface-cli download infosys/NT-Java-1.1B-GGUF NT-Java-1.1B_Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
- ```
-
- <details>
- <summary>More advanced huggingface-cli download usage (click to read)</summary>
-
- You can also download multiple files at once with a pattern:
-
- ```shell
- huggingface-cli download infosys/NT-Java-1.1B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
- ```
-
- For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
-
- To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
-
- ```shell
- pip3 install hf_transfer
- ```
-
- And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
-
- ```shell
- HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download infosys/NT-Java-1.1B-GGUF NT-Java-1.1B_Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
- ```
-
- Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
- </details>
- <!-- README_GGUF.md-how-to-download end -->
-
- <!-- README_GGUF.md-how-to-run start -->
- ## Example `llama.cpp` command
-
- Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
-
- ```shell
- ./main -ngl 35 -m NT-Java-1.1B_Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] {prompt} [/INST]"
- ```
-
- Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
-
- Change `-c 2048` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
-
- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
-
- For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
-
- ## How to run in `text-generation-webui`
-
- Further instructions here: [text-generation-webui/docs/04 - Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
-
 ## How to run from Python code
 
 You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later.
 
 
 Then click Download.
 
+ ### On the command line, including multiple files at once
+
+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install huggingface-hub
+ ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download infosys/NT-Java-1.1B-GGUF NT-Java-1.1B_Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ <details>
+ <summary>More advanced huggingface-cli download usage (click to read)</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download infosys/NT-Java-1.1B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download infosys/NT-Java-1.1B-GGUF NT-Java-1.1B_Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
+ </details>
+ <!-- README_GGUF.md-how-to-download end -->
+
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+
+ ```shell
+ ./main -ngl 35 -m NT-Java-1.1B_Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] {prompt} [/INST]"
+ ```
+
+ Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
+
+ Change `-c 2048` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
+
+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
+
+ ## How to run in `text-generation-webui`
+
+ Further instructions here: [text-generation-webui/docs/04 - Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
+
+
 
 ## How to use with Ollama
 
 
 Your browser should open automatically and display a chat interface. (If it doesn't, just open your browser and point it at http://localhost:8080)
 
 
 ## How to run from Python code
 
 You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later.
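
Below is a minimal sketch of the llama-cpp-python usage that last line refers to, assuming the `NT-Java-1.1B_Q4_K_M.gguf` file downloaded earlier sits in the working directory; the context size and GPU layer count simply mirror the `-c 2048` / `-ngl 35` flags shown above and are only illustrative:

```python
from llama_cpp import Llama

# Load the GGUF file downloaded above (filename assumed; use whichever quant you fetched).
llm = Llama(
    model_path="NT-Java-1.1B_Q4_K_M.gguf",
    n_ctx=2048,        # sequence length, analogous to -c 2048 above
    n_gpu_layers=35,   # analogous to -ngl 35; set to 0 without GPU acceleration
)

# Complete a Java snippet, mirroring the HelloWorld prompt shown in the diff context.
output = llm(
    "public class HelloWorld {\n    public static void main(String[] args) {",
    max_tokens=128,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```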