docs: update org name to ggml-org

`README.md` (changed):

````diff
@@ -2499,7 +2499,7 @@ model-index:
     value: 78.5277880014722
 ---
 
-# 
+# ggml-org/e5-small-v2-Q8_0-GGUF
 This model was converted to GGUF format from [`intfloat/e5-small-v2`](https://huggingface.co/intfloat/e5-small-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/intfloat/e5-small-v2) for more details on the model.
 
@@ -2514,12 +2514,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo 
+llama-cli --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo 
+llama-server --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
@@ -2536,9 +2536,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo 
+./llama-cli --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo 
+./llama-server --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -c 2048
 ```
````
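For context, the `--hf-repo`/`--hf-file` pair updated above tells llama.cpp which GGUF file to fetch from the Hugging Face Hub, and Hub files are served from the standard `resolve` URL pattern. The sketch below only illustrates the URL these two flags imply; it is not llama.cpp's actual download code, which handles this internally.

```python
# Illustrative sketch: the Hub URL implied by --hf-repo / --hf-file.
# (llama.cpp resolves and downloads this itself; shown here only to make
# the repo-name change in this commit concrete.)
repo = "ggml-org/e5-small-v2-Q8_0-GGUF"  # value passed to --hf-repo
fname = "e5-small-v2-q8_0.gguf"          # value passed to --hf-file
url = f"https://huggingface.co/{repo}/resolve/main/{fname}"
print(url)
# https://huggingface.co/ggml-org/e5-small-v2-Q8_0-GGUF/resolve/main/e5-small-v2-q8_0.gguf
```

This is why the org rename matters for existing scripts: any command still pointing at the old repo name resolves to a URL that no longer exists.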