Update README.md
README.md CHANGED
@@ -42,11 +42,11 @@ Below is a partial list of clients and libraries known to support GGUF:
 * [llama.cpp](https://github.com/ggerganov/llama.cpp). The foundational project for GGUF, featuring both a command-line interface (CLI) and server options.
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), A highly utilized web UI offering extensive features and robust extensions, supporting GPU acceleration.
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI with full GPU acceleration across all platforms and architectures, particularly effective for storytelling.
-* [LM Studio](https://lmstudio.ai/),
-* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui),
-* [Faraday.dev](https://faraday.dev/),
-* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python),
-* [candle](https://github.com/huggingface/candle),
+* [LM Studio](https://lmstudio.ai/), An intuitive and powerful local GUI designed for Windows and macOS (Silicon), featuring GPU acceleration for enhanced performance.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), A notable web UI with distinctive features, including a comprehensive model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), A user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), boasting GPU acceleration for smooth operation.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library offering GPU acceleration, LangChain support, and compatibility with OpenAI's API server.
+* [candle](https://github.com/huggingface/candle), A Rust-based ML framework prioritizing performance, equipped with GPU support and designed for ease of use.
 
 <!-- README_GGUF.md-about-gguf end -->
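
For context on the llama-cpp-python entry added above, here is a minimal sketch of loading a GGUF file with that library. It is not part of the README change itself; the model path, prompt, and parameter values are placeholders you would adjust for your own setup.

```python
# Minimal sketch: running a GGUF model with llama-cpp-python.
# "./model.gguf" and the prompt are placeholders, not files referenced by this README.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # any GGUF file on disk (placeholder path)
    n_ctx=2048,                 # context window size
    n_gpu_layers=-1,            # offload all layers to GPU if a GPU-enabled build is installed
)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],          # stop generation at the next question or newline
)
print(output["choices"][0]["text"])
```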