rajabmondal committed · verified
Commit 1482fe5 · 1 Parent(s): 01ce6ee

Update README.md

Files changed (1):
  1. README.md +2 -5
README.md CHANGED
@@ -45,7 +45,7 @@ Below is a partial list of clients and libraries known to support GGUF:
 * [LM Studio](https://lmstudio.ai/), An intuitive and powerful local GUI designed for Windows and macOS (Silicon), featuring GPU acceleration for enhanced performance.
 * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), A notable web UI with distinctive features, including a comprehensive model library for easy model selection.
 * [Faraday.dev](https://faraday.dev/), A user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), boasting GPU acceleration for smooth operation.
-* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library offering GPU acceleration, LangChain support, and compatibility with OpenAI's API server.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible API server.
 * [candle](https://github.com/huggingface/candle), A Rust-based ML framework prioritizing performance, equipped with GPU support and designed for ease of use.
 
 <!-- README_GGUF.md-about-gguf end -->
@@ -64,10 +64,7 @@ Below is a partial list of clients and libraries known to support GGUF:
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These NT-Java-1.1B GGUFs are supported by llama.cpp starting from May 29th, 2024.
-
-They are also compatible with a variety of third-party user interfaces and libraries. For a comprehensive list, please see the beginning of this README.
-
+The NT-Java-1.1B GGUFs are supported by llama.cpp and are compatible with a range of third-party user interfaces and libraries. For a detailed list, please refer to the beginning of this README.
 ## Explanation of quantisation methods
 
 <details>
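
The llama-cpp-python entry revised in the diff above describes a Python library that can load GGUF files directly, with optional GPU offload. The sketch below is a minimal illustration of that usage; the GGUF filename, context size, sampling settings, and prompt are assumptions for demonstration and are not taken from this commit.

```python
# Minimal sketch: loading a GGUF file with llama-cpp-python.
# The filename below is hypothetical; substitute the actual NT-Java-1.1B GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="NT-Java-1.1B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU when a GPU build is installed
)

# Plain completion call; the Java-style prompt and parameters are illustrative.
output = llm(
    "public static int add(int a, int b) {",
    max_tokens=64,
    temperature=0.2,
)
print(output["choices"][0]["text"])
```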
 
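
The revised Compatibility section states that these GGUFs work with llama.cpp and with the third-party tools listed earlier in the README. One such route is llama-cpp-python's OpenAI-compatible API server; the sketch below assumes such a server has already been started locally (for example via `python -m llama_cpp.server --model <path-to-gguf>`, which requires the `llama-cpp-python[server]` extra), and the port, model identifier, and prompt are all assumptions rather than values from this repository.

```python
# Sketch of querying a locally running llama-cpp-python server through its
# OpenAI-compatible endpoint. The base URL, model name, and prompt are assumptions;
# adjust them to match how the server was actually started.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # llama_cpp.server listens on port 8000 by default
    api_key="not-needed-for-local-use",    # a local server typically does not require a real key
)

response = client.chat.completions.create(
    model="NT-Java-1.1B",  # hypothetical model identifier
    messages=[
        {"role": "user", "content": "Write a Java method that reverses a string."}
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```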