jfinks25 committed
Commit 4aac5a2 · verified · 1 Parent(s): 092b9b4

Update README.md

Files changed (1):
1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ pinned: false
 
 **If you are looking for compressed models to run with vLLM, they have been moved to the [RedHatAI](https://huggingface.co/RedHatAI) organization. We look forward to continuing to publish optimized models for open-source use!**
 
-[Neural Magic](https://neuralmagic.com/) helps developers accelerate deep learning performance using automated model compression technologies and inference engines.
+[Neural Magic](https://neuralmagic.com/) (Acquired by Red Hat) helps developers accelerate deep learning performance using automated model compression technologies and inference engines.
 Download our compression-aware inference engines and open-source tools for fast model inference.
 * [vLLM](https://github.com/vllm-project/vllm/): A high-throughput and memory-efficient inference engine for at-scale deployment of performant open-source LLMs
 * [LLM Compressor](https://github.com/vllm-project/llm-compressor/): An HF-native library for applying quantization and sparsity algorithms to LLMs for optimized deployment with vLLM
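
For a concrete sense of how the two tools fit together, below is a minimal sketch that quantizes a model with LLM Compressor and loads the result in vLLM. The model name, quantization scheme, and output directory are illustrative assumptions, and the `oneshot` import path may differ between llmcompressor releases:

```python
# A minimal sketch: quantize a model with LLM Compressor, then load it in vLLM.
# The model name, scheme, and output directory are illustrative assumptions.
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from vllm import LLM

# FP8 dynamic quantization requires no calibration data, so a one-shot pass suffices.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic",
)

# The compressed checkpoint loads directly into vLLM for inference.
llm = LLM("TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic")
print(llm.generate("Hello, my name is")[0].outputs[0].text)
```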