---
title: README
emoji: 💻
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---
# Software-Delivered AI Inference
Neural Magic helps developers accelerate deep learning performance with automated model sparsification technologies and inference engines.
Download our sparsity-aware inference engines and open-source tools for fast model inference.
* [NM-vLLM](https://github.com/neuralmagic/nm-vllm): A high-throughput and memory-efficient inference engine for LLMs, incorporating the latest LLM optimizations like quantization and sparsity
* [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application (see the usage sketch after this list)
* [SparseML](https://github.com/neuralmagic/sparseml): Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
* [SparseZoo](https://sparsezoo.neuralmagic.com/): Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
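
As a minimal sketch of what running a sparse model with the DeepSparse Pipeline API can look like (the SparseZoo stub below is illustrative; browse SparseZoo for current stubs and supported tasks):

```python
# Assumes `pip install deepsparse[transformers]`; the SparseZoo stub is a placeholder example.
from deepsparse import Pipeline

# Compile a sparse-quantized sentiment-analysis model and run inference on CPU
sentiment = Pipeline.create(
    task="sentiment-analysis",
    model_path="zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none",
)
print(sentiment("Sparse models run fast on CPUs!"))
```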
**✨NEW✨ DeepSparse LLMs**: We are excited to announce our paper on Sparse Fine-Tuning of LLMs, starting with MPT and Llama 2. Check out the [paper](https://arxiv.org/abs/2310.06927), [models](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and [usage](https://research.neuralmagic.com/mpt-sparse-finetuning).
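
A rough sketch of running one of the sparse fine-tuned LLMs with DeepSparse's text-generation pipeline, assuming a recent DeepSparse release with LLM support (the model stub below is a placeholder; see the usage guide linked above for exact stubs and install requirements):

```python
# Assumes `pip install deepsparse[llm]`; the model stub below is a placeholder.
from deepsparse import TextGeneration

# Load a sparse fine-tuned MPT model from SparseZoo and generate text on CPU
pipeline = TextGeneration(model="zoo:mpt-7b-gsm8k_mpt_pretrain-pruned60_quantized")
output = pipeline("Natalia sold clips to 48 of her friends in April ...", max_new_tokens=100)
print(output.generations[0].text)
```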