Llama 2 Chat 7B for RK3588

This is a conversion of https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to the RKLLM format for Rockchip devices. It runs on the RK3588's NPU.

Converted with RKLLM runtime 1.0.1 using the Docker template from https://huggingface.co/Pelochus. A sketch of what such a conversion typically looks like is shown below.
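
The following is a minimal sketch of the conversion step using the rkllm-toolkit Python API (version 1.0.x). The input/output paths and quantization settings are illustrative assumptions, not the exact settings used for this model.

```python
# Conversion sketch, assuming the rkllm-toolkit Python API.
# Paths and quantization options below are assumptions.
from rkllm.api import RKLLM

llm = RKLLM()

# Load the original Hugging Face checkpoint (assumed local path).
ret = llm.load_huggingface(model='./Llama-2-7b-chat-hf')
if ret != 0:
    raise RuntimeError('Failed to load the Hugging Face model')

# Quantize and build for the RK3588 NPU (w8a8 is a common choice).
ret = llm.build(do_quantization=True, optimization_level=1,
                quantized_dtype='w8a8', target_platform='rk3588')
if ret != 0:
    raise RuntimeError('Build failed')

# Export the .rkllm artifact that the on-device runtime loads.
ret = llm.export_rkllm('./llama-2-7b-chat.rkllm')
if ret != 0:
    raise RuntimeError('Export failed')
```

The exported .rkllm file is what gets loaded by the RKLLM runtime on the RK3588 device.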

License

Same license as the original model:

https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/LICENSE.txt
