This is the Chinese-Llama-2-7b model converted to f16 GGML format for use with llama.cpp. You can run it with:
./main -m Chinese-Llama-2-7b-f16-ggml.bin -p 'hello world'
For the original model, see: https://huggingface.co/LinkSoul/Chinese-Llama-2-7b
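
A slightly fuller invocation is sketched below. The thread count, context size, token limit, and temperature are assumptions to tune for your hardware, not values prescribed by the model authors:
./main -m Chinese-Llama-2-7b-f16-ggml.bin -p '请用中文介绍一下你自己。' -n 256 -c 2048 -t 8 --temp 0.7 --color
Here -n caps the number of generated tokens, -c sets the context size, -t sets CPU threads, and --temp controls sampling temperature; add -i for an interactive chat-style session.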