
This model is an exllamav2 (EXL2) quantized version of taide/Llama-3.1-TAIDE-LX-8B-Chat. For the model's specifications, see the original model page.

The model can be loaded with tabbyAPI, using AnythingLLM as the interactive front end.
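As a minimal sketch of the serving step above, a tabbyAPI `config.yml` for an EXL2 quant might look like the following. The field names follow tabbyAPI's sample configuration, but the directory layout and port are illustrative assumptions, not taken from this card:

```yaml
# Hypothetical tabbyAPI config.yml fragment (assumed layout).
network:
  host: 127.0.0.1
  port: 5000

model:
  model_dir: models                                   # parent folder holding downloaded models
  model_name: Llama-3.1-TAIDE-LX-8B-Chat-EXL2-4.5bpw  # folder containing the EXL2 weights
```

tabbyAPI exposes an OpenAI-compatible API, so AnythingLLM can then be pointed at the server (e.g. `http://127.0.0.1:5000/v1` under the assumed settings above) as a generic OpenAI-compatible provider.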


Model tree for koungho/Llama-3.1-TAIDE-LX-8B-Chat-EXL2-4.5bpw
