CTranslate2 8-bit (int8) quantization of EXAONE-3.5-2.4B-Instruct.

This conversion would not have been possible without the model first being converted to Llama format by an upstream repository.
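Once the Llama-format weights are converted (typically with CTranslate2's `ct2-transformers-converter` tool and its `--quantization int8` option), the resulting directory can be loaded with the `ctranslate2.Generator` API. The sketch below is illustrative, not part of this repository: the local path, prompt, and sampling settings are assumptions, and it expects the model files to already be downloaded (e.g. via `huggingface_hub.snapshot_download`) and the `ctranslate2` and `transformers` packages installed.

```python
def pick_compute_type(device: str) -> str:
    # Illustrative helper: int8 weights are usually paired with float16
    # activations on GPU, and plain int8 on CPU.
    return "int8_float16" if device == "cuda" else "int8"


def main() -> None:
    # Heavy imports kept inside main() so the helper above can be used
    # without ctranslate2/transformers installed.
    import ctranslate2
    from transformers import AutoTokenizer

    # Assumed local path to a downloaded copy of this repository.
    model_dir = "EXAONE-3.5-2.4B-Instruct-ct2-int8"
    device = "cpu"

    generator = ctranslate2.Generator(
        model_dir, device=device, compute_type=pick_compute_type(device)
    )
    tokenizer = AutoTokenizer.from_pretrained(model_dir)

    # Build the prompt with the tokenizer's own chat template,
    # then convert token ids to the string tokens CTranslate2 expects.
    prompt_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Explain int8 quantization in one sentence."}],
        add_generation_prompt=True,
    )
    tokens = tokenizer.convert_ids_to_tokens(prompt_ids)

    results = generator.generate_batch(
        [tokens], max_length=256, sampling_topk=1
    )
    print(tokenizer.decode(results[0].sequences_ids[0]))


# main()  # uncomment after downloading the model files locally
```

Greedy decoding (`sampling_topk=1`) is shown for reproducibility; adjust sampling parameters to taste.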

Model repository: ctranslate2-4you/EXAONE-3.5-2.4B-Instruct-ct2-int8