# Prajjwalng/gemma_customercare_adapters-F16-GGUF

This LoRA adapter was converted to GGUF format from Prajjwalng/gemma_customercare_adapters via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
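If you need the converted adapter file locally, one way to fetch it is with the `huggingface-cli` tool from the `huggingface_hub` package; this is a minimal sketch, assuming the adapter file is named `gemma_customercare_adapters-f16.gguf` as in the usage examples below.

```bash
# Minimal sketch: download the converted adapter from the Hugging Face Hub.
# Requires huggingface_hub (pip install huggingface_hub).
huggingface-cli download Prajjwalng/gemma_customercare_adapters-F16-GGUF \
  gemma_customercare_adapters-f16.gguf --local-dir .
```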

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora gemma_customercare_adapters-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora gemma_customercare_adapters-f16.gguf (...other args)
```
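For a fuller picture, here is a hedged end-to-end sketch: it assumes you have an F16 GGUF build of the base model (google/gemma-2-2b) saved as `gemma-2-2b-f16.gguf`; that filename and the prompt are illustrative assumptions, not files shipped in this repository.

```bash
# Hypothetical end-to-end run; the base-model filename is an assumption.
# --lora applies the adapter at load time, on top of the unmodified base model.
llama-cli -m gemma-2-2b-f16.gguf \
  --lora gemma_customercare_adapters-f16.gguf \
  -p "Write a polite reply to a customer whose order has not arrived." \
  -n 128
```

llama.cpp also accepts `--lora-scaled <file> <scale>` if you want to apply the adapter at reduced strength.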

To learn more about using LoRA adapters with the llama.cpp server, refer to the llama.cpp server documentation.
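As a quick illustration of the server path, the sketch below assumes `llama-server` was started as above on its default port (8080); since the adapter is applied at load time via `--lora`, requests to the server's OpenAI-compatible chat endpoint already go through the adapted model.

```bash
# Query the running llama-server (assumed at the default http://localhost:8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "My order has not arrived yet. Can you help?"}
        ]
      }'
```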

## Model details

- Format: GGUF
- Model size: 83.1M params
- Architecture: gemma2
- Precision: 16-bit (F16)


## Base model

google/gemma-2-2b
