seonglae/llama-2-13b-chat-hf-gptq

Tags: Text Generation · Transformers · llama · llama-2 · llama2 · gptq · auto-gptq · 13b · 4bit · quantization
Files and versions
1 contributor · History: 5 commits
Latest commit: seonglae, Update README.md (584121a, about 2 years ago)
  • .gitattributes (1.52 kB): initial commit, about 2 years ago
  • README.md (995 Bytes): Update README.md, about 2 years ago
  • config.json (625 Bytes): build: AutoGPTQ for meta-llama/Llama-2-13b-chat-hf: 4bits, gr128, desc_act=False, about 2 years ago
  • generation_config.json (170 Bytes): build: AutoGPTQ for meta-llama/Llama-2-13b-chat-hf: 4bits, gr128, desc_act=False, about 2 years ago
  • gptq_model-4bit-128g.safetensors (7.26 GB, LFS): build: AutoGPTQ for meta-llama/Llama-2-13b-chat-hf: 4bits, gr128, desc_act=False, about 2 years ago
  • quantize_config.json (225 Bytes): build: AutoGPTQ for meta-llama/Llama-2-13b-chat-hf: 4bits, gr128, desc_act=False, about 2 years ago