amd/Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid
ONNX
License: apache-2.0
Branch: main · 3 contributors · 12 commits · latest commit: satreysa, "Update README.md" (81af4c3, verified, 11 days ago)
| File | Safe | Size | LFS | Last commit | Last updated |
|------|------|------|-----|-------------|--------------|
| .gitattributes | Safe | 1.59 kB | | Upload 9 files | 4 months ago |
| Mistral-7B-Instruct-v0.3_jit.bin | | 3.87 GB | LFS | Upload 9 files | 4 months ago |
| Mistral-7B-Instruct-v0.3_jit.onnx | Safe | 294 kB | LFS | Upload 9 files | 4 months ago |
| Mistral-7B-Instruct-v0.3_jit.onnx.data | | 3.97 GB | LFS | Upload 9 files | 4 months ago |
| Mistral-7B-Instruct-v0.3_jit.pb.bin | | 7.7 kB | LFS | Upload 9 files | 4 months ago |
| README.md | | 2.38 kB | | Update README.md | 11 days ago |
| config.json | Safe | 2 Bytes | | Create config.json | 4 months ago |
| genai_config.json | Safe | 1.74 kB | | Upload 9 files | 4 months ago |
| special_tokens_map.json | Safe | 551 Bytes | | Upload 9 files | 4 months ago |
| tokenizer.json | Safe | 3.67 MB | | Upload 9 files | 4 months ago |
| tokenizer.model | Safe | 587 kB | LFS | Upload 9 files | 4 months ago |
| tokenizer_config.json | Safe | 141 kB | | Upload 9 files | 4 months ago |
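
The presence of genai_config.json suggests the repository is packaged in the ONNX Runtime GenAI (OGA) format. The following is a minimal sketch of loading and running the model with the onnxruntime_genai Python package, assuming a local clone of the repo and a recent build that supports `Generator.append_tokens` (AMD also ships a Ryzen AI variant of the runtime for hybrid NPU + iGPU execution). The directory path, prompt template, and search options are illustrative assumptions, not taken from the model card.

```python
# Minimal sketch: token-by-token generation with onnxruntime-genai.
# Assumptions: the repo is cloned locally, and the installed onnxruntime_genai
# build can execute this hybrid model (on Ryzen AI hardware this typically
# means AMD's Ryzen AI ONNX Runtime GenAI package).
import onnxruntime_genai as og

model_dir = "./Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid"  # assumed local path

model = og.Model(model_dir)          # picks up genai_config.json from the directory
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()   # incremental detokenizer for streaming output

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)  # illustrative value

prompt = "[INST] Summarize what AWQ int4 quantization does. [/INST]"  # Mistral-style instruct format (assumed)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))

while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```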