OPEA · Safetensors · llama · 2-bit · intel/auto-round

cicdatopea committed · verified · Commit 7c29e48 · 1 Parent(s): e881b39

Files changed (1):
  1. README.md +2 -2

README.md CHANGED
@@ -100,10 +100,10 @@ prompt = "Once upon a time,"
 
  pip3 install lm-eval==0.4.7
 
- We found lm-eval to be very unstable for this model. Please set `add_bos_token=True` to align with the original model. Please use the autogptq format.
+ We found lm-eval to be very unstable for this model. Please set `add_bos_token=True` to align with the original model. **Please use the autogptq format.**
 
  ```bash
- lm-eval --model hf --model_args pretrained=OPEA/Llama-3.3-70B-Instruct-int3-sym-inc,add_bos_token=True --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --batch_size 16
+ lm-eval --model hf --model_args pretrained=OPEA/Llama-3.3-70B-Instruct-int2-sym-inc,add_bos_token=True --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --batch_size 16
  ```
  | Metric | BF16 (lm-eval==0.4.5) | W2G32 with BOS | BF16 (lm-eval==0.4.7 with BOS) | WO BOS |
  | :------------------------: | :------------------------: | :------------------------: | :------------------------: | :------------------------: |
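The `add_bos_token=True` flag the README insists on matters because Llama-family tokenizers can prepend a beginning-of-sequence token, and the quantized model was calibrated with it present. A minimal toy sketch of what the flag changes during encoding — the vocabulary here is invented for illustration and is not the real Llama 3.3 tokenizer, though 128000 is the BOS id used by the Llama 3 family:

```python
# Toy illustration of what add_bos_token=True changes at tokenization time.
# The vocab below is hypothetical; only the BOS id mirrors the Llama 3 family.
BOS_ID = 128000

def encode(tokens, add_bos_token=False):
    """Map a pre-split token list to ids, optionally prepending BOS."""
    vocab = {"Once": 1, "upon": 2, "a": 3, "time": 4, ",": 5}
    ids = [vocab[t] for t in tokens]
    return ([BOS_ID] + ids) if add_bos_token else ids

print(encode(["Once", "upon", "a", "time", ","]))
# → [1, 2, 3, 4, 5]
print(encode(["Once", "upon", "a", "time", ","], add_bos_token=True))
# → [128000, 1, 2, 3, 4, 5]
```

Without the BOS prepended, every evaluated prompt is shifted relative to what the model saw during training and calibration, which is consistent with the unstable lm-eval scores ("WO BOS" column) the table tracks.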