Files changed (1)
  1. README.md +114 -1
README.md CHANGED
@@ -11,6 +11,105 @@ tags:
 - qwen
 - distill
 - cot
+model-index:
+- name: Qwen-7B-Distill-Reasoner
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 33.96
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 22.18
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 21.15
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.29
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 2.78
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 20.2
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
+      name: Open LLM Leaderboard
 ---
 # **Qwen-7B-Distill-Reasoner**
 
@@ -68,4 +167,18 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 3. **Bias in Training Data:** The model's outputs may reflect biases present in the datasets it was fine-tuned on, which could limit its objectivity in certain contexts.
 4. **Performance on Non-Reasoning Tasks:** The model is optimized for chain-of-thought reasoning and may underperform on tasks that require simpler, less structured responses.
 5. **Resource-Intensive:** Running the model efficiently requires significant computational resources, which may limit accessibility for smaller-scale deployments.
-6. **Dependence on Input Quality:** The model’s performance heavily depends on the clarity and quality of the input provided. Ambiguous or poorly structured prompts may yield suboptimal results.
+6. **Dependence on Input Quality:** The model’s performance heavily depends on the clarity and quality of the input provided. Ambiguous or poorly structured prompts may yield suboptimal results.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Qwen-7B-Distill-Reasoner-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FQwen-7B-Distill-Reasoner&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric |Value (%)|
+|-------------------|--------:|
+|**Average** | 18.43|
+|IFEval (0-Shot) | 33.96|
+|BBH (3-Shot) | 22.18|
+|MATH Lvl 5 (4-Shot)| 21.15|
+|GPQA (0-shot) | 10.29|
+|MuSR (0-shot) | 2.78|
+|MMLU-PRO (5-shot) | 20.20|
+
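As a quick sanity check on this change: the table's **Average** (18.43) is simply the arithmetic mean of the six benchmark scores declared in the `model-index` block. Below is a minimal sketch of that check, assuming PyYAML is installed (`pip install pyyaml`); the YAML literal is an abridged copy of the metadata from this diff, with the `task` and `source` fields omitted for brevity.

```python
# Sanity-check sketch: recompute the leaderboard "Average" from the
# per-benchmark scores in the model-index metadata added by this PR.
import yaml

# Abridged copy of the diff's model-index block (task/source fields omitted).
MODEL_INDEX = """
model-index:
- name: Qwen-7B-Distill-Reasoner
  results:
  - dataset: {name: IFEval (0-Shot)}
    metrics: [{value: 33.96}]
  - dataset: {name: BBH (3-Shot)}
    metrics: [{value: 22.18}]
  - dataset: {name: MATH Lvl 5 (4-Shot)}
    metrics: [{value: 21.15}]
  - dataset: {name: GPQA (0-shot)}
    metrics: [{value: 10.29}]
  - dataset: {name: MuSR (0-shot)}
    metrics: [{value: 2.78}]
  - dataset: {name: MMLU-PRO (5-shot)}
    metrics: [{value: 20.2}]
"""

results = yaml.safe_load(MODEL_INDEX)["model-index"][0]["results"]
scores = {r["dataset"]["name"]: r["metrics"][0]["value"] for r in results}

for name, value in scores.items():
    print(f"{name:<20}{value:>8.2f}")
# The mean of the six scores should reproduce the table's Average row.
print(f"{'Average':<20}{sum(scores.values()) / len(scores):>8.2f}")  # 18.43
```

The same `model-index` block is what the Open LLM Leaderboard widget reads; once this PR is merged, the parsed scores should also be retrievable from the Hub, for instance via `huggingface_hub.ModelCard.load("prithivMLmods/Qwen-7B-Distill-Reasoner").data` (assuming a current `huggingface_hub`).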