Update README.md
README.md
---
library_name: transformers
license: mit
datasets:
- TIGER-Lab/MathInstruct
base_model:
- Qwen/Qwen2.5-0.5B
---

# Model Card for Model ID

Qwen2.5-0.5B finetuned on the MathInstruct dataset with a laptop RTX 4070 (8 GB) using llama-factory.
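The exact llama-factory run config isn't included in this card, so as a rough illustration, here is a minimal Python sketch of pulling the finetuning data with the `datasets` library; the `instruction`/`output` column names are an assumption about MathInstruct, so inspect the printed schema first:

```python
# Minimal sketch: load the MathInstruct finetuning data.
# Column names below are assumptions -- check print(ds) for the real schema.
from datasets import load_dataset

ds = load_dataset("TIGER-Lab/MathInstruct", split="train")
print(ds)  # shows available columns and row count

example = ds[0]
print(example["instruction"])  # assumed column name
print(example["output"])       # assumed column name
```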
Findings:

- After finetuning, the model can answer questions like "which is bigger, 9.11 or 9.9?" but still cannot count the number of r's in the word "strawberry".
- I asked three math questions generated by GPT-4o; the base model already handled them correctly, so it seems the base model was already trained on that kind of data.

Details can be found in the inference.ipynb file; a minimal sketch is shown below.
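Since the card's template doesn't pin the repo ID, this is a minimal transformers inference sketch with a placeholder model ID, not the actual path; the real runs live in inference.ipynb:

```python
# Minimal inference sketch. "your-username/qwen2.5-0.5b-mathinstruct" is a
# placeholder, not the actual model ID -- see inference.ipynb for real runs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen2.5-0.5b-mathinstruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Which is bigger, 9.11 or 9.9?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```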
## Model Details

Check the relevant files in the repo.

### Model Description

<!-- Provide a longer summary of what this model is. -->