jrc committed
Commit 8a7a620 · verified · Parent: efa6ebf

Update README.md

Files changed (1):
  1. README.md +5 -5
README.md CHANGED
@@ -25,7 +25,7 @@ Phi-3 Mini 4k Instruct model finetuned on math datasets.
 
 Use the code below to get started with the model.
 
- ```
+ ```python
 # Load model directly
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
@@ -38,7 +38,7 @@ model = AutoModelForCausalLM.from_pretrained("jrc/phi3-mini-math", trust_remote_
 Phi3 was trained using [torchtune]() and the training script + config file are located in this repository.
 
 CMD:
- ```
+ ```bash
 tune run lora_finetune_distributed.py --config mini_lora.yaml
 ```
 
@@ -57,7 +57,7 @@ tune run lora_finetune_distributed.py --config mini_lora.yaml
 The finetuned model is evaluated on [minerva-math](https://research.google/blog/minerva-solving-quantitative-reasoning-problems-with-language-models/) using [EleutherAI Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) through torchtune.
 
 CMD:
- ```
+ ```bash
 tune run eleuther_eval --config eleuther_evaluation \
 checkpoint.checkpoint_dir=./lora-phi3-math \
 tasks=["minerva_math"] \
@@ -65,7 +65,7 @@ tune run eleuther_eval --config eleuther_evaluation \
 ```
 
 RESULTS:
- ```
+
 | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
 |------------------------------------|-------|------|-----:|-----------|-----:|---|-----:|
 |minerva_math |N/A |none | 4|exact_match|0.1670|± |0.0051|
@@ -76,7 +76,7 @@ RESULTS:
 | - minerva_math_num_theory | 1|none | 4|exact_match|0.1148|± |0.0137|
 | - minerva_math_prealgebra | 1|none | 4|exact_match|0.3077|± |0.0156|
 | - minerva_math_precalc | 1|none | 4|exact_match|0.0623|± |0.0104|
- ```
+
 
 
 ## Technical Specifications [optional]
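Note: the diff context shows the quick-start code only partially, and the `from_pretrained` call appears truncated in the second hunk header. A minimal end-to-end sketch of the intended usage, assuming `trust_remote_code=True` completes the truncated keyword and using a made-up example prompt:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)

# Phi-3 Mini is an instruct model, so wrap the prompt in the chat template.
# The prompt below is a hypothetical example, not from the model card.
messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```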
 
 
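Note: the reported results come from torchtune's `eleuther_eval` recipe shown in the hunks above. For orientation, a rough equivalent through lm-evaluation-harness's own Python API might look like the sketch below; this is an assumed-equivalent setup, not the command that produced the table (the 4-shot setting mirrors the n-shot column):

```python
# Sketch: run minerva_math directly with lm-evaluation-harness,
# bypassing torchtune's eleuther_eval recipe (assumed-equivalent setup).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=jrc/phi3-mini-math,trust_remote_code=True",
    tasks=["minerva_math"],
    num_fewshot=4,  # matches the n-shot column in the results table
)
print(results["results"]["minerva_math"])
```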
 
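Reading the results table: the headline 4-shot exact match on minerva_math is 0.1670 (roughly 16.7% of problems answered exactly), with prealgebra the strongest subtask at 0.3077 and precalc the weakest at 0.0623.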