jrc committed · verified · Commit 2e99b69 · Parent: 8a7a620

Update README.md

Files changed (1): README.md (+0, -25)
README.md CHANGED

@@ -37,7 +37,6 @@ model = AutoModelForCausalLM.from_pretrained("jrc/phi3-mini-math", trust_remote_
 
 Phi3 was trained using [torchtune]() and the training script + config file are located in this repository.
 
-CMD:
 ```bash
 tune run lora_finetune_distributed.py --config mini_lora.yaml
 ```
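A minimal sketch of launching the same recipe on multiple GPUs, assuming torchtune's standard `--nproc_per_node` launcher flag; the GPU count is illustrative, not taken from the README:

```bash
# Sketch: run the LoRA recipe above across 2 GPUs (count is illustrative).
tune run --nproc_per_node 2 lora_finetune_distributed.py --config mini_lora.yaml
```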
@@ -56,7 +55,6 @@ tune run lora_finetune_distributed.py --config mini_lora.yaml
 
 The finetuned model is evaluated on [minerva-math](https://research.google/blog/minerva-solving-quantitative-reasoning-problems-with-language-models/) using [EleutherAI Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) through torchtune.
 
-CMD:
 ```bash
 tune run eleuther_eval --config eleuther_evaluation \
 checkpoint.checkpoint_dir=./lora-phi3-math \
@@ -64,7 +62,6 @@ tune run eleuther_eval --config eleuther_evaluation \
 batch_size=32
 ```
 
-RESULTS:
 
 | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
 |------------------------------------|-------|------|-----:|-----------|-----:|---|-----:|
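The eval command uses torchtune's `key=value` overrides on top of the bundled `eleuther_evaluation` config. A minimal sketch of a scaled-down smoke-test run; the `limit` key (max examples per task) is an assumption about that config:

```bash
# Sketch: same override syntax, scaled down for a quick check.
# `limit` (max examples per task) is assumed to exist in eleuther_evaluation.
tune run eleuther_eval --config eleuther_evaluation \
  checkpoint.checkpoint_dir=./lora-phi3-math \
  limit=16 \
  batch_size=8
```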
@@ -87,28 +84,6 @@ RESULTS:
 
 Max VRAM used per GPU: 29 GB
 
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
 ## Model Card Contact
 
 [More Information Needed]
 