Update README.md
README.md
CHANGED
@@ -25,10 +25,23 @@ Phi-3 Mini 4k Instruct model finetuned on math datasets.
Use the code below to get started with the model.
-
+```python
+# Load model directly
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)
+```
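+
+One way to run a quick generation with the loaded model (a minimal sketch; the prompt and decoding settings are illustrative, not from this repository):
+
+```python
+# Format a math question with the model's chat template, then generate an answer
+messages = [{"role": "user", "content": "What is 15% of 240?"}]
+inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+outputs = model.generate(inputs, max_new_tokens=256)
+# Decode only the newly generated tokens
+print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+```
+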
## Training Details
+Phi-3 Mini was finetuned using [torchtune](https://github.com/pytorch/torchtune); the training script and config file are located in this repository.
+
+Command:
+```bash
+tune run lora_finetune_distributed.py --config mini_lora.yaml
+```
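+
+For multi-GPU training, torchtune's distributed recipes are typically launched with an explicit device count via `tune run` (the count below is illustrative, not from this repository's setup):
+```bash
+# Example: launch the same LoRA recipe across 2 GPUs
+tune run --nproc_per_node 2 lora_finetune_distributed.py --config mini_lora.yaml
+```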
+
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->