Update README.md
README.md CHANGED
```diff
@@ -33,8 +33,8 @@ Our GRAG-PHI-SFT model is trained on this **[GRAG-SFT](https://huggingface.co/d
 ## Model Details
 
 The core models released in this batch are the following:
-| Size | Training Tokens |
-
+| Size | Training Tokens |
+|------|--------|
 | [GRAG-Phi-CPT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI) | 507.47 million |
 | [GRAG-Phi-SFT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI) | 2.03 billion |
 | [GRAG-Phi-ORPO](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) | 2.0577 billion |
```
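The table in the patched section only lists the released checkpoints and their training-token counts. As an illustration, the sketch below loads one of the listed repo IDs with the standard Hugging Face `transformers` causal-LM API; the repo ID comes from the table, while the dtype, device placement, and sample prompt are assumptions for demonstration, not part of the README.

```python
# Minimal sketch: load one of the checkpoints listed in the table above.
# Assumes the repo exposes a standard causal-LM interface via transformers;
# dtype/device settings below are illustrative and should match your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI"  # from the table

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on your device
    device_map="auto",           # requires the accelerate package
)

# Example German prompt (GRAG targets German RAG use cases); purely illustrative.
prompt = "Was ist Retrieval-Augmented Generation?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```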