# phi-2-instruct
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), trained with supervised fine-tuning (SFT) on a filtered version of the ultrachat_200k dataset.
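The exact filtering criteria are not documented in this card. As a minimal sketch, assuming the base dataset is HuggingFaceH4/ultrachat_200k and a simple length-based filter, loading the data could look like:

```python
from datasets import load_dataset

# Load the supervised fine-tuning split of ultrachat_200k.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

# Hypothetical filter: the actual filtering applied for this model is undocumented.
dataset = dataset.filter(lambda example: len(example["messages"]) >= 2)
```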
## Model description
More information about the model architecture and specific modifications made during fine-tuning is needed.
## Intended uses & limitations
More information about the intended use cases and any limitations of the model is needed.
## Training and evaluation data
More information about the datasets used for training and evaluation is needed.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 51967
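The training script itself is not published. A minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments` (the `output_dir` below is a hypothetical placeholder, not from the original run):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; anything not listed
# (e.g. output_dir, gradient accumulation) is an assumption.
training_args = TrainingArguments(
    output_dir="phi-2-instruct",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    max_steps=51967,
)
```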
### Training results
Detailed training results and performance metrics are not provided; contact the model creator for more information.
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
## Evaluation and Inference Example
- For an evaluation of the model and an inference example, refer to the Inference Notebook.
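If the notebook is unavailable, a minimal generation sketch is shown below. It assumes the repository hosts full merged weights (if it hosts a PEFT/LoRA adapter instead, it would be loaded with the `peft` library), and the `Instruct:`/`Output:` prompt format is borrowed from the base phi-2 card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes merged weights; phi-2 required trust_remote_code at the
# Transformers version listed above.
model_id = "venkycs/phi-2-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

prompt = "Instruct: Explain what supervised fine-tuning is.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```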
## Full Training Metrics on TensorBoard
View the full training metrics on TensorBoard here.
## Author's LinkedIn Profile