
# pythia160m-finetuned-alpaca

## Model Description

This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m), instruction-tuned on the Alpaca dataset.

## Training Details

- **Base Model:** EleutherAI/pythia-160m
- **Training Data:** Alpaca dataset
- **Training Parameters:**
  - Learning Rate: 2e-05
  - Batch Size: 1
  - Epochs: 1
- **Model Size:** 162M parameters (F32, Safetensors format)
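## Usage

A minimal inference sketch using the standard `transformers` API. The repo id `your-username/pythia160m-finetuned-alpaca` is a placeholder (substitute the actual Hub path of this model), and the prompt template below is the standard one from the original Alpaca release; whether this checkpoint was trained with exactly that template is an assumption.

```python
def format_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt in the standard Alpaca instruction format."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder repo id -- replace with this model's actual Hub path.
    model_id = "your-username/pythia160m-finetuned-alpaca"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = format_prompt("Explain what the Pythia model suite is.")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that a 160M-parameter model trained for a single epoch at batch size 1 is a small-scale experiment; expect limited instruction-following quality compared to larger checkpoints.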