Spaces: lxe / simple-llm-finetuner (Runtime error)

4 contributors
History: 39 commits
Latest commit: d3bfed8 by lxe ("Added duplication instructions"), about 2 years ago
  • example-datasets/ · Changed device_map to force GPU, see #6, https://github.com/tloen/alpaca-lora/issues/21 · about 2 years ago
  • .gitattributes · 1.48 kB · Add .gitattributes for spaces · about 2 years ago
  • .gitignore · 101 Bytes · Refactor; fix model/lora loading/reloading in inference. Fixes #10, #6 · about 2 years ago
  • Inference.ipynb · 4.7 kB · Refactor; fix model/lora loading/reloading in inference. Fixes #10, #6 · about 2 years ago
  • README.md · 4.62 kB · Clarify that 16GB VRAM in itself is enough (#21) · about 2 years ago
  • Simple_LLaMA_FineTuner.ipynb · 21.4 kB · Added ipynb and another example · about 2 years ago
  • main.py · 18 kB · Added duplication instructions · about 2 years ago
  • requirements.txt · 158 Bytes · Update requirements.txt · about 2 years ago