Spaces: lxe / simple-llm-finetuner (Runtime error)

  • 4 contributors
History: 38 commits
Latest commit by VadimP: Clarify that 16GB VRAM in itself is enough (#21)
bc97c87 unverified (over 2 years ago)
  • example-datasets
    Changed device_map to force GPU, see #6, https://github.com/tloen/alpaca-lora/issues/21 (over 2 years ago)
  • .gitattributes (1.48 kB)
    Add .gitattributes for spaces (over 2 years ago)
  • .gitignore (101 Bytes)
    Refactor; fix model/lora loading/reloading in inference. Fixes #10, #6 (over 2 years ago)
  • Inference.ipynb (4.7 kB)
    Refactor; fix model/lora loading/reloading in inference. Fixes #10, #6 (over 2 years ago)
  • README.md (4.62 kB)
    Clarify that 16GB VRAM in itself is enough (#21) (over 2 years ago)
  • Simple_LLaMA_FineTuner.ipynb (21.4 kB)
    Added ipynb and another example (over 2 years ago)
  • main.py (17.1 kB)
    Added huggingface spaces stuff (over 2 years ago)
  • requirements.txt (158 Bytes)
    Update requirements.txt (over 2 years ago)