To run the AutoTrain CLI locally or in Colab, install the autotrain-advanced Python package:
$ pip install autotrain-advanced
and then run the following command:
$ export HF_TOKEN=your_hugging_face_write_token
$ autotrain app --host 127.0.0.1 --port 8000
This will start the app on http://127.0.0.1:8000.
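If the app does not start, a quick sanity check (not part of the original steps) is to confirm that the package is installed and on your PATH; the `--version` flag is listed in the CLI help shown further down this page:

$ autotrain --version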
AutoTrain doesn't install pytorch, torchaudio, torchvision, or any other large dependencies. You will need to install them separately. It is therefore recommended to use a conda environment:
$ conda create -n autotrain python=3.10
$ conda activate autotrain
$ pip install autotrain-advanced
$ conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
$ conda install -c "nvidia/label/cuda-12.1.0" cuda-nvcc
$ conda install xformers -c xformers
$ python -m nltk.downloader punkt
$ pip install flash-attn --no-build-isolation
$ pip install deepspeed
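As an optional sanity check that is not part of the original instructions, you can confirm that the environment's PyTorch build can see your GPU; both `torch.__version__` and `torch.cuda.is_available()` are standard PyTorch APIs:

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"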
$ export HF_TOKEN=your_hugging_face_write_token
$ autotrain --help
This will show the CLI commands that can be used:
$ autotrain --help
usage: autotrain <command> [<args>]
positional arguments:
{
app,
llm,
setup,
dreambooth,
api,
text-classification,
image-classification,
tabular,
spacerunner,
seq2seq,
token-classification
}
commands
options:
-h, --help show this help message and exit
--version, -v Display AutoTrain version
For more information about a command, run: `autotrain <command> --help`
The autotrain commands that end users will be interested in are:
- `app`: Start the AutoTrain UI
- `llm`: Train a language model
- `dreambooth`: Train a model using DreamBooth
- `text-classification`: Train a text classification model
- `image-classification`: Train an image classification model
- `tabular`: Train a tabular model
- `spacerunner`: Train any custom model using SpaceRunner
- `seq2seq`: Train a sequence-to-sequence model
- `token-classification`: Train a token classification model

In case of any issues, please open an issue on GitHub.
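To see the full set of options for a specific trainer, use the per-command help mentioned above; the example below uses the `llm` command. The exact flags each command accepts vary between AutoTrain versions, so treat `--help` as the authoritative reference for your installed version:

$ autotrain llm --help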