The from_single_file() method allows you to load:

- a model stored in a single .ckpt or .safetensors file
- a model stored in its original single-file layout rather than the Diffusers multifolder layout

Read the Model files and layouts guide to learn more about the Diffusers multifolder layout versus the single-file layout, and how to load models stored in these different layouts.
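As a quick illustration of the difference, the same Stable Diffusion checkpoint can be loaded either from its Diffusers multifolder repository with from_pretrained() or from a single checkpoint file with from_single_file(). A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 repository used in the examples further below:

>>> from diffusers import StableDiffusionPipeline

>>> # Diffusers multifolder layout: each component (UNet, VAE, text encoder, ...) lives in its own subfolder
>>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

>>> # Single-file layout: all weights come from one .ckpt/.safetensors checkpoint
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt"
... )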
Supported models include StableCascadeUNet.

FromSingleFileMixin

Load model weights saved in the .ckpt format into a DiffusionPipeline.
from_single_file( pretrained_model_link_or_path, **kwargs )
Parameters
pretrained_model_link_or_path (str or os.PathLike, optional) — Can be either:
  - A link to the .ckpt file (for example "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
  - A path to a local file containing all pipeline weights.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
original_config_file (str, optional) — The path to the original config file that was used to train the model. If not provided, the config file will be inferred from the checkpoint file.
config (str, optional) — Can be either:
  - A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline hosted on the Hub.
  - A path to a directory (for example ./my_pipeline_directory/) containing the pipeline component configs in Diffusers format.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load- and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipeline's __init__ method. See the example below for more information.

Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default.
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )
>>> # Load the pipeline from a local file
>>> # (here, a checkpoint previously downloaded to ./v1-5-pruned-emaonly.ckpt)
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")
>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
... torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")Load pretrained weights saved in the .ckpt or .safetensors format into a model.
FromOriginalModelMixin

Load pretrained weights saved in the .ckpt or .safetensors format into a model.

from_single_file( pretrained_model_link_or_path_or_dict: typing.Optional[str] = None, **kwargs )
Parameters
pretrained_model_link_or_path_or_dict (str, optional) — Can be either:
  - A link to the .safetensors or .ckpt file (for example "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.safetensors") on the Hub.
  - A path to a local file containing the model weights.
  - A state dict containing the model weights.
config (str, optional) — Can be either:
  - A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline hosted on the Hub.
  - A path to a directory (for example ./my_pipeline_directory/) containing the pipeline component configs in Diffusers format.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
original_config (str, optional) — Dict or path to a yaml file containing the configuration for the model in its original format. If a dict is provided, it is used to initialize the model configuration.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load- and saveable variables of the model class. The overwritten components are passed directly to the model's __init__ method. See the example below for more information.

Instantiate a model from pretrained weights saved in the original .ckpt or .safetensors format. The model is set in evaluation mode (model.eval()) by default.
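As with the pipeline loader, a single model component can be instantiated directly from a checkpoint file. A minimal sketch using the StableCascadeUNet class mentioned above; the checkpoint URL is a placeholder, not a real file:

>>> import torch
>>> from diffusers import StableCascadeUNet

>>> # Placeholder single-file checkpoint location; substitute a real .safetensors or .ckpt file
>>> unet = StableCascadeUNet.from_single_file(
...     "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.safetensors",
...     torch_dtype=torch.float16,
... )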