modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
hannahbillo/distilbert-base-uncased-finetuned-ner | hannahbillo | 2023-05-04T20:30:35Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-21T11:58:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9264624571491762
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.9318207095984874
- name: Accuracy
type: accuracy
value: 0.9840024147298521
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Precision: 0.9265
- Recall: 0.9372
- F1: 0.9318
- Accuracy: 0.9840
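For a quick sanity check, the model can be exercised with the standard `transformers` NER pipeline (a minimal sketch; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hannahbillo/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```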
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
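These hyperparameters map onto `TrainingArguments` roughly as follows (a sketch for orientation, not the author's actual training script):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```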
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 439 | 0.0751 | 0.8976 | 0.9103 | 0.9039 | 0.9789 |
| 0.219 | 2.0 | 878 | 0.0626 | 0.9130 | 0.9284 | 0.9206 | 0.9817 |
| 0.0558 | 3.0 | 1317 | 0.0623 | 0.9195 | 0.9332 | 0.9263 | 0.9826 |
| 0.0321 | 4.0 | 1756 | 0.0610 | 0.9251 | 0.9359 | 0.9305 | 0.9835 |
| 0.0228 | 5.0 | 2195 | 0.0621 | 0.9265 | 0.9372 | 0.9318 | 0.9840 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dgalik/distilbert-finetuning-hate-speech-score-all-samples-dropout005-epochs-10 | dgalik | 2023-05-04T20:15:07Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-05-04T15:06:26Z | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuning-hate-speech-score-all-samples-dropout005-epochs-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuning-hate-speech-score-all-samples-dropout005-epochs-10
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3577
- Mse: 0.3577
- Rmse: 0.5981
- Mae: 0.2583
- R2: 0.9370
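The MSE/RMSE/R2 metrics above suggest a single-output regression head; below is a minimal loading sketch under that assumption (the example input is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "dgalik/distilbert-finetuning-hate-speech-score-all-samples-dropout005-epochs-10"
tokenizer = AutoTokenizer.from_pretrained(repo)
# num_labels=1 assumes the checkpoint carries a single-output regression head.
model = AutoModelForSequenceClassification.from_pretrained(repo, num_labels=1)

inputs = tokenizer("example text to score", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```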
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rodekruis/sml-ukr-word-classifier-medium | rodekruis | 2023-05-04T20:06:42Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-04T20:06:20Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rodekruis/sml-ukr-word-classifier-medium
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rodekruis/sml-ukr-word-classifier-medium")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
tsumeone/wizard-vicuna-13b-4bit-128g-cuda | tsumeone | 2023-05-04T19:53:17Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-04T19:21:51Z | ---
library_name: transformers
pipeline_tag: text-generation
---
A 4-bit GPTQ quant of https://huggingface.co/junelee/wizard-vicuna-13b, tested working with Occam's KoboldAI/GPTQ fork.
A Triton quant already exists here, but it will not work with Occam's KoboldAI/GPTQ fork: https://huggingface.co/fbjr/wizard-vicuna-13b-4bit-128g
Note that this model is fairly heavily censored (in my opinion) and delivers AI-moralizing responses to prompts that Vicuna 1.1 does not complain about.
Quantization command used:
```
python llama.py ./wizard-vicuna-13b c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
``` |
abdullahalzubaer/llama-7b-lora-sst2 | abdullahalzubaer | 2023-05-04T19:30:15Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-02T15:29:45Z | This is where the usage instructions will go; for now, this is just a test.
Okay, it works :)
## TODO
* [ ] Minimal working code
* [ ] Dataset description |
reginaboateng/umls_relational_extraction_adapter_clinical_bert | reginaboateng | 2023-05-04T19:25:54Z | 1 | 1 | adapter-transformers | [
"adapter-transformers",
"adapterhub:umls",
"bert",
"dataset:umls",
"region:us"
] | null | 2023-05-04T19:25:48Z | ---
tags:
- adapterhub:umls
- adapter-transformers
- bert
datasets:
- umls
---
# Adapter `reginaboateng/umls_relational_extraction_adapter_clinical_bert` for emilyalsentzer/Bio_ClinicalBERT
An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
adapter_name = model.load_adapter("reginaboateng/umls_relational_extraction_adapter_clinical_bert", source="hf", set_active=True)
```
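Once loaded, the adapter and its head run as part of a normal forward pass. A brief sketch (the example sentence is illustrative; label semantics depend on the UMLS relation head):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
inputs = tokenizer("aspirin is used to treat headache", return_tensors="pt")
outputs = model(**inputs)  # runs through the active adapter and its prediction head
print(outputs.logits.argmax(-1))
```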
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
ageng-anugrah/indobert-large-p2-finetuned-ner | ageng-anugrah | 2023-05-04T19:09:10Z | 163 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"indobert",
"indobenchmark",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-05T09:00:46Z | ---
language: id
tags:
- indobert
- indobenchmark
---
## How to use
### Load model and tokenizer
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-ner")
```
### Extract NER Tag
```python
import torch

def predict(model, tokenizer, sentence):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = tokenizer(
        sentence.split(),
        is_split_into_words=True,
        return_offsets_mapping=True,
        return_tensors="pt",
        padding="max_length",
        truncation=True,
        max_length=512,
    )
    model.to(device)

    # move tensors to the selected device
    ids = inputs["input_ids"].to(device)
    mask = inputs["attention_mask"].to(device)

    # forward pass
    outputs = model(ids, attention_mask=mask)
    logits = outputs[0]

    active_logits = logits.view(-1, model.num_labels)  # shape (batch_size * seq_len, num_labels)
    flattened_predictions = torch.argmax(active_logits, axis=1)  # shape (batch_size * seq_len,) - predictions at the token level

    tokens = tokenizer.convert_ids_to_tokens(ids.squeeze().tolist())
    token_predictions = [model.config.id2label[i] for i in flattened_predictions.cpu().numpy()]
    wp_preds = list(zip(tokens, token_predictions))  # list of (wordpiece, prediction) tuples

    prediction = []
    for token_pred, mapping in zip(wp_preds, inputs["offset_mapping"].squeeze().tolist()):
        # only predictions on the first word piece of each word are kept
        if mapping[0] == 0 and mapping[1] != 0:
            prediction.append(token_pred[1])

    return sentence.split(), prediction

sentence = "BJ Habibie adalah Presiden Indonesia ke-3"
words, labels = predict(model, tokenizer, sentence)
``` |
BartekSadlej/ppo-LunarLander-v2 | BartekSadlej | 2023-05-04T19:06:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T19:05:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.77 +/- 17.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and zip filename are assumed from the usual `huggingface_sb3` naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the standard huggingface_sb3 convention.
checkpoint = load_from_hub(repo_id="BartekSadlej/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
TehVenom/Metharme-7b-Merged-Safetensors | TehVenom | 2023-05-04T18:53:23Z | 17 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-30T02:36:36Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
## Model Details:
Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/metharme-7b
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
Same process applies. Usually, it is best to do a sliding window over the user and model turns, but keep the system prompt fixed at the start of the context window.
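As a rough illustration, prompts in this format can be fed through the standard `transformers` generation API (a sketch; the sampling settings are illustrative, not tuned recommendations):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TehVenom/Metharme-7b-Merged-Safetensors"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```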
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
TehVenom/Pygmalion_AlpacaLora-7b | TehVenom | 2023-05-04T18:53:07Z | 1,520 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-30T22:52:52Z | The LLaMA based Pygmalion-7b model:
https://huggingface.co/PygmalionAI/pygmalion-7b
Merged alongside Tloen's Alpaca LoRA:
https://huggingface.co/tloen/alpaca-lora-7b
This was done to test whether LoRAs trained for other LLaMA fine-tunes work with Pygmalion,
and to have the merged model available on inference backends that do not yet support LoRAs.
Treat this as a normal HF Transformers model.
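A minimal loading sketch with the standard `transformers` API (the Alpaca-style prompt is an assumption for illustration, not a confirmed format for this merge):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TehVenom/Pygmalion_AlpacaLora-7b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Alpaca-style instruction prompt (assumed; adjust to your use case).
prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
``` |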
TehVenom/Metharme-7b-4bit-GPTQ-Safetensors | TehVenom | 2023-05-04T18:52:51Z | 8 | 4 | transformers | [
"transformers",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-04-30T09:41:35Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
## Model Details:
This model has the XOR files pre-applied out of the box.
Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/Metharme-7b
It has also been quantized down to 4-bit using the GPTQ library available here: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python llama.py .\Metharme-7b-Merged-Safetensors c4 --wbits 4 --act-order --save_safetensors Metharme-7B-GPTQ-4bit.act-order.safetensors
```
This is the best eval I could get after trying many argument combinations; it was achieved by converting the model from bf16 to fp32 before quantizing down to 4-bit with `--act-order` as the sole argument.
- Wikitext 2: 6.2369050979614
- PTB-New: 47.5177230834960
- C4-New: 7.9044938087463
---
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
Same process applies. Usually, it is best to do a sliding window over the user and model turns, but keep the system prompt fixed at the start of the context window.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
TehVenom/Pygmalion-7b-4bit-Q4_1-GGML | TehVenom | 2023-05-04T18:52:23Z | 0 | 19 | null | [
"text generation",
"conversational",
"text-generation",
"en",
"region:us"
] | text-generation | 2023-04-30T08:18:26Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
> KoboldCPP was initially unable to stop inference when an EOS token was emitted, which caused the model to devolve into gibberish.
>
> This has since been fixed on the dev branch of KoboldCPP. Make sure you're compiling the latest version; the fix landed only after this model was released.
>
> When running KoboldCPP, you will need to add the `--unbantokens` flag for this model to behave properly.
## Model Details:
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Applying the XORs & Quantizing
This model has the XOR files pre-applied out of the box.
Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
It has also been quantized down to 4-bit (Q4_1) using the GGML library available here: https://github.com/LostRuins/koboldcpp
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
lllyasviel/control_v11f1e_sd15_tile | lllyasviel | 2023-05-04T18:51:13Z | 14,272 | 107 | diffusers | [
"diffusers",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-05-04T18:42:24Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11f1e_sd15_tile
---
# Controlnet - v1.1 - *Tile Version*
**Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **tiled images**. Conceptually, it is similar to a super-resolution model, but its usage is not limited to that. It is also possible to generate details at the same size as the input (condition) image.
**This model was contributed by [*takuma104*](https://huggingface.co/takuma104)**
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**

  ```
  @misc{zhang2023adding,
      title={Adding Conditional Control to Text-to-Image Diffusion Models},
      author={Lvmin Zhang and Maneesh Agrawala},
      year={2023},
      eprint={2302.05543},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
  }
  ```
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
1. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
2. Run code:
```python
import torch
from PIL import Image
from diffusers import ControlNetModel, DiffusionPipeline
from diffusers.utils import load_image

def resize_for_condition_image(input_image: Image, resolution: int):
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_controlnet_img2img",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

source_image = load_image(
    "https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"
)
condition_image = resize_for_condition_image(source_image, 1024)

image = pipe(
    prompt="best quality",
    negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
    image=condition_image,
    controlnet_conditioning_image=condition_image,
    width=condition_image.size[0],
    height=condition_image.size[1],
    strength=1.0,
    generator=torch.manual_seed(0),
    num_inference_steps=32,
).images[0]

image.save("output.png")
```


## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
lllyasviel/control_v11e_sd15_shuffle | lllyasviel | 2023-05-04T18:51:03Z | 6,588 | 16 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-04-14T19:25:23Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11e_sd15_shuffle
---
# Controlnet - v1.1 - *Shuffle Version*
**Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11e_sd15_shuffle.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **shuffled images**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**

  ```
  @misc{zhang2023adding,
      title={Adding Conditional Control to Text-to-Image Diffusion Models},
      author={Lvmin Zhang and Maneesh Agrawala},
      year={2023},
      eprint={2302.05543},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
  }
  ```
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install https://github.com/patrickvonplaten/controlnet_aux
```sh
$ pip install controlnet_aux==0.3.0
```
2. Let's install `diffusers` and related packages:
**IMPORTANT:** Make sure that you have `diffusers.__version__ >= 0.16.0.dev0` installed!
```
$ pip install git+https://github.com/huggingface/diffusers.git transformers accelerate
```
3. Run code:
```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)
from diffusers.utils import load_image
from controlnet_aux import ContentShuffleDetector

checkpoint = "lllyasviel/control_v11e_sd15_shuffle"

image = load_image(
    "https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/input.png"
)
prompt = "New York"

processor = ContentShuffleDetector()
control_image = processor(image)
control_image.save("./images/control.png")

controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

generator = torch.manual_seed(33)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
lllyasviel/control_v11p_sd15_softedge | lllyasviel | 2023-05-04T18:50:55Z | 13,182 | 17 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-04-14T19:24:54Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_softedge
---
# Controlnet - v1.1 - *Soft Edge Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_softedge.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **Soft edges**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
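For instance, a minimal sketch of pairing this soft edge ControlNet with a dreamboothed base model might look as follows; the base model id here is only an illustrative assumption, not something this card prescribes:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumed example: "sd-dreambooth-library/herge-style" stands in for any
# dreamboothed SD 1.5 derivative; substitute your own checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_softedge", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "sd-dreambooth-library/herge-style",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```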
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install https://github.com/patrickvonplaten/controlnet_aux
```sh
$ pip install controlnet_aux==0.3.0
```
2. Let's install `diffusers` and related packages:
```sh
$ pip install diffusers transformers accelerate
```
3. Run code:
```python
import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import PidiNetDetector, HEDdetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11p_sd15_softedge"
image = load_image(
"https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/input.png"
)
prompt = "royal chamber with fancy bed"
# Both HED and PidiNet produce usable soft-edge maps; PidiNet is used here.
# processor = HEDdetector.from_pretrained('lllyasviel/Annotators')
processor = PidiNetDetector.from_pretrained('lllyasviel/Annotators')
control_image = processor(image, safe=True)
control_image.save("./images/control.png")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## Improvements in Soft Edge 1.1:
- Soft Edge 1.1 was called HED 1.0 in the previous ControlNet.
- The training dataset of the previous ControlNet 1.0 had several problems: (1) a small group of greyscale human images was duplicated thousands of times (!!), making the previous model somewhat likely to generate greyscale human images; (2) some images were of low quality, very blurry, or marred by significant JPEG artifacts; (3) a small group of images had wrongly paired prompts caused by a mistake in the data processing scripts. The new model fixes all of these dataset problems and should be more reasonable in many cases.
- Soft Edge 1.1 is significantly better than HED 1.0 (in nearly 100% of cases). This is mainly because the HED and PIDI estimators tend to hide a corrupted greyscale version of the original image inside the soft edge map, and the previous HED 1.0 model over-fitted to restoring that hidden image rather than performing boundary-aware diffusion. The training of Soft Edge 1.1 applied 75% "safe" filtering to remove such hidden corrupted greyscale images from the control maps, which makes Soft Edge 1.1 very robust. In our tests, Soft Edge 1.1 is as usable as the depth model and has the potential to be used more frequently; a minimal sketch of the `safe` flag follows below.
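As a minimal sketch of that `safe` flag (reusing the `controlnet_aux` detector from the example above), the filtered and unfiltered soft-edge maps can be compared directly:

```python
from controlnet_aux import PidiNetDetector
from diffusers.utils import load_image

processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
image = load_image(
    "https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/input.png"
)
# safe=False may keep the hidden greyscale detail described above in the map;
# safe=True applies the filtering that Soft Edge 1.1 was trained with.
raw_map = processor(image, safe=False)
safe_map = processor(image, safe=True)
```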
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
lllyasviel/control_v11p_sd15s2_lineart_anime | lllyasviel | 2023-05-04T18:49:50Z | 6,984 | 29 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-04-14T19:25:03Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15s2_lineart_anime
---
# Controlnet - v1.1 - *lineart_anime Version*
**Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15s2_lineart_anime.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **lineart_anime images**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install https://github.com/patrickvonplaten/controlnet_aux
```sh
$ pip install controlnet_aux==0.3.0
```
2. Let's install `diffusers` and related packages:
```sh
$ pip install diffusers transformers accelerate
```
3. Run code:
```python
import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from controlnet_aux import LineartAnimeDetector
from transformers import CLIPTextModel
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11p_sd15s2_lineart_anime"
image = load_image(
"https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/input.png"
)
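# resize to SD 1.5's native 512x512 resolution before running the detector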
image = image.resize((512, 512))
prompt = "A warrior girl in the jungle"
processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(image)
control_image.save("./images/control.png")
# skip the final layer of the CLIP text encoder ("clip skip"), as is common for anime-style models
text_encoder = CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="text_encoder", num_hidden_layers=11, torch_dtype=torch.float16)
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", text_encoder=text_encoder, controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```
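The `num_hidden_layers=11` argument above truncates the 12-layer CLIP text encoder by one layer, the `diffusers` counterpart of the "clip skip" setting often used with anime models. A quick sketch to confirm the truncation:

```python
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder", num_hidden_layers=11
)
print(text_encoder.config.num_hidden_layers)  # 11 instead of the usual 12
```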



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
lllyasviel/control_v11p_sd15_seg | lllyasviel | 2023-05-04T18:49:33Z | 6,810 | 12 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-04-14T19:23:48Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_seg
---
# Controlnet - v1.1 - *seg Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_seg.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **seg images**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Let's install `diffusers` and related packages:
```sh
$ pip install diffusers transformers accelerate
```
2. Let's define a color table we'll need later.
```py
import numpy as np
ada_palette = np.asarray([
[0, 0, 0],
[120, 120, 120],
[180, 120, 120],
[6, 230, 230],
[80, 50, 50],
[4, 200, 3],
[120, 120, 80],
[140, 140, 140],
[204, 5, 255],
[230, 230, 230],
[4, 250, 7],
[224, 5, 255],
[235, 255, 7],
[150, 5, 61],
[120, 120, 70],
[8, 255, 51],
[255, 6, 82],
[143, 255, 140],
[204, 255, 4],
[255, 51, 7],
[204, 70, 3],
[0, 102, 200],
[61, 230, 250],
[255, 6, 51],
[11, 102, 255],
[255, 7, 71],
[255, 9, 224],
[9, 7, 230],
[220, 220, 220],
[255, 9, 92],
[112, 9, 255],
[8, 255, 214],
[7, 255, 224],
[255, 184, 6],
[10, 255, 71],
[255, 41, 10],
[7, 255, 255],
[224, 255, 8],
[102, 8, 255],
[255, 61, 6],
[255, 194, 7],
[255, 122, 8],
[0, 255, 20],
[255, 8, 41],
[255, 5, 153],
[6, 51, 255],
[235, 12, 255],
[160, 150, 20],
[0, 163, 255],
[140, 140, 140],
[250, 10, 15],
[20, 255, 0],
[31, 255, 0],
[255, 31, 0],
[255, 224, 0],
[153, 255, 0],
[0, 0, 255],
[255, 71, 0],
[0, 235, 255],
[0, 173, 255],
[31, 0, 255],
[11, 200, 200],
[255, 82, 0],
[0, 255, 245],
[0, 61, 255],
[0, 255, 112],
[0, 255, 133],
[255, 0, 0],
[255, 163, 0],
[255, 102, 0],
[194, 255, 0],
[0, 143, 255],
[51, 255, 0],
[0, 82, 255],
[0, 255, 41],
[0, 255, 173],
[10, 0, 255],
[173, 255, 0],
[0, 255, 153],
[255, 92, 0],
[255, 0, 255],
[255, 0, 245],
[255, 0, 102],
[255, 173, 0],
[255, 0, 20],
[255, 184, 184],
[0, 31, 255],
[0, 255, 61],
[0, 71, 255],
[255, 0, 204],
[0, 255, 194],
[0, 255, 82],
[0, 10, 255],
[0, 112, 255],
[51, 0, 255],
[0, 194, 255],
[0, 122, 255],
[0, 255, 163],
[255, 153, 0],
[0, 255, 10],
[255, 112, 0],
[143, 255, 0],
[82, 0, 255],
[163, 255, 0],
[255, 235, 0],
[8, 184, 170],
[133, 0, 255],
[0, 255, 92],
[184, 0, 255],
[255, 0, 31],
[0, 184, 255],
[0, 214, 255],
[255, 0, 112],
[92, 255, 0],
[0, 224, 255],
[112, 224, 255],
[70, 184, 160],
[163, 0, 255],
[153, 0, 255],
[71, 255, 0],
[255, 0, 163],
[255, 204, 0],
[255, 0, 143],
[0, 255, 235],
[133, 255, 0],
[255, 0, 235],
[245, 0, 255],
[255, 0, 122],
[255, 245, 0],
[10, 190, 212],
[214, 255, 0],
[0, 204, 255],
[20, 0, 255],
[255, 255, 0],
[0, 153, 255],
[0, 41, 255],
[0, 255, 204],
[41, 0, 255],
[41, 255, 0],
[173, 0, 255],
[0, 245, 255],
[71, 0, 255],
[122, 0, 255],
[0, 255, 184],
[0, 92, 255],
[184, 255, 0],
[0, 133, 255],
[255, 214, 0],
[25, 194, 194],
[102, 255, 0],
[92, 0, 255],
])
```
3. Run code:
```python
import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
checkpoint = "lllyasviel/control_v11p_sd15_seg"
image = load_image(
"https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/input.png"
)
prompt = "old house in stormy weather with rain and wind"
pixel_values = image_processor(image, return_tensors="pt").pixel_values
with torch.no_grad():
outputs = image_segmentor(pixel_values)
seg = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
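# seg is an (H, W) map of per-pixel semantic class ids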
color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) # height, width, 3
for label, color in enumerate(ada_palette):
color_seg[seg == label, :] = color
color_seg = color_seg.astype(np.uint8)
control_image = Image.fromarray(color_seg)
control_image.save("./images/control.png")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```
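As an aside, the per-label coloring loop can be collapsed into a single NumPy fancy-indexing step; this sketch assumes every class id returned by the segmentor is a valid row index into `ada_palette`:

```python
import numpy as np
from PIL import Image

# Equivalent to the label loop above: index the palette with the (H, W) id map.
color_seg = ada_palette[seg.numpy()].astype(np.uint8)
control_image = Image.fromarray(color_seg)
```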



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## Improvements in Segmentation 1.1:
- The COCO protocol is supported. The previous Segmentation 1.0 supported about 150 colors (the ADE20K palette); Segmentation 1.1 additionally supports 182 colors from COCO.
- Training resumed from Segmentation 1.0, so all previous inputs should still work.
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
frankjoshua/control_v11f1p_sd15_depth | frankjoshua | 2023-05-04T18:49:15Z | 106 | 1 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-07-27T23:38:26Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_depth
---
# Controlnet - v1.1 - *depth Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1p_sd15_depth.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **depth images**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Let's install `diffusers` and related packages:
```sh
$ pip install diffusers transformers accelerate
```
2. Run code:
```python
import torch
import os
from huggingface_hub import HfApi
from pathlib import Path
from diffusers.utils import load_image
from PIL import Image
import numpy as np
from transformers import pipeline
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11f1p_sd15_depth"
image = load_image(
    "https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/input.png"
)
prompt = "Stormtrooper's lecture in beautiful lecture hall"
depth_estimator = pipeline('depth-estimation')
image = depth_estimator(image)['depth']
image = np.array(image)
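# replicate the single-channel depth map across three channels, since the ControlNet expects an RGB control image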
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("./images/control.png")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## Improvements in Depth 1.1:
- The training dataset of the previous cnet 1.0 had several problems, including (1) a small group of greyscale human images duplicated thousands of times (!!), making the previous model somewhat likely to generate grayscale human images; (2) some images with low quality, severe blur, or significant JPEG artifacts; and (3) a small group of images with wrongly paired prompts caused by a mistake in our data-processing scripts. The new model fixes all problems of the training dataset and should be more reasonable in many cases.
- The new depth model is relatively unbiased. It is not trained on one specific type of depth map produced by one specific depth-estimation method, so it is not over-fitted to a single preprocessor. This means the model works better across different depth estimators, different preprocessor resolutions, and even real depth maps created by 3D engines.
- Some reasonable data augmentations are applied during training, such as random left-right flipping.
- The model is resumed from depth 1.0, and it should work well in all cases where depth 1.0 works well. If not, please open an issue with an image, and we will take a look at your case. Depth 1.1 works well in many failure cases of depth 1.0.
- If you use Midas depth (the "depth" option in the webui plugin) with a 384 preprocessor resolution, the difference between depth 1.0 and 1.1 should be minimal. However, if you try other preprocessor resolutions or other preprocessors (like leres and zoe), depth 1.1 is expected to be a bit better than 1.0.
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
lllyasviel/control_v11p_sd15_canny | lllyasviel | 2023-05-04T18:48:49Z | 14,572 | 40 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-04-14T19:24:43Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_canny
---
# Controlnet - v1.1 - *Canny Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/sd-controlnet-canny)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **Canny edges**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install [opencv](https://opencv.org/):
```sh
$ pip install opencv-contrib-python
```
2. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
3. Run code:
```python
import torch
import os
import numpy as np
import cv2
from PIL import Image
from diffusers.utils import load_image
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11p_sd15_canny"
image = load_image(
"https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/input.png"
)
image = np.array(image)

# Extract Canny edges and stack them into a 3-channel conditioning image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)

os.makedirs("./images", exist_ok=True)
control_image.save("./images/control.png")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(33)
image = pipe("a blue paradise bird in the jungle", num_inference_steps=20, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## Improvements in Canny 1.1:
- The training dataset of the previous cnet 1.0 had several problems, including (1) a small group of greyscale human images duplicated thousands of times (!!), making the previous model somewhat likely to generate grayscale human images; (2) some images with low quality, severe blur, or significant JPEG artifacts; and (3) a small group of images with wrongly paired prompts caused by a mistake in our data-processing scripts. The new model fixes all problems of the training dataset and should be more reasonable in many cases.
- Because the Canny model is one of the most important (perhaps the most frequently used) ControlNets, we allocated funds to train it on a machine with 8 Nvidia A100 80G GPUs at batch size 8×32=256 for 3 days, spending 72 hours × 30 USD/hour = 2160 USD (the 8× A100 80G machine costs 30 USD per hour). The model was resumed from Canny 1.0.
- Some reasonable data augmentations are applied during training, such as random left-right flipping.
- Although evaluating a ControlNet is difficult, we find Canny 1.1 a bit more robust, with slightly higher visual quality, than Canny 1.0.
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
ControlNet-1-1-preview/control_v11f1e_sd15_tile | ControlNet-1-1-preview | 2023-05-04T18:37:33Z | 89 | 11 | diffusers | [
"diffusers",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-05-04T17:56:56Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11f1e_sd15_tile
---
# Controlnet - v1.1 - *Tile Version*
**Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on a **tiled image**. Conceptually, it is similar to a super-resolution model, but its usage is not limited to that: it can also generate details at the same size as the input (condition) image.
**This model was contributed by [*takuma104*](https://huggingface.co/takuma104)**
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
1. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
2. Run code:
```python
import torch
from PIL import Image
from diffusers import ControlNetModel, DiffusionPipeline
from diffusers.utils import load_image
def resize_for_condition_image(input_image: Image.Image, resolution: int):
    # Scale the shorter side to `resolution` and round both dimensions to
    # multiples of 64, as required by the Stable Diffusion UNet
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img
controlnet = ControlNetModel.from_pretrained('lllyasviel/control_v11f1e_sd15_tile',
torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
custom_pipeline="stable_diffusion_controlnet_img2img",
controlnet=controlnet,
torch_dtype=torch.float16).to('cuda')
pipe.enable_xformers_memory_efficient_attention()
source_image = load_image('https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png')
condition_image = resize_for_condition_image(source_image, 1024)
image = pipe(prompt="best quality",
negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
image=condition_image,
controlnet_conditioning_image=condition_image,
width=condition_image.size[0],
height=condition_image.size[1],
strength=1.0,
generator=torch.manual_seed(0),
num_inference_steps=32,
).images[0]
image.save('output.png')
```


## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> Trained with image tiling | The base image for drawing details.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
elaunlu/bert-base-uncased-finetuned-cola | elaunlu | 2023-05-04T18:27:57Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-02T15:02:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.518818601771926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4610
- Matthews Correlation: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4985 | 1.0 | 535 | 0.4610 | 0.5188 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
skierdude/pizza_parallel | skierdude | 2023-05-04T18:21:48Z | 195 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-04T18:21:42Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pizza_parallel
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6222222447395325
---
# pizza_parallel
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### skiiing skis pizza

#### skiing skis parallel
 |
cansurav/bert-base-uncased-finetuned-cola-learning_rate-2e-05 | cansurav | 2023-05-04T18:06:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-03T18:58:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-2e-05
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5892439733711194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-2e-05
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4480
- Matthews Correlation: 0.5892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5052 | 1.0 | 535 | 0.5532 | 0.5030 |
| 0.3006 | 2.0 | 1070 | 0.4480 | 0.5892 |
| 0.1918 | 3.0 | 1605 | 0.7164 | 0.5340 |
| 0.138 | 4.0 | 2140 | 0.8575 | 0.5570 |
| 0.0866 | 5.0 | 2675 | 1.1483 | 0.5211 |
| 0.0652 | 6.0 | 3210 | 0.9938 | 0.5816 |
| 0.046 | 7.0 | 3745 | 1.1453 | 0.5739 |
| 0.0314 | 8.0 | 4280 | 1.3524 | 0.5573 |
| 0.0212 | 9.0 | 4815 | 1.4664 | 0.5573 |
| 0.0203 | 10.0 | 5350 | 1.4505 | 0.5679 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnshulRustogi/bert-base-multilingual-cased1 | AnshulRustogi | 2023-05-04T18:02:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-04T04:57:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-multilingual-cased1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1335 | 1.0 | 1547 | 1.0084 |
| 0.7574 | 2.0 | 3094 | 0.9890 |
| 0.5211 | 3.0 | 4641 | 1.0139 |
| 0.335 | 4.0 | 6188 | 1.1702 |
| 0.2465 | 5.0 | 7735 | 1.3467 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
blackeys/ppo-Huggy | blackeys | 2023-05-04T17:58:36Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-04T17:58:29Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
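
For example, with the course's default config layout (the config path and run id below are placeholders; adjust them to your setup):

```
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume
```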
### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: blackeys/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
huggingtweets/raspberryl0ver | huggingtweets | 2023-05-04T17:52:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-10-25T18:26:40Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1647023471749111810/G5s5jf4-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌞</div>
<div style="text-align: center; font-size: 14px;">@raspberryl0ver</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🌞.
| Data | 🌞 |
| --- | --- |
| Tweets downloaded | 2241 |
| Retweets | 457 |
| Short tweets | 291 |
| Tweets kept | 1493 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/btps6b16/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @raspberryl0ver's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dojoofh6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dojoofh6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/raspberryl0ver')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hnunez2007/root | hnunez2007 | 2023-05-04T17:24:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-04T17:24:47Z | ---
license: creativeml-openrail-m
---
|
DmitriyVasiliev/autotrain-mbart-rua-sent-dia-55462129227 | DmitriyVasiliev | 2023-05-04T17:10:53Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:DmitriyVasiliev/autotrain-data-mbart-rua-sent-dia",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-04T16:59:54Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- DmitriyVasiliev/autotrain-data-mbart-rua-sent-dia
co2_eq_emissions:
emissions: 4.360921475605774
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 55462129227
- CO2 Emissions (in grams): 4.3609
## Validation Metrics
- Loss: 1.615
- Rouge1: 5.000
- Rouge2: 1.917
- RougeL: 5.035
- RougeLsum: 4.980
- Gen Len: 32.397
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-sent-dia-55462129227
```
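Equivalently, from Python with `requests` — a minimal sketch mirroring the cURL call above (the endpoint URL is copied verbatim from this card):

```python
import requests

API_URL = "https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-sent-dia-55462129227"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

# Send the input text and print the JSON response from the Inference API.
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```
|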
sobabeats/civitai_mirror | sobabeats | 2023-05-04T16:52:54Z | 0 | 2 | null | [
"region:us"
] | null | 2023-05-04T16:52:20Z | ---
duplicated_from: anonderpling/civitai_mirror
---
|
Dzeniks/alberta_fact_checking | Dzeniks | 2023-05-04T16:48:11Z | 151 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"albert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-01-25T21:44:24Z | ---
license: mit
pipeline_tag: text-classification
---
# Alberta Fact Checking Model
The Alberta Fact Checking Model is a natural language processing model designed to classify claims as either supporting or refuting a given evidence. The model uses the ALBERT architecture and a tokenizer for text classification. It was trained on a dataset that primarily consisted of the FEVER, HOOVER, and FEVEROUS datasets, with a small sample of created data.
## Labels
The model returns two labels:
- 0 = Supports
- 1 = Refutes
## Input
The input to the model should be a claim accompanied by evidence.
## Usage
The Alberta Fact Checking Model can be used to classify claims based on the evidence provided.
```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification
# Load the tokenizer and model
tokenizer = AlbertTokenizer.from_pretrained('Dzeniks/alberta_fact_checking')
model = AlbertForSequenceClassification.from_pretrained('Dzeniks/alberta_fact_checking')
# Define the claim with evidence to classify
claim = "Albert Einstein work in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."
# Tokenize the claim with evidence
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")
model.eval()
with torch.no_grad():
    prediction = model(**x)

label = torch.argmax(prediction[0]).item()
print(f"Label: {label}")
```
## Disclaimer
While the alberta_fact_checking Model has been trained on a relatively large dataset and can provide accurate results, it may not always provide correct results. Users should always exercise caution when making decisions based on the output of any machine learning model.
|
ysige/xlm-roberta-base-finetuned-panx-all | ysige | 2023-05-04T16:46:17Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-04T14:36:31Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1734
- F1: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3058 | 1.0 | 835 | 0.1969 | 0.7993 |
| 0.1566 | 2.0 | 1670 | 0.1772 | 0.8387 |
| 0.1034 | 3.0 | 2505 | 0.1734 | 0.8523 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Dzeniks/roberta-nei-fact-check | Dzeniks | 2023-05-04T16:31:26Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-02T19:37:22Z | ---
license: mit
pipeline_tag: text-classification
---
# roberta-nei-fact-check
This is a machine learning model trained for text classification using the Roberta architecture and a tokenizer. The purpose of this model is to identify whether a given claim with evidence contains enough information to make a fact-checking decision.
## Model Details
The model was trained using the Adam optimizer with a learning rate of 2e-4, an epsilon of 1e-8, and a weight decay of 2e-8. The training data consisted mainly of the Fever and Hover datasets, with a small sample of created data. The model returns two labels:
- 0: Enough information
- 1: Not enough information
The model uses a tokenizer for text classification and requires input in the form of a claim with evidence. This means that the input should be a text string containing both the claim and the evidence to produce the best results.
## Usage
To use this model, you can load it into your Python code using a library such as PyTorch or TensorFlow. You can then pass in a claim with evidence string and the model will return a label indicating whether there is enough information in the claim with evidence for fact-checking.
Here is an example of how to use the model in PyTorch:
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# Load the tokenizer and model
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/roberta-nei-fact-check')
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/roberta-nei-fact-check')
# Define the claim with evidence to classify
claim = "Albert Einstein work in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."
# Tokenize the claim with evidence
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")
model.eval()
with torch.no_grad():
    prediction = model(**x)

label = torch.argmax(prediction[0]).item()
print(f"Label: {label}")
```
In this example, the `claim` and `evidence` variables contain the claim and its supporting evidence to classify. They are tokenized together and converted to tensors. The model then classifies the pair and the resulting label is printed to the console. |
DmitriyVasiliev/autotrain-mbart-rua-par-dia-55449129217 | DmitriyVasiliev | 2023-05-04T16:26:44Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:DmitriyVasiliev/autotrain-data-mbart-rua-par-dia",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-04T16:15:34Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- DmitriyVasiliev/autotrain-data-mbart-rua-par-dia
co2_eq_emissions:
emissions: 0.02249039216830347
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 55449129217
- CO2 Emissions (in grams): 0.0225
## Validation Metrics
- Loss: 1.611
- Rouge1: 4.813
- Rouge2: 1.824
- RougeL: 4.785
- RougeLsum: 4.785
- Gen Len: 33.108
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-par-dia-55449129217
``` |
meltemtatli/bert-base-uncased-finetuned-cola-part2 | meltemtatli | 2023-05-04T16:23:45Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-04T16:00:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-part2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5726999708077573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-part2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5136
- Matthews Correlation: 0.5727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.966102391464137e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4343 | 0.5343 |
| 0.4076 | 2.0 | 536 | 0.4104 | 0.5934 |
| 0.4076 | 3.0 | 804 | 0.5136 | 0.5727 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aisquared/chopt-research-125m | aisquared | 2023-05-04T16:20:58Z | 143 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"dataset:tatsu-lab/alpaca",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-19T14:30:16Z | ---
license: other
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
---
# Model Card for `chopt-research-125m`
<!-- Provide a quick summary of what the model is/does. -->
AI Squared's `chopt-research-125m` is a large language
model which is derived from Meta AI's Open Pre-trained Transformer language models and fine-tuned on a single GPU on a corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) to help it exhibit chat-based capabilities.
The ChOPT family of models from AI Squared are licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
While `chopt-research-125m` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model that is trained so cheaply is important to showcase, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** Other
- **Finetuned from model:** OPT
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
**`chopt-research-125m` is not a state-of-the-art language model.** `chopt-research-125m` is an experimental technology and is not designed for use in any
environment other than for research purposes. Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include,
but are not limited to: factual inaccuracies, biases, offensive responses, toxicity, and hallucinations.
Just as with any other LLM, we advise users of this technology to exercise good judgment when applying this technology.
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
From your terminal, run:
```bash
pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/aisquared/chopt-research-125m/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
from transformers import pipeline
import torch
generate_text = pipeline(model="aisquared/chopt-research-125m", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Who was George Washington?")
print(res)
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/aisquared/chopt-research-125m/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("aisquared/chopt-research-125m", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("aisquared/chopt-research-125m", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### Model Performance Metrics
We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the ChOPT family.
Model results are sorted by mean score, ascending, to provide an ordering. These metrics serve to show that none of the ChOPT models are
state of the art, but rather that chat-like behaviors in LLMs can be trained almost independently of model size.
| Model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq |
|:--------------------|-------------:|-----------:|-------------:|------------:|----------------:|---------:|---------:|
| chopt-125m | 0.178 | 0.443182 | 0.501973 | 0.294165 | 0.197099 | 0.630577 | 0.476758 |
| chopt-research-125m | 0.17 | 0.436027 | 0.503552 | 0.294762 | 0.205631 | 0.62568 | 0.48685 |
| opt-125m | 0.166 | 0.435606 | 0.501973 | 0.291775 | 0.190273 | 0.6284 | 0.554434 |
| chopt-350m | 0.178 | 0.450758 | 0.508287 | 0.325334 | 0.21843 | 0.650707 | 0.559633 |
| opt_350m | 0.176 | 0.441077 | 0.52644 | 0.320056 | 0.207338 | 0.645267 | 0.57737 |
| chopt-research-350m | 0.172 | 0.462542 | 0.514601 | 0.327524 | 0.235495 | 0.643634 | 0.589908 |
| opt-1.3b | 0.234 | 0.569865 | 0.596685 | 0.414957 | 0.232935 | 0.718172 | 0.577676 |
| chopt-research-1_3b | 0.232 | 0.564815 | 0.59116 | 0.424716 | 0.276451 | 0.713275 | 0.634557 |
| chopt-1_3b | 0.236 | 0.569444 | 0.584057 | 0.42621 | 0.268771 | 0.723069 | 0.658104 |
| opt-2.7b | 0.25 | 0.608165 | 0.608524 | 0.458176 | 0.267918 | 0.738303 | 0.603058 |
| chopt-2_7b | 0.276 | 0.616582 | 0.601421 | 0.472615 | 0.288396 | 0.75136 | 0.552294 |
| chopt-research-2_7b | 0.262 | 0.610269 | 0.625099 | 0.458176 | 0.295222 | 0.742111 | 0.636697 | |
aisquared/chopt-research-2_7b | aisquared | 2023-05-04T16:20:34Z | 168 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"dataset:tatsu-lab/alpaca",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-24T19:41:29Z | ---
license: other
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
---
# Model Card for `chopt-research-2_7b`
<!-- Provide a quick summary of what the model is/does. -->
AI Squared's `chopt-research-2_7b` is a large language
model which is derived from Meta AI's Open Pre-trained Transformer language models and fine-tuned on a single GPU on a corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) to help it exhibit chat-based capabilities.
The ChOPT family of models from AI Squared are licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
While `chopt-research-2_7b` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model that is trained so cheaply is important to showcase, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** Other
- **Finetuned from model:** OPT
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
**`chopt-research-2_7b` is not a state-of-the-art language model.** `chopt-research-2_7b` is an experimental technology and is not designed for use in any
environment other than for research purposes. Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include,
but are not limited to: factual inaccuracies, biases, offensive responses, toxicity, and hallucinations.
Just as with any other LLM, we advise users of this technology to exercise good judgment when applying this technology.
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
From your terminal, run:
```bash
pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/aisquared/chopt-research-2_7b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
from transformers import pipeline
import torch
generate_text = pipeline(model="aisquared/chopt-research-2_7b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Who was George Washington?")
print(res)
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/aisquared/chopt-research-2_7b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("aisquared/chopt-research-2_7b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("aisquared/chopt-research-2_7b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### Model Performance Metrics
We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the ChOPT family.
Model results are sorted by mean score, ascending, to provide an ordering. These metrics serve to show that none of the ChOPT models are
state of the art, but rather that chat-like behaviors in LLMs can be trained almost independently of model size.
| Model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq |
|:--------------------|-------------:|-----------:|-------------:|------------:|----------------:|---------:|---------:|
| chopt-125m | 0.178 | 0.443182 | 0.501973 | 0.294165 | 0.197099 | 0.630577 | 0.476758 |
| chopt-research-125m | 0.17 | 0.436027 | 0.503552 | 0.294762 | 0.205631 | 0.62568 | 0.48685 |
| opt-125m | 0.166 | 0.435606 | 0.501973 | 0.291775 | 0.190273 | 0.6284 | 0.554434 |
| chopt-350m | 0.178 | 0.450758 | 0.508287 | 0.325334 | 0.21843 | 0.650707 | 0.559633 |
| opt_350m | 0.176 | 0.441077 | 0.52644 | 0.320056 | 0.207338 | 0.645267 | 0.57737 |
| chopt-research-350m | 0.172 | 0.462542 | 0.514601 | 0.327524 | 0.235495 | 0.643634 | 0.589908 |
| opt-1.3b | 0.234 | 0.569865 | 0.596685 | 0.414957 | 0.232935 | 0.718172 | 0.577676 |
| chopt-research-1_3b | 0.232 | 0.564815 | 0.59116 | 0.424716 | 0.276451 | 0.713275 | 0.634557 |
| chopt-1_3b | 0.236 | 0.569444 | 0.584057 | 0.42621 | 0.268771 | 0.723069 | 0.658104 |
| opt-2.7b | 0.25 | 0.608165 | 0.608524 | 0.458176 | 0.267918 | 0.738303 | 0.603058 |
| chopt-2_7b | 0.276 | 0.616582 | 0.601421 | 0.472615 | 0.288396 | 0.75136 | 0.552294 |
| chopt-research-2_7b | 0.262 | 0.610269 | 0.625099 | 0.458176 | 0.295222 | 0.742111 | 0.636697 | |
Dzeniks/justification-analyst | Dzeniks | 2023-05-04T16:16:16Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-24T15:17:14Z | ---
license: mit
pipeline_tag: text-classification
---
## Roberta for Justification analyst
This model is a fine-tuned version of the Roberta architecture trained specifically for sequence classification. Fine-tuning was done in the PyTorch deep learning framework with the Adagrad optimizer and specific hyperparameters (learning rate 2e-4, epsilon 1e-8).
---
## Example Usage
To use the model, first load it in PyTorch:
```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer
# Load the fine-tuned model
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/justification-analyst')
# Load the tokenizer
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/justification-analyst')
# Tokenize the input sequence
input_text = "This is a sample input sequence"
inputs = tokenizer.encode_plus(input_text, return_tensors="pt")

# Use the model to make a prediction
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
prediction = torch.argmax(outputs[0], dim=1).item()
```
## Classification Labels
The model was trained on a dataset consisting of claims and evidence, where the goal was to classify each claim as either supporting, refuting, or not having enough information to make a decision. The labels used for this task are as follows:
- Label 0: Supports
- Label 1: Refutes
- Label 2: Not enough information
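The integer prediction from the usage example above can be mapped back to these labels with a small helper (illustrative only, not part of the original card):

```python
# Map the model's integer output to a human-readable label.
id2label = {0: "Supports", 1: "Refutes", 2: "Not enough information"}
print(id2label[prediction])
```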
|
HilbertS/Reinforce-PixelCopter5 | HilbertS | 2023-05-04T16:15:10Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T13:20:55Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.10 +/- 26.09
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Python-proje/mymodel | Python-proje | 2023-05-04T16:05:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-03T20:59:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mymodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mymodel
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3705
- Rouge1: 1.762
- Rouge2: 1.4938
- Rougel: 1.7366
- Rougelsum: 1.7385
- Gen Len: 19.7335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.446 | 1.0 | 12500 | 1.3705 | 1.762 | 1.4938 | 1.7366 | 1.7385 | 19.7335 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tekkithorse/GPT-J-6B-PNY-GGML | tekkithorse | 2023-05-04T15:58:07Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-04T00:46:59Z | GGML converted GPT-PNY model from https://huggingface.co/Amo/GPT-J-6B-PNY |
meghanaanil/bert-base-uncased-retrained-squad | meghanaanil | 2023-05-04T15:43:03Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-03T15:13:51Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-retrained-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-retrained-squad
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 127 | 2.8619 |
| No log | 2.0 | 254 | 2.8528 |
| No log | 3.0 | 381 | 3.0415 |
| 0.671 | 4.0 | 508 | 3.3311 |
| 0.671 | 5.0 | 635 | 3.3139 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
joelorellana/autotrain-rwlv_summarizer-55443129210 | joelorellana | 2023-05-04T15:38:19Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:joel89/autotrain-data-rwlv_summarizer",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-04T15:34:31Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- joel89/autotrain-data-rwlv_summarizer
co2_eq_emissions:
emissions: 0.007272812398046086
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 55443129210
- CO2 Emissions (in grams): 0.0073
## Validation Metrics
- Loss: 1.625
- Rouge1: 47.446
- Rouge2: 25.858
- RougeL: 43.937
- RougeLsum: 43.961
- Gen Len: 15.395
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/joel89/autotrain-rwlv_summarizer-55443129210
``` |
muhammadfraz/policy_gradient-0 | muhammadfraz | 2023-05-04T15:34:43Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T15:34:32Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: policy_gradient-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DmitriyVasiliev/autotrain-mbart-rua-par-sent-dia-55433129191 | DmitriyVasiliev | 2023-05-04T15:11:16Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:DmitriyVasiliev/autotrain-data-mbart-rua-par-sent-dia",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-04T15:00:20Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- DmitriyVasiliev/autotrain-data-mbart-rua-par-sent-dia
co2_eq_emissions:
emissions: 4.268468872564851
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 55433129191
- CO2 Emissions (in grams): 4.2685
## Validation Metrics
- Loss: 1.625
- Rouge1: 5.126
- Rouge2: 1.732
- RougeL: 5.111
- RougeLsum: 5.198
- Gen Len: 33.335
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-par-sent-dia-55433129191
``` |
Horsebird/MysticTurducken | Horsebird | 2023-05-04T14:59:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-03T11:53:44Z | ---
license: creativeml-openrail-m
---
|
jinlee74/ddpm-butterflies-128 | jinlee74 | 2023-05-04T14:52:01Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-05-04T13:37:07Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: /content/drive/MyDrive/image_and_text
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/drive/MyDrive/image_and_text` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
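A minimal sketch of such a snippet, assuming the checkpoint is loadable as a standard `DDPMPipeline` and hosted under this card's repo id (both assumptions):

```python
from diffusers import DDPMPipeline

# Load the pipeline and draw one sample with the full DDPM sampling loop.
pipeline = DDPMPipeline.from_pretrained("jinlee74/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("sample.png")
```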
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tian7/ddpm-butterflies-128/tensorboard?#scalars)
|
nastorian/finetuning-sentiment-model-3000-samples | nastorian | 2023-05-04T14:51:02Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-04T13:36:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6886
- eval_accuracy: 0.5
- eval_f1: 0.0
- eval_runtime: 292.5232
- eval_samples_per_second: 1.026
- eval_steps_per_second: 0.065
- epoch: 0.04
- step: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dgalik/distilbert-finetuning-hate-speech-score-3000-samples-dropout005-epochs-10 | dgalik | 2023-05-04T14:47:52Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-05-04T10:20:09Z | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuning-hate-speech-score-3000-samples-dropout005-epochs-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuning-hate-speech-score-3000-samples-dropout005-epochs-10
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5491
- Mse: 1.5491
- Rmse: 1.2446
- Mae: 0.8043
- R2: 0.7225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tooucci/Pixel_Copter | tooucci | 2023-05-04T14:36:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T14:33:30Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixel_Copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
soumi-maiti/libri3mix_eend_ss | soumi-maiti | 2023-05-04T14:28:57Z | 1 | 0 | espnet | [
"espnet",
"audio",
"diarization",
"en",
"dataset:librimix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-05-04T13:55:47Z | ---
tags:
- espnet
- audio
- diarization
language: en
datasets:
- librimix
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `soumi-maiti/libri3mix_eend_ss`
This model was trained by soumimaiti using librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout d837c97c88f13ffe655a30bcff93d814f212b225
pip install -e .
cd egs2/librimix/enh_diar1_2
./run.sh --skip_data_prep false --skip_train true --download_model soumi-maiti/libri3mix_eend_ss
```
## DIAR config
<details><summary>expand</summary>
```
config: conf/tuning/train_diar_enh_convtasnet_2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_enh_train_diar_enh_convtasnet_2_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 50
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss_enh
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 4
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_enh_stats_8k/train/speech_shape
- exp/diar_enh_stats_8k/train/text_shape
- exp/diar_enh_stats_8k/train/speech_ref1_shape
- exp/diar_enh_stats_8k/train/speech_ref2_shape
- exp/diar_enh_stats_8k/train/speech_ref3_shape
- exp/diar_enh_stats_8k/train/noise_ref1_shape
valid_shape_file:
- exp/diar_enh_stats_8k/valid/speech_shape
- exp/diar_enh_stats_8k/valid/text_shape
- exp/diar_enh_stats_8k/valid/speech_ref1_shape
- exp/diar_enh_stats_8k/valid/speech_ref2_shape
- exp/diar_enh_stats_8k/valid/speech_ref3_shape
- exp/diar_enh_stats_8k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 800
- 80000
- 80000
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 24000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/espnet_rttm
- text
- rttm
- - dump/raw/train/spk1.scp
- speech_ref1
- sound
- - dump/raw/train/spk2.scp
- speech_ref2
- sound
- - dump/raw/train/spk3.scp
- speech_ref3
- sound
- - dump/raw/train/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/espnet_rttm
- text
- rttm
- - dump/raw/dev/spk1.scp
- speech_ref1
- sound
- - dump/raw/dev/spk2.scp
- speech_ref2
- sound
- - dump/raw/dev/spk3.scp
- speech_ref3
- sound
- - dump/raw/dev/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-07
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
token_list: null
src_token_list: null
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
enh_criterions:
- name: si_snr
conf:
eps: 1.0e-07
wrapper: pit
wrapper_conf:
weight: 1.0
independent_perm: true
diar_num_spk: 3
diar_input_size: 128
enh_model_conf:
loss_type: si_snr
asr_model_conf:
ctc_weight: 0.5
interctc_weight: 0.0
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
st_model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
diar_model_conf:
diar_weight: 0.2
attractor_weight: 0.2
subtask_series:
- enh
- diar
model_conf:
calc_enh_loss: true
bypass_enh_prob: 0
use_preprocessor: true
token_type: bpe
bpemodel: null
src_token_type: bpe
src_bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
enh_encoder: conv
enh_encoder_conf:
channel: 512
kernel_size: 16
stride: 8
enh_separator: tcn_nomask
enh_separator_conf:
layer: 8
stack: 3
bottleneck_dim: 128
hidden_dim: 512
kernel: 3
causal: false
norm_type: gLN
enh_decoder: conv
enh_decoder_conf:
channel: 512
kernel_size: 16
stride: 8
enh_mask_module: multi_mask
enh_mask_module_conf:
max_num_spk: 3
mask_nonlinear: relu
bottleneck_dim: 128
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
asr_preencoder: null
asr_preencoder_conf: {}
asr_encoder: rnn
asr_encoder_conf: {}
asr_postencoder: null
asr_postencoder_conf: {}
asr_decoder: rnn
asr_decoder_conf: {}
st_preencoder: null
st_preencoder_conf: {}
st_encoder: rnn
st_encoder_conf: {}
st_postencoder: null
st_postencoder_conf: {}
st_decoder: rnn
st_decoder_conf: {}
st_extra_asr_decoder: rnn
st_extra_asr_decoder_conf: {}
st_extra_mt_decoder: rnn
st_extra_mt_decoder_conf: {}
diar_frontend: null
diar_frontend_conf: {}
diar_specaug: null
diar_specaug_conf: {}
diar_normalize: utterance_mvn
diar_normalize_conf: {}
diar_encoder: transformer
diar_encoder_conf:
input_layer: conv2d8
num_blocks: 4
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.1
diar_decoder: linear
diar_decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf:
win_length: 256
hop_length: 64
diar_attractor: rnn
diar_attractor_conf:
unit: 256
layer: 1
dropout: 0.0
attractor_grad: true
required:
- output_dir
version: '202205'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
brusooo/flowers_classification | brusooo | 2023-05-04T14:28:38Z | 4 | 0 | keras | [
"keras",
"tf-keras",
"image-classification",
"region:us"
] | image-classification | 2023-05-04T06:46:02Z | ---
library_name: keras
inference: false
tags:
- image-classification
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
zerohell/rag-bart-bleu_error | zerohell | 2023-05-04T14:24:27Z | 117 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"text-generation-inference",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-04T13:34:31Z | ---
language:
- zh
metrics:
- bleu
tags:
- text-generation-inference
---
## Model Introduction
A pretrained model for question answering, provided as a bart-base-zh model.
## Model Details
The model was trained on dureader-zhidao.
## Quick Start
Input: the question followed by the context ("question. context.").
Output: the answer. A minimal usage sketch is shown below.
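This sketch follows the stated input/output convention and assumes the checkpoint loads with the standard `transformers` classes (the tokenizer class, example text, and generation settings are assumptions, not from the original card):

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("zerohell/rag-bart-bleu_error")
model = BartForConditionalGeneration.from_pretrained("zerohell/rag-bart-bleu_error")

# Input format per this card: the question, then the context.
text = "什么是机器学习。机器学习是人工智能的一个分支。"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # the answer
```
|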
JoBuettner/Reinforce-Pixelcopter-v2 | JoBuettner | 2023-05-04T14:22:15Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T14:22:10Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.70 +/- 27.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JoBuettner/Reinforce-Pixelcopter | JoBuettner | 2023-05-04T14:10:22Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-03T14:30:31Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.50 +/- 26.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ysige/xlm-roberta-base-finetuned-panx-it | ysige | 2023-05-04T14:07:57Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-04T13:37:55Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8138775510204082
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2523
- F1: 0.8139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7927 | 1.0 | 70 | 0.3253 | 0.7486 |
| 0.2907 | 2.0 | 140 | 0.2704 | 0.7960 |
| 0.1883 | 3.0 | 210 | 0.2523 | 0.8139 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Isente/distilledbert-finetuned-squad-assignment | Isente | 2023-05-04T13:59:56Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-04T12:20:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilledbert-finetuned-squad-assignment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilledbert-finetuned-squad-assignment
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
dpuccine/bert-finetuned-ner | dpuccine | 2023-05-04T13:53:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-09T13:06:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9323407775020678
- name: Recall
type: recall
value: 0.9485021878155503
- name: F1
type: f1
value: 0.9403520480520563
- name: Accuracy
type: accuracy
value: 0.9859304173779949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0624
- Precision: 0.9323
- Recall: 0.9485
- F1: 0.9404
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.087 | 1.0 | 1756 | 0.0696 | 0.9183 | 0.9406 | 0.9293 | 0.9832 |
| 0.0378 | 2.0 | 3512 | 0.0564 | 0.9355 | 0.9502 | 0.9428 | 0.9863 |
| 0.0194 | 3.0 | 5268 | 0.0624 | 0.9323 | 0.9485 | 0.9404 | 0.9859 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
David1785/finetuned-bert-mrpc | David1785 | 2023-05-04T13:45:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-04T10:44:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8877551020408163
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4588
- Accuracy: 0.8382
- F1: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.579 | 1.0 | 230 | 0.4858 | 0.7745 | 0.8521 |
| 0.4163 | 2.0 | 460 | 0.4477 | 0.8088 | 0.8721 |
| 0.2533 | 3.0 | 690 | 0.4588 | 0.8382 | 0.8878 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SD1945/face-r1n0sdai-3.0 | SD1945 | 2023-05-04T13:44:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-03T16:12:15Z | ---
license: creativeml-openrail-m
---
|
ysige/xlm-roberta-base-finetuned-panx-fr | ysige | 2023-05-04T13:36:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-04T12:51:40Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8354006034193765
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2851
- F1: 0.8354
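A minimal usage sketch (added here, assuming the standard `transformers` pipeline API):
```python
from transformers import pipeline
# aggregation_strategy groups sub-word pieces into whole named entities
ner = pipeline("token-classification",
               model="ysige/xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")
print(ner("Emmanuel Macron a visité Marseille avec une délégation de l'ONU."))
```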
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6033 | 1.0 | 191 | 0.3351 | 0.7857 |
| 0.2614 | 2.0 | 382 | 0.2998 | 0.8198 |
| 0.1756 | 3.0 | 573 | 0.2851 | 0.8354 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cpu
- Datasets 2.11.0
- Tokenizers 0.13.3
|
5w4n/super-lora | 5w4n | 2023-05-04T13:36:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-04T13:34:26Z | ---
license: creativeml-openrail-m
---
|
YuYu17/Suiko | YuYu17 | 2023-05-04T13:20:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-10T07:34:18Z | ---
license: creativeml-openrail-m
---
|
helenai/Alireza1044-albert-base-v2-stsb-ov | helenai | 2023-05-04T13:15:48Z | 5 | 0 | transformers | [
"transformers",
"openvino",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-04T13:15:34Z | ---
language:
- en
tags:
- openvino
---
# Alireza1044/albert-base-v2-stsb
This is the [Alireza1044/albert-base-v2-stsb](https://huggingface.co/Alireza1044/albert-base-v2-stsb) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/Alireza1044-albert-base-v2-stsb-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForSequenceClassification.from_pretrained(model_id)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = pipe("I like you. I love you")
print(result)
```
|
Blaxzter/LaBSE-sentence-embeddings | Blaxzter | 2023-05-04T13:14:17Z | 110 | 19 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"bo",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"he",
"hi",
"hmn",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"or",
"pa",
"pl",
"pt",
"ro",
"ru",
"rw",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tk",
"tl",
"tr",
"tt",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:CommonCrawl",
"dataset:Wikipedia",
"arxiv:2007.01852",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-04-30T12:58:54Z | ---
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- or
- pa
- pl
- pt
- ro
- ru
- rw
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
tags:
- bert
- sentence_embedding
- multilingual
- google
- sentence-similarity
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
This is a copy of setu4993/LaBSE that returns the sentence embeddings (`pooler_output`) and implements caching.
Original Model Card:
# LaBSE
## Model description
Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model trained for sentence embedding for 109 languages. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LaBSE).
- Paper: [arXiv](https://arxiv.org/abs/2007.01852).
- Original model: [TensorFlow Hub](https://tfhub.dev/google/LaBSE/2).
- Blog post: [Google AI Blog](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html).
- Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt).
This is migrated from the v2 model on the TF Hub, which uses dict-based input. The embeddings produced by both versions of the model are [equivalent](https://github.com/setu4993/convert-labse-tf-pt/blob/ec3a019159a54ed6493181a64486c2808c01f216/tests/test_conversion.py#L31).
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LaBSE")
model = BertModel.from_pretrained("setu4993/LaBSE")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852).
### BibTeX entry and citation info
```bibtex
@misc{feng2020languageagnostic,
title={Language-agnostic BERT Sentence Embedding},
author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
year={2020},
eprint={2007.01852},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
AliiaR/DialoGPT-medium-empathetic-dialogues | AliiaR | 2023-05-04T13:12:04Z | 149 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"psychology",
"dialogues",
"empathy",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-04T10:51:55Z | ---
language:
- en
pipeline_tag: conversational
tags:
- psychology
- dialogues
- empathy
- gpt2
---
## Model description
DialoGPT finetuned on empathetic dialogues
## Training data
The model was trained on a large text corpus, including emotionally engaging data such as the "Facebook Empathetic Dialogues" dataset: 25k conversations grounded in emotional situations, created to facilitate training and evaluating dialogue systems.
You can find the dataset [here](https://www.kaggle.com/datasets/atharvjairath/empathetic-dialogues-facebook-ai).
### How to use
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
>>> model = AutoModelForCausalLM.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
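>>> # Added generation sketch (not from the original card); the single-turn format is an assumption
>>> inputs = tokenizer("I feel really sad today." + tokenizer.eos_token, return_tensors="pt")
>>> reply_ids = model.generate(inputs.input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
>>> print(tokenizer.decode(reply_ids[0, inputs.input_ids.shape[-1]:], skip_special_tokens=True))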
``` |
HilbertS/Reinforce-CartPole1 | HilbertS | 2023-05-04T13:11:30Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T13:00:07Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
climatebert/distilroberta-base-climate-d-s | climatebert | 2023-05-04T13:05:02Z | 135 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"climate",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
tags:
- climate
---
# Model Card for distilroberta-base-climate-d-s
## Model Description
This is the ClimateBERT language model based on the DIV-SELECT and SIM-SELECT sample selection strategy.
*Note: We generally recommend choosing the [distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model over this language model (unless you have good reasons not to).*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
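A minimal fill-mask sketch (added for illustration; `<mask>` is the RoBERTa-style mask token used by this model):
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-d-s")
for pred in fill_mask("Rising sea levels are a major consequence of climate <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```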
## Climate performance model card
| distilroberta-base-climate-d-s | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building-block tool, following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
``` |
DataVare/datavare-nsf-to-pst-converter | DataVare | 2023-05-04T12:25:25Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-04T12:15:49Z | Users may easily use this capability on their PC using the DataVare NSF to PST converter software, which is a very effective tool for them. The utility operates without any data loss on Windows platforms. With the NSF to PST Converter, the user can convert their file with ease. By converting 10 objects, the app allows users to test it out for free. Those with and without technical expertise can both utilize this program. The function is accessible in any version of Outlook. It may be used with MS Outlook versions 2003, 2007, 2010, 2013, 2016, and 2019, among others. Without using Outlook, you can bulk import NSF files into the PST file format. If they are having problems, it can be used by both novice and experienced users. Additionally, users are able to obtain the software and use it with any version of Windows.
Read more :- https://www.datavare.com/software/nsf-to-pst-converter-expert.html |
steveabecassis/mt5-small-finetuned-xsum | steveabecassis | 2023-05-04T12:17:44Z | 3 | 0 | transformers | [
"transformers",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-01-04T10:48:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-xsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5196
- Rouge1: 0.3378
- Rouge2: 0.275
- Rougel: 0.3372
- Rougelsum: 0.3367
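A minimal summarization sketch (added for illustration; the generation parameters are assumptions):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="steveabecassis/mt5-small-finetuned-xsum")
article = ("The city council approved a new budget on Tuesday that increases funding "
           "for public transport and cuts administrative costs across departments.")
print(summarizer(article, max_length=48, min_length=8)[0]["summary_text"])
```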
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 21 | 11.8500 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 42 | 11.1279 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 63 | 10.0382 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.0 | 84 | 9.1579 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 5.0 | 105 | 8.6827 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 6.0 | 126 | 7.3651 | 0.0028 | 0.0016 | 0.0028 | 0.0028 |
| No log | 7.0 | 147 | 6.4400 | 0.019 | 0.0129 | 0.0191 | 0.0197 |
| No log | 8.0 | 168 | 5.2631 | 0.0272 | 0.0229 | 0.0288 | 0.0288 |
| No log | 9.0 | 189 | 4.5832 | 0.1095 | 0.0688 | 0.1053 | 0.1051 |
| No log | 10.0 | 210 | 4.2350 | 0.1263 | 0.0824 | 0.1216 | 0.1235 |
| No log | 11.0 | 231 | 3.9249 | 0.1541 | 0.1051 | 0.1513 | 0.1532 |
| No log | 12.0 | 252 | 3.5469 | 0.1701 | 0.1156 | 0.1665 | 0.1683 |
| No log | 13.0 | 273 | 3.3689 | 0.2672 | 0.2095 | 0.2667 | 0.2659 |
| No log | 14.0 | 294 | 3.1733 | 0.3102 | 0.2483 | 0.3103 | 0.3104 |
| No log | 15.0 | 315 | 3.0810 | 0.3073 | 0.2457 | 0.3074 | 0.3071 |
| No log | 16.0 | 336 | 3.0005 | 0.3071 | 0.2451 | 0.3075 | 0.3069 |
| No log | 17.0 | 357 | 2.9663 | 0.3015 | 0.2364 | 0.3022 | 0.3018 |
| No log | 18.0 | 378 | 2.8718 | 0.3195 | 0.2583 | 0.3197 | 0.3187 |
| No log | 19.0 | 399 | 2.8061 | 0.3159 | 0.2554 | 0.316 | 0.3143 |
| No log | 20.0 | 420 | 2.7009 | 0.3351 | 0.273 | 0.3338 | 0.3341 |
| No log | 21.0 | 441 | 2.6307 | 0.3384 | 0.2763 | 0.3382 | 0.3381 |
| No log | 22.0 | 462 | 2.6006 | 0.3364 | 0.2743 | 0.3362 | 0.3357 |
| No log | 23.0 | 483 | 2.5819 | 0.3334 | 0.2712 | 0.3331 | 0.3333 |
| 13.1102 | 24.0 | 504 | 2.5606 | 0.3309 | 0.269 | 0.3302 | 0.3305 |
| 13.1102 | 25.0 | 525 | 2.5458 | 0.338 | 0.2744 | 0.3369 | 0.3373 |
| 13.1102 | 26.0 | 546 | 2.5366 | 0.3361 | 0.2715 | 0.3352 | 0.3352 |
| 13.1102 | 27.0 | 567 | 2.5301 | 0.3413 | 0.2787 | 0.3408 | 0.3406 |
| 13.1102 | 28.0 | 588 | 2.5236 | 0.341 | 0.2783 | 0.3402 | 0.3401 |
| 13.1102 | 29.0 | 609 | 2.5206 | 0.3405 | 0.2779 | 0.3399 | 0.3397 |
| 13.1102 | 30.0 | 630 | 2.5196 | 0.3378 | 0.275 | 0.3372 | 0.3367 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
steveabecassis/t5-base-finetuned-xsum | steveabecassis | 2023-05-04T12:17:44Z | 3 | 0 | transformers | [
"transformers",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-29T08:29:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-xsum
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 21 | 0.3747 | 0.7975 | 0.7421 | 0.7924 | 0.7932 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
steveabecassis/huji_MediQA | steveabecassis | 2023-05-04T12:17:44Z | 3 | 0 | transformers | [
"transformers",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-02-12T19:59:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: huji_MediQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huji_MediQA
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6868
- Rouge1: 0.1617
- Rouge2: 0.065
- Rougel: 0.1598
- Rougelsum: 0.1617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 1 | 2.6868 | 0.1617 | 0.065 | 0.1598 | 0.1617 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
abhijitgayen/cogo-flan-t5 | abhijitgayen | 2023-05-04T12:10:55Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-04T11:28:35Z |
# Cogo Flan-t5 Model
This is a fine-tuned **FLAN-T5 model**, trained on user-admin chat messages.
[Dataset](https://huggingface.co/datasets/abhijitgayen/cogo_chat)
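A minimal inference sketch (added here; the prompt format the fine-tuned model expects is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("abhijitgayen/cogo-flan-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("abhijitgayen/cogo-flan-t5")
inputs = tokenizer("user: Hi, I need help tracking my shipment.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```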
|
vovikdrg/ppo-lunar-lander | vovikdrg | 2023-05-04T12:10:53Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-02T09:36:05Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -98.64 +/- 69.12
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
ilidiolopes/my_awesome_qa_model | ilidiolopes | 2023-05-04T11:47:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-04T11:19:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
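A minimal question-answering sketch (added for illustration):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="ilidiolopes/my_awesome_qa_model")
result = qa(question="What was the model fine-tuned on?",
            context="This model is a fine-tuned version of DistilBERT on the SQuAD dataset.")
print(result["answer"], result["score"])
```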
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VinayakMane47/finetuned-en-to-mar | VinayakMane47 | 2023-05-04T10:59:22Z | 62 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-04T10:34:51Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: VinayakMane47/finetuned-en-to-mar
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# VinayakMane47/finetuned-en-to-mar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mr](https://huggingface.co/Helsinki-NLP/opus-mt-en-mr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5415
- Validation Loss: 1.2289
- Epoch: 2
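A minimal translation sketch (added for illustration; this repository ships TensorFlow weights, hence the TF classes):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("VinayakMane47/finetuned-en-to-mar")
model = TFAutoModelForSeq2SeqLM.from_pretrained("VinayakMane47/finetuned-en-to-mar")
inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```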
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4401, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2978 | 1.5919 | 0 |
| 1.7627 | 1.3188 | 1 |
| 1.5415 | 1.2289 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pradeep4321/valves1 | pradeep4321 | 2023-05-04T10:56:42Z | 194 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-04T09:50:25Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: valves1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.3333333432674408
---
# valves1
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
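A minimal classification sketch (added for illustration; the image path is hypothetical):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="pradeep4321/valves1")
print(classifier("path/to/valve_photo.jpg"))  # hypothetical local image file
```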
## Example Images
#### animals

#### birds
 |
Apurv/my_awesome_qa_model | Apurv | 2023-05-04T10:26:03Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-04T10:09:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3312 |
| 2.7012 | 2.0 | 500 | 1.7361 |
| 2.7012 | 3.0 | 750 | 1.6470 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cainiao-AI/GreedRL | Cainiao-AI | 2023-05-04T10:17:08Z | 0 | 24 | null | [
"Deep Reinforcement Learning",
"Combinatorial Optimization",
"Vehicle Routing Problem",
"reinforcement-learning",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | 2023-04-28T02:23:51Z | ---
license: apache-2.0
pipeline_tag: reinforcement-learning
tags:
- Deep Reinforcement Learning
- Combinatorial Optimization
- Vehicle Routing Problem
---

# 🤠GreedRL
## Overview
- 🤠GreedRL is a Deep Reinforcement Learning (DRL) based solver that can solve various types of problems, such as the TSP, VRPs (CVRP, VRPTW, VRPPD, etc.), the Order Batching Problem, and the Knapsack Problem.
- 🤠GreedRL achieves very high performance by running on GPU while generating high-quality solutions. It is
**1200 times faster** than [Google OR-Tools](https://developers.google.com/optimization) for large-scale (>=1000 nodes) CVRP, and the solution quality is improved by **about 3%**.
## 🏆Award
- Finalist of the [INFORMS 2021 Franz Edelman Award](https://www.informs.org/Resource-Center/Video-Library/Edelman-Competition-Videos/2021-Edelman-Competition-Videos/2021-Edelman-Finalist-Alibaba)
- Received [The Second Class Prize of Scientific and Technological Progress Award](https://www.ccf.org.cn/Awards/Awards/2022-11-08/776110.shtml).
## Editions
We have delivered the following two editions of 🤠GreedRL for users.
- **The Community Edition** is open source and available to [download](https://huggingface.co/Cainiao-AI/GreedRL).
- **The Enterprise Edition** has a higher-performance implementation than **The Community Edition** (about 50 times faster), especially when solving large-scale problems. For more information, please contact <a href="mailto:[email protected]">us</a>.
## Architecture

## COPs Modeling examples
### Standard problems
#### Capacitated Vehicle Routing Problem (CVRP)
<details>
<summary>CVRP</summary>
```python
from greedrl.feature import *
from greedrl.variable import *
from greedrl.function import *
from greedrl import Problem, Solution, Solver
from greedrl import runner
features = [continuous_feature('task_demand'),
continuous_feature('worker_weight_limit'),
continuous_feature('distance_matrix'),
variable_feature('distance_this_to_task'),
variable_feature('distance_task_to_end')]
variables = [task_demand_now('task_demand_now', feature='task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
feature_variable('task_weight'),
worker_variable('worker_weight_limit'),
worker_used_resource('worker_used_weight', task_require='task_weight'),
edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True),
edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True),
edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True)]
class Constraint:
def do_task(self):
return self.task_demand_this
def mask_task(self):
        # tasks that are already completed
mask = self.task_demand_now <= 0
        # vehicle capacity constraint
worker_weight_limit = self.worker_weight_limit - self.worker_used_weight
mask |= self.task_demand_now * self.task_weight > worker_weight_limit[:, None]
return mask
def finished(self):
return torch.all(self.task_demand_now <= 0, 1)
class Objective:
def step_worker_end(self):
return self.distance_last_to_this
def step_task(self):
return self.distance_last_to_this
```
</details>
#### Pickup and Delivery Problem with Time Windows (PDPTW)
<details>
<summary>PDPTW</summary>
```python
from greedrl.model import runner
from greedrl.feature import *
from greedrl.variable import *
from greedrl.function import *
from greedrl import Problem, Solution, Solver
features = [local_category('task_group'),
global_category('task_priority', 2),
variable_feature('distance_this_to_task'),
variable_feature('distance_task_to_end')]
variables = [task_demand_now('task_demand_now', feature='task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
feature_variable('task_weight'),
feature_variable('task_group'),
feature_variable('task_priority'),
feature_variable('task_due_time2', feature='task_due_time'),
task_variable('task_due_time'),
task_variable('task_service_time'),
task_variable('task_due_time_penalty'),
worker_variable('worker_basic_cost'),
worker_variable('worker_distance_cost'),
worker_variable('worker_due_time'),
worker_variable('worker_weight_limit'),
worker_used_resource('worker_used_weight', task_require='task_weight'),
worker_used_resource('worker_used_time', 'distance_matrix', 'task_service_time', 'task_ready_time',
'worker_ready_time'),
edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True),
edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True),
edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True)]
class Constraint:
def do_task(self):
return self.task_demand_this
def mask_worker_end(self):
return task_group_split(self.task_group, self.task_demand_now <= 0)
def mask_task(self):
mask = self.task_demand_now <= 0
mask |= task_group_priority(self.task_group, self.task_priority, mask)
worker_used_time = self.worker_used_time[:, None] + self.distance_this_to_task
mask |= (worker_used_time > self.task_due_time2) & (self.task_priority == 0)
        # capacity constraint
worker_weight_limit = self.worker_weight_limit - self.worker_used_weight
mask |= self.task_demand_now * self.task_weight > worker_weight_limit[:, None]
return mask
def finished(self):
return torch.all(self.task_demand_now <= 0, 1)
class Objective:
def step_worker_start(self):
return self.worker_basic_cost
def step_worker_end(self):
feasible = self.worker_used_time <= self.worker_due_time
return self.distance_last_to_this * self.worker_distance_cost, feasible
def step_task(self):
worker_used_time = self.worker_used_time - self.task_service_time
feasible = worker_used_time <= self.task_due_time
feasible &= worker_used_time <= self.worker_due_time
cost = self.distance_last_to_this * self.worker_distance_cost
return torch.where(feasible, cost, cost + self.task_due_time_penalty), feasible
```
</details>
#### VRP with Time Windows (VRPTW)
<details>
<summary>VRPTW</summary>
```python
from greedrl import Problem, Solution, Solver
from greedrl.feature import *
from greedrl.variable import *
from greedrl.function import *
from greedrl.model import runner
from greedrl.myenv import VrptwEnv
features = [continuous_feature('worker_weight_limit'),
continuous_feature('worker_ready_time'),
continuous_feature('worker_due_time'),
continuous_feature('worker_basic_cost'),
continuous_feature('worker_distance_cost'),
continuous_feature('task_demand'),
continuous_feature('task_weight'),
continuous_feature('task_ready_time'),
continuous_feature('task_due_time'),
continuous_feature('task_service_time'),
continuous_feature('distance_matrix')]
variables = [task_demand_now('task_demand_now', feature='task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
feature_variable('task_weight'),
feature_variable('task_due_time'),
feature_variable('task_ready_time'),
feature_variable('task_service_time'),
worker_variable('worker_weight_limit'),
worker_variable('worker_due_time'),
worker_variable('worker_basic_cost'),
worker_variable('worker_distance_cost'),
worker_used_resource('worker_used_weight', task_require='task_weight'),
worker_used_resource('worker_used_time', 'distance_matrix', 'task_service_time', 'task_ready_time',
'worker_ready_time'),
edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True),
edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True),
edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True)]
class Constraint:
def do_task(self):
return self.task_demand_this
def mask_task(self):
        # tasks that are already completed
mask = self.task_demand_now <= 0
        # vehicle capacity constraint
worker_weight_limit = self.worker_weight_limit - self.worker_used_weight
mask |= self.task_demand_now * self.task_weight > worker_weight_limit[:, None]
worker_used_time = self.worker_used_time[:, None] + self.distance_this_to_task
mask |= worker_used_time > self.task_due_time
worker_used_time = torch.max(worker_used_time, self.task_ready_time)
worker_used_time += self.task_service_time
worker_used_time += self.distance_task_to_end
mask |= worker_used_time > self.worker_due_time[:, None]
return mask
def finished(self):
return torch.all(self.task_demand_now <= 0, 1)
class Objective:
def step_worker_start(self):
return self.worker_basic_cost
def step_worker_end(self):
return self.distance_last_to_this * self.worker_distance_cost
def step_task(self):
return self.distance_last_to_this * self.worker_distance_cost
```
</details>
#### Travelling Salesman Problem (TSP)
<details>
<summary>TSP</summary>
```python
from greedrl.feature import *
from greedrl.variable import *
from greedrl import Problem
from greedrl import runner
features = [continuous_feature('task_location'),
variable_feature('distance_this_to_task'),
variable_feature('distance_task_to_end')]
variables = [task_demand_now('task_demand_now', feature='task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True),
edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True),
edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True),
edge_variable('distance_last_to_loop', feature='distance_matrix', last_to_loop=True)]
class Constraint:
def do_task(self):
return self.task_demand_this
def mask_task(self):
mask = self.task_demand_now <= 0
return mask
def mask_worker_end(self):
return torch.any(self.task_demand_now > 0, 1)
def finished(self):
return torch.all(self.task_demand_now <= 0, 1)
class Objective:
def step_worker_end(self):
return self.distance_last_to_loop
def step_task(self):
return self.distance_last_to_this
```
</details>
#### Split Delivery Vehicle Routing Problem (SDVRP)
<details>
<summary>SDVRP</summary>
```python
from greedrl.feature import *
from greedrl.variable import *
from greedrl import Problem
from greedrl import runner
features = [continuous_feature('task_demand'),
continuous_feature('worker_weight_limit'),
continuous_feature('distance_matrix'),
variable_feature('distance_this_to_task'),
variable_feature('distance_task_to_end')]
variables = [task_demand_now('task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
feature_variable('task_weight'),
task_variable('task_weight_this', feature='task_weight'),
worker_variable('worker_weight_limit'),
worker_used_resource('worker_used_weight', task_require='task_weight'),
edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True)]
class Constraint:
def do_task(self):
worker_weight_limit = self.worker_weight_limit - self.worker_used_weight
return torch.min(self.task_demand_this, worker_weight_limit // self.task_weight_this)
def mask_task(self):
mask = self.task_demand <= 0
worker_weight_limit = self.worker_weight_limit - self.worker_used_weight
mask |= self.task_weight > worker_weight_limit[:, None]
return mask
def finished(self):
return torch.all(self.task_demand <= 0, 1)
class Objective:
def step_worker_end(self):
return self.distance_last_to_this
def step_task(self):
return self.distance_last_to_this
```
</details>
### Real-world scenario problems
In addition to being able to solve standard problems, 🤠GreedRL can also model and solve real-world scenario problems, like *Instant Delivery Service* and *Order Batching Problem*.
#### Instant Delivery Service
> Instant delivery services are widespread in the order-dispatching systems of courier delivery platforms ([Ele.me](https://www.ele.me/), [Meituan](https://waimai.meituan.com/), [UUPaotui](https://www.uupt.com/index.htm), etc.).
> Orders are generated in real-time. A number of vehicles are scheduled to serve orders from pickup locations to delivery locations while respecting vehicle capacity. The objective is to minimize both the total delivery time and the overtime penalty.
<details>
<summary>Instant Delivery Service</summary>
```python
from greedrl.feature import *
from greedrl.variable import *
from greedrl.function import *
from greedrl import Problem
from greedrl import runner
features = [local_category('task_order'),
global_category('task_type', 2),
global_category('task_new_order', 2),
variable_feature('time_this_to_task'),
continuous_feature('x_time_matrix'),
continuous_feature('task_due_time_x'),
continuous_feature('worker_task_mask')]
variables = [task_demand_now('task_demand_now', feature='task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
task_variable('task_pickup_this', feature='task_pickup'),
task_variable('task_due_time_this', feature='task_due_time'),
feature_variable('task_order', feature='task_order'),
feature_variable('task_type', feature='task_type'),
feature_variable('task_new_pickup', feature='task_new_pickup'),
feature_variable('worker_task_mask', feature='worker_task_mask'),
worker_count_now('worker_count_now', feature='worker_count'),
worker_variable('worker_min_old_task_this', feature='worker_min_old_task'),
worker_variable('worker_max_new_order_this', feature='worker_max_new_order'),
worker_variable('worker_task_mask_this', feature='worker_task_mask'),
worker_used_resource('worker_used_old_task', task_require='task_old'),
worker_used_resource('worker_used_new_order', task_require='task_new_pickup'),
worker_used_resource('worker_used_time', edge_require='time_matrix'),
edge_variable('time_this_to_task', feature='x_time_matrix', this_to_task=True)]
class Constraint:
def do_task(self):
return self.task_demand_this
def mask_worker_start(self):
mask = self.worker_count_now <= 0
finished = self.task_demand_now <= 0
worker_task_mask = self.worker_task_mask | finished[:, None, :]
mask |= torch.all(worker_task_mask, 2)
return mask
def mask_worker_end(self):
mask = self.worker_used_old_task < self.worker_min_old_task_this
mask |= task_group_split(self.task_order, self.task_demand_now <= 0)
return mask
def mask_task(self):
mask = self.task_demand_now <= 0
mask |= task_group_priority(self.task_order, self.task_type, mask)
worker_max_new_order = self.worker_max_new_order_this - self.worker_used_new_order
mask |= self.task_new_pickup > worker_max_new_order[:, None]
mask |= self.worker_task_mask_this
return mask
def finished(self):
worker_mask = self.worker_count_now <= 0
task_mask = self.task_demand_now <= 0
worker_task_mask = worker_mask[:, :, None] | task_mask[:, None, :]
worker_task_mask |= self.worker_task_mask
batch_size = worker_task_mask.size(0)
worker_task_mask = worker_task_mask.view(batch_size, -1)
return worker_task_mask.all(1)
class Objective:
def step_task(self):
over_time = (self.worker_used_time - self.task_due_time_this).clamp(min=0)
pickup_time = self.worker_used_time * self.task_pickup_this
return self.worker_used_time + over_time + pickup_time
def step_finish(self):
return self.task_demand_now.sum(1) * 1000
```
</details>
#### Order Batching Problem
> The Order Batching Problem is an optimization problem, arising in warehouses, that consists of designing a set of picking batches such that each customer order (composed of a list of items) is assigned to exactly one batch,
> and each batch is collected by a single picker. The objective is to minimize both the total batching cost (a weighted sum of the numbers of areas, roadways and items used) and the penalty for exceeding the pickers' loading limits.
<details>
<summary>Order Batching Problem</summary>
```python
from greedrl import Problem, Solver
from greedrl.feature import *
from greedrl.variable import *
from greedrl import runner
features = [local_feature('task_area'),
local_feature('task_roadway'),
local_feature('task_area_group'),
sparse_local_feature('task_item_id', 'task_item_num'),
sparse_local_feature('task_item_owner_id', 'task_item_num'),
variable_feature('worker_task_item'),
variable_feature('worker_used_roadway'),
variable_feature('worker_used_area')]
variables = [task_demand_now('task_demand_now', feature='task_demand'),
task_demand_now('task_demand_this', feature='task_demand', only_this=True),
feature_variable('task_item_id'),
feature_variable('task_item_num'),
feature_variable('task_item_owner_id'),
feature_variable('task_area'),
feature_variable('task_area_group'),
feature_variable('task_load'),
feature_variable('task_group'),
worker_variable('worker_load_limit'),
worker_variable('worker_area_limit'),
worker_variable('worker_area_group_limit'),
worker_task_item('worker_task_item', item_id='task_item_id', item_num='task_item_num'),
worker_task_item('worker_task_item_owner', item_id='task_item_owner_id', item_num='task_item_num'),
worker_used_resource('worker_used_load', task_require='task_load'),
worker_used_resource('worker_used_area', task_require='task_area'),
worker_used_resource('worker_used_roadway', task_require='task_roadway'),
worker_used_resource('worker_used_area_group', task_require='task_area_group')]
class Constraint:
def do_task(self):
return self.task_demand_this
def mask_worker_end(self):
return self.worker_used_load < self.worker_load_limit
def mask_task(self):
# completed tasks
mask = self.task_demand_now <= 0
# mask |= task_group_priority(self.task_group, self.task_out_stock_time, mask)
NT = self.task_item_id.size(1)
worker_task_item = self.worker_task_item[:, None, :]
worker_task_item = worker_task_item.expand(-1, NT, -1)
task_item_in_worker = worker_task_item.gather(2, self.task_item_id.long())
task_item_in_worker = (task_item_in_worker > 0) & (self.task_item_num > 0)
worker_task_item_owner = self.worker_task_item_owner[:, None, :]
worker_task_item_owner = worker_task_item_owner.expand(-1, NT, -1)
task_item_owner_in_worker = worker_task_item_owner.gather(2, self.task_item_owner_id.long())
task_item_owner_in_worker = (task_item_owner_in_worker > 0) & (self.task_item_num > 0)
#
mask |= torch.any(task_item_in_worker & ~task_item_owner_in_worker, 2)
worker_load_limit = self.worker_load_limit - self.worker_used_load
mask |= (self.task_load > worker_load_limit[:, None])
task_area = self.task_area + self.worker_used_area[:, None, :]
task_area_num = task_area.clamp(0, 1).sum(2, dtype=torch.int32)
mask |= (task_area_num > self.worker_area_limit[:, None])
tak_area_group = self.task_area_group + self.worker_used_area_group[:, None, :]
tak_area_group_num = tak_area_group.clamp(0, 1).sum(2, dtype=torch.int32)
mask |= (tak_area_group_num > self.worker_area_group_limit[:, None])
return mask
def finished(self):
return torch.all(self.task_demand_now <= 0, 1)
class Objective:
def step_worker_end(self):
area_num = self.worker_used_area.clamp(0, 1).sum(1)
roadway_num = self.worker_used_roadway.clamp(0, 1).sum(1)
item_num = self.worker_task_item.clamp(0, 1).sum(1)
penalty = (self.worker_load_limit - self.worker_used_load) * 10
return area_num * 100 + roadway_num * 10 + item_num + penalty
```
</details>
# Getting started
## Description
We are delighted to release the 🤠GreedRL Community Edition, together with example training and testing scripts for the standard Capacitated VRP (CVRP). You can download it and get started.
## Test environment
🤠GreedRL Community Edition has been tested on Ubuntu 18.04 with GCC compiler v7.5.0 and CUDA version 11.4, and a [Miniconda](https://docs.conda.io/en/latest/miniconda.html#system-requirements) distribution with Python 3.8. We recommend using a similar configuration to avoid any possible compilation issues.
## Installation
First, clone the repository.
```bash
$ git clone https://huggingface.co/Cainiao-AI/GreedRL
```
Then, create and activate a python environment using conda, and install required packages.
```bash
$ conda create -n python38 python==3.8
$ source activate python38
$ pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu113
```
Finally, compile and add the resulting library `greedrl` to the `PYTHONPATH`
```bash
$ python setup.py build
$ export PYTHONPATH={your_current_path}/build/lib.linux-x86_64-cpython-38/:$PYTHONPATH
```
## CVRP Training
1. Training data
We use generated data for the training phase: the customer and depot locations are randomly generated in the unit square [0,1] x [0,1]. For CVRP, we assume that the demand of each node is a discrete number in {1,...,9}, chosen uniformly at random, and each vehicle has a default capacity of 50.
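As an illustration only (these helper names are not part of the GreedRL API), such instances could be sampled like this:
```python
import numpy as np
def sample_cvrp_instance(problem_size=100, capacity=50, seed=None):
    """Sample one CVRP instance as described above (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    depot = rng.uniform(0.0, 1.0, size=2)                       # depot location in [0,1] x [0,1]
    customers = rng.uniform(0.0, 1.0, size=(problem_size, 2))   # customer locations
    demands = rng.integers(1, 10, size=problem_size)            # demands in {1, ..., 9}
    return {"depot": depot, "customers": customers, "demands": demands, "capacity": capacity}
instance = sample_cvrp_instance(problem_size=100, seed=0)
```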
2. Start training
```python
$ cd examples/cvrp
$ python train.py --model_filename cvrp_100.pt --problem_size 100
```
## CVRP Testing
After the training process, you'll get a trained model, like `cvrp_100.pt`, that you can use for testing.
```python
$ cd examples/cvrp
$ python solve.py --device cpu --model_name cvrp_100.pt --problem_size 100
```
# Support
We look forward to you downloading and using it, and to you opening a discussion if you encounter any problems or have ideas for building an even better experience.
For commercial enquiries, please contact <a href="mailto:[email protected]">us</a>.
# Citation
```
@article{hu2022alibaba,
title={Alibaba vehicle routing algorithms enable rapid pick and delivery},
author={Hu, Haoyuan and Zhang, Ying and Wei, Jiangwen and Zhan, Yang and Zhang, Xinhui and Huang, Shaojian and Ma, Guangrui and Deng, Yuming and Jiang, Siwei},
journal={INFORMS Journal on Applied Analytics},
volume={52},
number={1},
pages={27--41},
year={2022},
publisher={INFORMS}
}
```
|
quantumaikr/KoreanLM | quantumaikr | 2023-05-04T10:16:45Z | 184 | 26 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"vicuna",
"ko",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-03T07:37:23Z | ---
language:
- ko
- en
pipeline_tag: text-generation
tags:
- vicuna
- llama
---
<p align="center" width="100%">
<img src="https://i.imgur.com/snFDU0P.png" alt="KoreanLM icon" style="width: 500px; display: block; margin: auto; border-radius: 10%;">
</p>
# KoreanLM: A Korean Language Model Project
KoreanLM is an open-source project for developing Korean language models. Most current language models focus on English, so their training on Korean is relatively limited and their tokenization of Korean text can be inefficient. We started the KoreanLM project to solve these problems and to provide a language model optimized for Korean.
## Project Goals
1. Develop a language model specialized for Korean: build a model that reflects the grammar, vocabulary, and cultural characteristics of Korean so that it can understand and generate Korean more accurately.
2. Introduce an efficient tokenization scheme: adopt a new tokenization method that enables efficient and accurate analysis of Korean text, improving the model's performance.
3. Improve the usability of large language models: today's very large language models are difficult for companies to fine-tune on their own data. We address this by adjusting the size of the Korean language model, improving usability and making it easier to apply to natural language processing tasks.
## Usage
KoreanLM is distributed through a GitHub repository. To use the project, you can install it as follows.
```bash
git clone https://github.com/quantumaikr/KoreanLM.git
cd KoreanLM
pip install -r requirements.txt
```
## Example
The following example loads the model and tokenizer with the transformers library.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained("quantumaikr/KoreanLM")
tokenizer = transformers.AutoTokenizer.from_pretrained("quantumaikr/KoreanLM")
```
## Training (Fine-tuning)
```bash
torchrun --nproc_per_node=4 --master_port=1004 train.py \
--model_name_or_path quantumaikr/KoreanLM \
--data_path korean_data.json \
--num_train_epochs 3 \
--cache_dir './data' \
--bf16 True \
--tf32 True \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 500 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'OPTDecoderLayer' \
```
```bash
pip install deepspeed
torchrun --nproc_per_node=4 --master_port=1004 train.py \
--deepspeed "./deepspeed.json" \
--model_name_or_path quantumaikr/KoreanLM \
--data_path korean_data.json \
--num_train_epochs 3 \
--cache_dir './data' \
--bf16 True \
--tf32 True \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
```
## Training (LoRA)
```bash
python finetune-lora.py \
--base_model 'quantumaikr/KoreanLM' \
--data_path './korean_data.json' \
--output_dir './KoreanLM-LoRA' \
--cache_dir './data'
```
## Inference
```bash
python generate.py \
--load_8bit \
--share_gradio \
--base_model 'quantumaikr/KoreanLM' \
--lora_weights 'quantumaikr/KoreanLM-LoRA' \
--cache_dir './data'
```
## Pretrained Model Release and Web Demo
[Trained model](https://huggingface.co/quantumaikr/KoreanLM/tree/main)
<i>* A demo link will be published later.</i>
## How to Contribute
1. File issues: report problems or suggest improvements related to the KoreanLM project as issues.
2. Write code: you can contribute code for improvements or new features; please submit it via a Pull Request.
3. Documentation and translation: help raise the quality of the project by writing or translating its documentation.
4. Testing and feedback: feedback on bugs or improvements you find while using the project is a great help.
## License
The KoreanLM project is released under the Apache 2.0 License. Please observe the terms of the license when using the project.
## Technical Inquiries
If you have any questions about the KoreanLM project, please contact us by email or through a GitHub issue. We hope this project helps research and development on Korean language models, and we look forward to your interest and participation.
Email: [email protected]
---
This repository has implementations inspired by [open_llama](https://github.com/openlm-research/open_llama), [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [alpaca-lora](https://github.com/tloen/alpaca-lora) projects. |
Thibone14/mt5-small-finetuned-amazon-en-es | Thibone14 | 2023-05-04T10:14:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-04T02:39:42Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Thibone14/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Thibone14/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0920
- Validation Loss: 3.3654
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.5484 | 4.1929 | 0 |
| 5.8530 | 3.7515 | 1 |
| 5.1666 | 3.6129 | 2 |
| 4.7436 | 3.5260 | 3 |
| 4.4768 | 3.4513 | 4 |
| 4.2940 | 3.4087 | 5 |
| 4.1478 | 3.3781 | 6 |
| 4.0920 | 3.3654 | 7 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
salwakr1/Is_there_relation | salwakr1 | 2023-05-04T10:09:10Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-01T07:17:59Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: Is_there_relation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Is_there_relation
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1011
- Macro F1: 0.9873
- Precision: 0.9875
- Recall: 0.9873
- Kappa: 0.9708
- Accuracy: 0.9873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 280 | 0.0828 | 0.9746 | 0.9747 | 0.9745 | 0.9413 | 0.9745 |
| 0.1162 | 2.0 | 560 | 0.1149 | 0.9684 | 0.9699 | 0.9682 | 0.9278 | 0.9682 |
| 0.1162 | 3.0 | 840 | 0.0942 | 0.9852 | 0.9855 | 0.9851 | 0.9659 | 0.9851 |
| 0.0231 | 4.0 | 1120 | 0.0749 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0231 | 5.0 | 1400 | 0.1058 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0084 | 6.0 | 1680 | 0.1145 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0084 | 7.0 | 1960 | 0.0813 | 0.9852 | 0.9853 | 0.9851 | 0.9658 | 0.9851 |
| 0.0056 | 8.0 | 2240 | 0.1235 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0022 | 9.0 | 2520 | 0.0928 | 0.9894 | 0.9895 | 0.9894 | 0.9756 | 0.9894 |
| 0.0022 | 10.0 | 2800 | 0.1079 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0019 | 11.0 | 3080 | 0.0796 | 0.9894 | 0.9895 | 0.9894 | 0.9756 | 0.9894 |
| 0.0019 | 12.0 | 3360 | 0.1084 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0004 | 13.0 | 3640 | 0.1099 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0004 | 14.0 | 3920 | 0.1233 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
| 0.0005 | 15.0 | 4200 | 0.1011 | 0.9873 | 0.9875 | 0.9873 | 0.9708 | 0.9873 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
blackeys/ppo-LunarLanderV2 | blackeys | 2023-05-04T09:59:58Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T09:00:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.64 +/- 22.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed by this card.
checkpoint = load_from_hub("blackeys/ppo-LunarLanderV2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
StevenLimcorn/unsup-simcse-roberta-large-semeval2016-restaurants | StevenLimcorn | 2023-05-04T09:55:17Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-05-03T08:33:41Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: unsup-simcse-roberta-large-semeval2016-restaurants
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# unsup-simcse-roberta-large-semeval2016-restaurants
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
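Usage is not documented; below is a minimal sentence-embedding sketch, assuming the usual SimCSE convention of taking the [CLS] representation:
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "StevenLimcorn/unsup-simcse-roberta-large-semeval2016-restaurants"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

texts = ["The food was great.", "The service was slow."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    # SimCSE conventionally uses the [CLS] token as the sentence embedding.
    embeddings = model(**batch).last_hidden_state[:, 0]
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```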
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
leonardosaveri/DSChallenge_Roberta_Base_10Epochs | leonardosaveri | 2023-05-04T09:53:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-04T09:03:33Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DSChallenge_Roberta_Base_10Epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DSChallenge_Roberta_Base_10Epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3678
- Accuracy: 0.9428
## Model description
More information needed
## Intended uses & limitations
More information needed
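A minimal inference sketch (assumed; the label meanings are not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "leonardosaveri/DSChallenge_Roberta_Base_10Epochs"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example text to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label meanings are not documented in this card
```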
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0769 | 1.0 | 793 | 0.2022 | 0.9410 |
| 0.1081 | 2.0 | 1586 | 0.2630 | 0.9423 |
| 0.0563 | 3.0 | 2379 | 0.2948 | 0.9477 |
| 0.0296 | 4.0 | 3172 | 0.3678 | 0.9428 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
botp/GhostMix_V1.1 | botp | 2023-05-04T09:32:58Z | 4 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-04T09:32:57Z | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: sakistriker/GhostMix_V1.1
---
This is a safetensors Diffusers conversion of the model: https://civitai.com/models/36520/ghostmix
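A minimal loading sketch with 🧨 Diffusers (assumed; the prompt and settings are illustrative, not the original author's recommendations):
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: prompt and dtype/device choices are illustrative.
pipe = StableDiffusionPipeline.from_pretrained("botp/GhostMix_V1.1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("portrait of a girl, intricate details, soft lighting").images[0]
image.save("ghostmix.png")
```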
All credit goes to the original model's author. |
s3nh/Cerebras-GPT-590M-3000steps-polish | s3nh | 2023-05-04T09:22:42Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"pl",
"dataset:databricks/databricks-dolly-15k",
"dataset:s3nh/alpaca-dolly-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-02T14:42:51Z | ---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository contains cerebras/Cerebras-GPT-590M fine-tuned on a Polish translation of the alpaca-dolly dataset.
The main task is to produce accurate answers to the instructions it is given.
Below you can find instructions on how to run inference with this model.
This repository does not contain a tokenizer object at the moment (#TODO).
### Evaluation part
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME: str = 's3nh/Cerebras-GPT-590M-3000steps-polish'

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).cuda()

# Resize the embedding matrix to match the tokenizer size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)
def _generate_prompt(instruction, input=None):
if input:
return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy konktekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""
    # The original snippet only covered the with-input case; this no-input
    # fallback is an assumption.
    return f"""Poniżej znajduje się instrukcja opisująca zadanie. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Response:"""

# `evaluate` is called below but was never defined in the original card;
# this is a minimal sketch of what it presumably does.
def evaluate(instruction, input=None):
    prompt = _generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)
manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."
print(f"Evaluation for {manual_instruction}\n\n\n{manual_input}\n\n")
print(evaluate(instruction=manual_instruction, input=manual_input))
``` |
Pietro97/ppo-Huggy | Pietro97 | 2023-05-04T09:15:03Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-04T09:14:55Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: Pietro97/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
botp/Realistic_Vision_V2.0 | botp | 2023-05-04T09:14:37Z | 4 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-04T09:14:36Z | ---
license: creativeml-openrail-m
duplicated_from: SG161222/Realistic_Vision_V2.0
---
<b>Please read this!</b><br>
For version 2.0 it is recommended to use it with this VAE (to improve generation quality and get rid of blue artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br>
This model is available on <a href="https://www.mage.space/">Mage.Space</a>, <a href="https://sinkin.ai/">Sinkin.ai</a>, <a href="https://getimg.ai/">GetImg.ai</a> and (<a href="https://randomseed.co/">RandomSeed.co</a> - NSFW content)
<hr/>
<b>I use this template to get good generation results:
Prompt:</b>
RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
<b>Example:</b> RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
<b>Negative Prompt:</b>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>OR</b><br>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
<b>Euler A or DPM++ 2M Karras with 25 steps<br>
CFG Scale 3,5 - 7<br>
Hires. fix with Latent upscaler<br>
0 Hires steps and Denoising strength 0.25-0.45<br>
Upscale by 1.1-2.0</b> |
truegpt/truegpt_small | truegpt | 2023-05-04T09:13:36Z | 3 | 0 | transformers | [
"transformers",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-01T14:30:48Z | # TrueGPT Small: AI Model for Action and Empowerment
TrueGPT Small is a lightweight version of the TrueGPT artificial intelligence model, designed for users who need the empowering and actionable features of TrueGPT with reduced computational requirements. By providing actionable solutions and eliminating uncertainty, TrueGPT Small retains the core features of the original TrueGPT while making them accessible to a wider range of devices and systems. With seamless integration into the Hugging Face ecosystem, users can easily utilize TrueGPT Small for various AI applications.
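Since the repo exposes a standard `gpt2` text-generation head, here is a minimal generation sketch (assumed; settings are illustrative, not the author's):
```python
from transformers import pipeline

# Sketch only: prompt and generation settings are illustrative.
generator = pipeline("text-generation", model="truegpt/truegpt_small")
print(generator("The future of AI is", max_new_tokens=40)[0]["generated_text"])
```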
|
coreml-community/coreml-Counterfeit | coreml-community | 2023-05-04T09:11:45Z | 0 | 23 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-02-06T22:33:32Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# Note #2: "V2.5-VAE" versions have the Counterfeit v2.5 VAE embedded.
# Counterfeit:
Source(s): [Hugging Face](https://huggingface.co/gsdf) - [CivitAI](https://civitai.com/models/4468/counterfeit-v25)
Counterfeit is an anime-style Stable Diffusion model.
DreamBooth + Merge Block Weights + Merge LoRA
# Counterfeit-V2.5 e.g.

```
((masterpiece,best quality)),1girl, solo, animal ears, rabbit, barefoot, knees up, dress, sitting, rabbit ears, short sleeves, looking at viewer, grass, short hair, smile, white hair, puffy sleeves, outdoors, puffy short sleeves, bangs, on ground, full body, animal, white dress, sunlight, brown eyes, dappled sunlight, day, depth of field
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 448x768, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)),1girl, from below, solo, school uniform, serafuku, sky, cloud, black hair, skirt, sailor collar, looking at viewer, short hair, building, bangs, neckerchief, long sleeves, cloudy sky, power lines, shirt, cityscape, pleated skirt, scenery, blunt bangs, city, night, black sailor collar, closed mouth, black skirt, medium hair, school bag , holding bag
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 832x512, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)),2girls, black kimono, black legwear, black ribbon, black hair, cherry blossoms, day, flower, hair bun, hair ribbon, japanese clothes, kimono, long hair, looking at viewer, looking back, multiple girls, obi, outdoors, red eyes, red hair, ribbon, sandals, single hair bun, stairs, standing, statue, torii, tree, white kimono, yellow eyes
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 640x960, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)),1girl, bangs, blue eyes, blurry background, branch, brown hair, dappled sunlight, flower, from side, hair flower, hair ornament, japanese clothes, kimono, leaf, (maple leaf:1.9), obi, outdoors, sash, solo, sunlight, upper body
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality))1girl, solo, black skirt, blue eyes, electric guitar, guitar, headphones, holding, holding plectrum, instrument, long hair, , music, one side up, pink hair, playing guiter, pleated skirt, black shirt, indoors
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)), 1girl, food, fruit, solo, skirt, shop, indoors, jacket, shopping, basket, jewelry, shirt, shelf, short hair, black hair, plaid skirt, black jacket, dutch angle, yellow eyes, looking at viewer
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```
# Counterfeit-V3.0
・I have utilized BLIP-2 as a part of the training process. Natural language prompts might be more effective.
・I prioritize the freedom of composition, which may result in a higher possibility of anatomical errors.
・The expressiveness has been improved by merging with negative values, but the user experience may differ from previous checkpoints.
・I have uploaded a new Negative Embedding, trained with Counterfeit-V3.0.
There's likely no clear superiority or inferiority between this and the previous embedding, so feel free to choose according to your preference. Note that I'm not specifically recommending the use of this embedding.
# Sample image
prompt & Setting: https://civitai.com/models/4468/counterfeit-v30


|
botp/embeddings | botp | 2023-05-04T09:00:01Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-04T08:37:38Z | ---
license: creativeml-openrail-m
duplicated_from: nolanaatama/embeddings
---
DISCLAIMER! This Is A Preservation Repository!
Cloned from __nolanaatama/embeddings__ |
botp/zzeipher-fix6 | botp | 2023-05-04T08:59:19Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-27T09:46:43Z | ---
license: creativeml-openrail-m
duplicated_from: m4gnett/zeipher-f222
---
DISCLAIMER! This Is A Preservation Repository!
Cloned from __m4gnett/zeipher-f222__
This repository is for backing up Zeipher F222.
I downloaded the model last month via torrent.
|
botp/Anything-Preservation | botp | 2023-05-04T08:58:48Z | 149 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-25T02:36:38Z | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: AdamOswald1/Anything-Preservation
---
DISCLAIMER! This Is A Preservation Repository!
Cloned from __AdamOswald1/Anything-Preservation__
# Anything V3 - Better VAE
Welcome to Anything V3 - Better VAE. It currently has three model formats: diffusers, ckpt, and safetensors. You'll never see a grey image result again. This model is designed to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags for image generation.
e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Anything V3 with Better VAE:
[](https://huggingface.co/spaces/Linaqruf/Linaqruf-anything-v3-better-vae)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
You should install the dependencies below in order to run the pipeline:
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we swap it to DPMSolverMultistepScheduler):
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
model_id = "Linaqruf/anything-v3-0-better-vae"
# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "masterpiece, best quality, illustration, beautiful detailed, finely detailed, dramatic light, intricate details, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"
with autocast("cuda"):
image = pipe(prompt,
negative_prompt=negative_prompt,
width=512,
height=640,
guidance_scale=12,
num_inference_steps=50).images[0]
image.save("anime_girl.png")
```
## Examples
Below are some examples of images generated using this model:
**Anime Girl:**

**Anime Boy:**

**Scenery:**

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
# Announcement
For the (unofficial) continuation of this model, please visit [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). I am aware that the repo exists because I am literally the one who (accidentally) gave the idea of publishing his fine-tuned model ([andite/yohan-diffusion](https://huggingface.co/andite/yohan-diffusion)) as a base, merged with many mysterious models ("hey, let's call it 'Anything V4.0'"), because the quality is quite similar to Anything V3 but upgraded.
I also wanted to tell you something. I had planned to remove or make private one of the repos named "Anything V3":
- [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0/)
- [Linaqruf/anything-v3-better-vae](https://huggingface.co/Linaqruf/anything-v3-better-vae)
Because there are two versions now, and I was late to realize that this mysterious, nonsensical model had been polluting the Hugging Face Trending page for so long; now that the new repo is out, it is there too. I feel guilty every time this model is on the trending leaderboard.
I would prefer to delete or make this one private and let us slowly move to [Linaqruf/anything-v3-better-vae](https://huggingface.co/Linaqruf/anything-v3-better-vae), which has better repo management and a better VAE included in the model.
Please share your thoughts in this #133 discussion about whether I should delete this repo or another one, or maybe both of them.
Thanks,
Linaqruf.
---
# Anything V3
Welcome to Anything V3 - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images.
e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Anything-V3.0:
[Open in Spaces](https://huggingface.co/spaces/akhaliq/anything-v3.0)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Linaqruf/anything-v3.0"
branch_name= "diffusers"
pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "pikachu"
image = pipe(prompt).images[0]
image.save("./pikachu.png")
```
## Examples
Below are some examples of images generated using this model:
**Anime Girl:**

```
1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
Steps: 50, Sampler: DDIM, CFG scale: 12
```
**Anime Boy:**

```
1boy, medium hair, blonde hair, blue eyes, bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
Steps: 50, Sampler: DDIM, CFG scale: 12
```
**Scenery:**

```
scenery, shibuya tokyo, post-apocalypse, ruins, rust, sky, skyscraper, abandoned, blue sky, broken window, building, cloud, crane machine, outdoors, overgrown, pillar, sunset
Steps: 50, Sampler: DDIM, CFG scale: 12
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
pkufool/icefall_asr_aishell_pruned_transducer_stateless7_bbpe | pkufool | 2023-05-04T08:39:07Z | 0 | 0 | null | [
"tensorboard",
"license:apache-2.0",
"region:us"
] | null | 2023-05-04T07:25:32Z | ---
license: apache-2.0
---
The results:
|Vocab size | Greedy search(dev & test) | Modified beam search(dev & test) | Fast beam search (dev & test) | Fast beam search LG (dev & test) | comments|
|-- | -- | -- | -- | -- | --|
|500 | 4.31 & 4.59 | 4.25 & 4.54 | 4.27 & 4.55 | 4.07 & 4.38 | --epoch 48 --avg 29|
The training command:
```bash
export CUDA_VISIBLE_DEVICES="4,5,6,7"
./pruned_transducer_stateless7_bbpe/train.py \
--world-size 4 \
--num-epochs 50 \
--start-epoch 1 \
--use-fp16 1 \
--max-duration 800 \
--bpe-model data/lang_bbpe_500/bbpe.model \
--exp-dir pruned_transducer_stateless7_bbpe/exp \
--lr-epochs 6 \
--master-port 12535
```
The decoding command:
```bash
for m in greedy_search modified_beam_search fast_beam_search fast_beam_search_LG; do
./pruned_transducer_stateless7_bbpe/decode.py \
--epoch 48 \
--avg 29 \
--exp-dir ./pruned_transducer_stateless7_bbpe/exp \
--max-sym-per-frame 1 \
--ngram-lm-scale 0.25 \
--ilme-scale 0.2 \
--bpe-model data/lang_bbpe_500/bbpe.model \
--max-duration 2000 \
--decoding-method $m
done
```
|
GoldfieldGeek/ppo-LL2-bad | GoldfieldGeek | 2023-05-04T08:37:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T07:51:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.37 +/- 17.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed by this card.
checkpoint = load_from_hub("GoldfieldGeek/ppo-LL2-bad", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pkufool/icefall_asr_librispeech_conformer_ctc | pkufool | 2023-05-04T08:35:10Z | 0 | 4 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language:
- en
---
# Pre-trained Conformer-CTC models for the LibriSpeech dataset with icefall.
The model was trained on full [LibriSpeech](http://openslr.org/12/) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
See (https://github.com/k2-fsa/icefall/pull/13) for more details of this model.
## How to use
See (https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/conformer_ctc/README.md)
## Training procedure
The versions of the main repositories are listed below.
k2: https://github.com/k2-fsa/k2/commit/81cec9ec736d2c603ad75d933bb3e3a3706fb0dd
icefall: https://github.com/k2-fsa/icefall/commit/ef233486ae6d21bacb940de45efb35d0c334605c
lhotse: https://github.com/lhotse-speech/lhotse/commit/5dfe0f4c02b1334ebb7db6d67e1141fe406ca76b
* Install k2 and lhotse; the k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. It is better to use the versions given above, but the latest versions should also work. Also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
git checkout ef233486
```
* Preparing data.
```
cd egs/librispeech/ASR
bash ./prepare.sh
```
* Training
```bash
export CUDA_VISIBLE_DEVICES="0,1,2,3"
python conformer_ctc/train.py --bucketing-sampler True \
--concatenate-cuts False \
--max-duration 200 \
--full-libri True \
--world-size 4
```
## Evaluation results
The best decoding results (WERs) on LibriSpeech test-clean and test-other are listed below; we got these results by averaging models from epochs 15 to 34.
||test-clean|test-other|
|--|--|--|
|WER|2.57%|5.94%| |
yemiancheng/like-model | yemiancheng | 2023-05-04T08:27:52Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-04T05:15:02Z | # readme
Saving some models I like. I will collect them here so they are easy to use (download).
## why
- [x] Sometimes I want to use a model but forget where to download it.
## takedown statement
If there is any infringement, please notify me promptly and I will delete it.
my email: `[email protected]` or `[email protected]` |
Aleksandar/electra-srb-ner | Aleksandar | 2023-05-04T08:14:22Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: electra-srb-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: sr
metric:
name: Accuracy
type: accuracy
value: 0.9568394937134688
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3406
- Precision: 0.8934
- Recall: 0.9087
- F1: 0.9010
- Accuracy: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
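Usage is not documented; below is a minimal NER sketch, assuming the standard `token-classification` pipeline implied by the repo tags (the example sentence is illustrative):
```python
from transformers import pipeline

# Sketch only: aggregation_strategy="simple" groups word pieces into entities.
ner = pipeline("token-classification",
               model="Aleksandar/electra-srb-ner",
               aggregation_strategy="simple")
print(ner("Novak Đoković je rođen u Beogradu."))
```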
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 |
| 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 |
| 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 |
| 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 |
| 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 |
| 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 |
| 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 |
| 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 |
| 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 |
| 0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 |
| 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 |
| 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 |
| 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 |
| 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 |
| 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 |
| 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 |
| 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 |
| 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 |
| 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 |
| 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
SHENMU007/neunit_BASE_V4 | SHENMU007 | 2023-05-04T08:11:57Z | 80 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-05-04T06:13:58Z | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
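Usage is not documented; below is a minimal inference sketch, assumed to mirror the upstream microsoft/speecht5_tts example (the random speaker embedding is a placeholder, not a real x-vector):
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "SHENMU007/neunit_BASE_V4"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,世界。", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real 512-dim x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # 16 kHz waveform
```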
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
MartinMarenz/q-Taxiv3-01 | MartinMarenz | 2023-05-04T08:11:51Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-04T08:11:46Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxiv3-01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):  # helper from the Deep RL course (assumed)
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="MartinMarenz/q-Taxiv3-01", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
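Continuing the snippet above, a short greedy-rollout sketch (the `qtable` key and the classic gym API are assumptions based on the Deep RL course format):
```python
import numpy as np

# Sketch only: assumes model["qtable"] as saved by the Deep RL course notebook;
# newer gym/gymnasium returns (state, info) from reset and 5 values from step.
qtable = np.array(model["qtable"])
state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```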
|