modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-03 00:41:34) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (stringclasses, 466 values) | tags (sequence, length 1 to 4.05k) | pipeline_tag (stringclasses, 54 values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-03 00:34:44) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
asenella/JMVAE_beta_25_scale_False_seed_0 | asenella | 2023-07-26T14:40:06Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:13:40Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_25_scale_True_seed_0 | asenella | 2023-07-26T14:40:05Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:16:09Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_False_seed_2 | asenella | 2023-07-26T14:40:03Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T05:07:55Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_10_scale_False_seed_3 | asenella | 2023-07-26T14:39:56Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T06:25:13Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_5_scale_False_seed_0 | asenella | 2023-07-26T14:39:50Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:02:16Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_False_seed_1 | asenella | 2023-07-26T14:39:46Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:26:18Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_5_scale_False_seed_3 | asenella | 2023-07-26T14:39:44Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:32:58Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_True_seed_1 | asenella | 2023-07-26T14:39:34Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:25:58Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_True_seed_2 | asenella | 2023-07-26T14:39:25Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:37:25Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_5_scale_True_seed_1 | asenella | 2023-07-26T14:39:21Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:05:03Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_5_scale_True_seed_3 | asenella | 2023-07-26T14:39:12Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:10:51Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_5_scale_True_seed_2 | asenella | 2023-07-26T14:39:04Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:04:59Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_False_seed_2 | asenella | 2023-07-26T14:38:59Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T19:17:29Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_False_seed_2 | asenella | 2023-07-26T14:38:57Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:14:40Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_False_seed_3 | asenella | 2023-07-26T14:38:51Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T21:26:35Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_False_seed_3 | asenella | 2023-07-26T14:38:48Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T02:20:19Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_False_seed_1 | asenella | 2023-07-26T14:38:44Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T16:47:29Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_False_seed_0 | asenella | 2023-07-26T14:38:40Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:01:23Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_False_seed_0 | asenella | 2023-07-26T14:38:37Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:23:41Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_False_seed_1 | asenella | 2023-07-26T14:38:34Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:26:35Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_False_seed_3 | asenella | 2023-07-26T14:38:34Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:22:32Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_True_seed_2 | asenella | 2023-07-26T14:38:30Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T17:15:26Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_True_seed_1 | asenella | 2023-07-26T14:38:19Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:26:30Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_5_scale_True_seed_3 | asenella | 2023-07-26T14:38:16Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:22:54Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_10_scale_True_seed_2 | asenella | 2023-07-26T14:38:06Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:32:43Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_False_seed_1 | asenella | 2023-07-26T14:38:05Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T23:03:08Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_False_seed_2 | asenella | 2023-07-26T14:37:58Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:09:13Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_False_seed_1 | asenella | 2023-07-26T14:37:45Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:31:50Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MVTCAE_beta_10_scale_True_seed_0 | asenella | 2023-07-26T14:37:44Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T21:41:57Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_True_seed_2 | asenella | 2023-07-26T14:37:30Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:38:26Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_False_seed_2 | asenella | 2023-07-26T14:37:24Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-26T10:21:45Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_False_seed_1 | asenella | 2023-07-26T14:37:09Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T16:51:28Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_10_scale_True_seed_2 | asenella | 2023-07-26T14:37:00Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T18:28:03Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/MoPoE_beta_25_scale_True_seed_1 | asenella | 2023-07-26T14:36:55Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T01:25:53Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_25_scale_False_seed_1 | asenella | 2023-07-26T14:36:25Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T16:40:52Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JMVAE_beta_5_scale_False_seed_0 | asenella | 2023-07-26T14:36:24Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T04:12:17Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_25_scale_False_seed_3 | asenella | 2023-07-26T14:36:16Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T21:52:15Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/JNF_beta_25_scale_False_seed_0 | asenella | 2023-07-26T14:36:09Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-13T16:47:33Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
ankush-003/nosql-identifier-electra | ankush-003 | 2023-07-26T14:27:07Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:bhadresh-savani/electra-base-emotion",
"base_model:finetune:bhadresh-savani/electra-base-emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-26T14:17:15Z | ---
license: apache-2.0
base_model: bhadresh-savani/electra-base-emotion
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nosql-identifier-electra
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nosql-identifier-electra
This model is a fine-tuned version of [bhadresh-savani/electra-base-emotion](https://huggingface.co/bhadresh-savani/electra-base-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2416
- Accuracy: 0.925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.5918 | 0.725 |
| No log | 2.0 | 80 | 0.5009 | 0.85 |
| No log | 3.0 | 120 | 0.3760 | 0.9 |
| No log | 4.0 | 160 | 0.4031 | 0.8 |
| No log | 5.0 | 200 | 0.3600 | 0.875 |
| No log | 6.0 | 240 | 0.2552 | 0.925 |
| No log | 7.0 | 280 | 0.2876 | 0.9 |
| No log | 8.0 | 320 | 0.2684 | 0.9 |
| No log | 9.0 | 360 | 0.2454 | 0.95 |
| No log | 10.0 | 400 | 0.2540 | 0.925 |
| No log | 11.0 | 440 | 0.2622 | 0.9 |
| No log | 12.0 | 480 | 0.2397 | 0.95 |
| 0.3893 | 13.0 | 520 | 0.2355 | 0.925 |
| 0.3893 | 14.0 | 560 | 0.2432 | 0.925 |
| 0.3893 | 15.0 | 600 | 0.2416 | 0.925 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.11.0
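A minimal inference sketch, assuming the standard `transformers` pipeline API (the example input is illustrative, and the label names are whatever the classifier was trained with):
```python
from transformers import pipeline

# Text classifier fine-tuned from bhadresh-savani/electra-base-emotion
clf = pipeline("text-classification", model="ankush-003/nosql-identifier-electra")

# Illustrative input; replace with the text you want to classify
print(clf("db.users.find({'username': {'$ne': null}})"))
```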
|
petergriger/ppo_lunar_lander | petergriger | 2023-07-26T14:10:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T14:10:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.52 +/- 19.37
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is hypothetical; see the repo files for the actual checkpoint name
checkpoint = load_from_hub(repo_id="petergriger/ppo_lunar_lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
chinhon/pegasus-newsroom-commentaries_hdwriter | chinhon | 2023-07-26T14:08:43Z | 110 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-newsroom-commentaries_hdwriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-commentaries_hdwriter
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5316
- Rouge1: 21.4079
- Rouge2: 6.2399
- Rougel: 16.6644
- Rougelsum: 17.8501
- Gen Len: 34.4111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6327 | 1.0 | 4710 | 2.5474 | 20.9392 | 6.1702 | 16.3859 | 17.5963 | 35.6626 |
| 2.4322 | 2.0 | 9420 | 2.5198 | 21.4026 | 6.1811 | 16.5874 | 17.8207 | 34.5976 |
| 2.2703 | 3.0 | 14130 | 2.5316 | 21.4079 | 6.2399 | 16.6644 | 17.8501 | 34.4111 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
oscar-corpus/harmful-kenlms | oscar-corpus | 2023-07-26T14:06:33Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-03-09T18:02:14Z | ---
license: apache-2.0
---
These are KenLM models trained on all the content tagged as `adult` on OSCAR 22.01.
Further documentation is coming soon.
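A minimal scoring sketch, assuming the `kenlm` Python bindings (the model filename is hypothetical; check the repository's file list):
```python
import kenlm

# Hypothetical filename; pick the actual .arpa/.bin file for your language from the repo
model = kenlm.Model("adult_en.arpa.bin")

# Higher (less negative) log10 scores mean the text looks more like the training data
print(model.score("some text to score", bos=True, eos=True))
print(model.perplexity("some text to score"))
```
|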
Za88yes/Cis | Za88yes | 2023-07-26T14:05:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-26T13:44:30Z | ---
license: creativeml-openrail-m
---
|
dfomin/dqn-SpaceInvadersNoFrameskip-v4 | dfomin | 2023-07-26T14:04:04Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-23T09:32:13Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 643.50 +/- 266.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dfomin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dfomin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dfomin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
IlyaGusev/saiga_13b_ggml | IlyaGusev | 2023-07-26T14:01:35Z | 0 | 18 | null | [
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"region:us"
] | text-generation | 2023-05-15T21:11:24Z | ---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
language:
- ru
inference: false
pipeline_tag: conversational
---
Llama.cpp-compatible versions of the original [13B model](https://huggingface.co/IlyaGusev/saiga_13b_lora).
* Download one of the versions, for example `ggml-model-q4_1.bin`.
* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py).
How to run:
```
sudo apt-get install git-lfs
pip install llama-cpp-python fire
python3 interact_llamacpp.py ggml-model-q4_1.bin
```
System requirements:
* 18GB RAM for q8_0
* 13GB RAM for q4_1
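A minimal llama-cpp-python sketch, assuming a pre-GGUF release that still reads GGML files (parameters are illustrative; `interact_llamacpp.py` additionally applies the proper Saiga chat template):
```python
from llama_cpp import Llama

# Load the quantized GGML checkpoint you downloaded
llm = Llama(model_path="ggml-model-q4_1.bin", n_ctx=2000)

# "Почему трава зелёная?" means "Why is grass green?"
output = llm("Вопрос: Почему трава зелёная? Ответ:", max_tokens=128)
print(output["choices"][0]["text"])
```
|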
tatoy/llama2-qlora-finetunined-french | tatoy | 2023-07-26T13:51:39Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T13:51:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
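A minimal loading sketch; the base model is not named in this card, so the `meta-llama/Llama-2-7b-hf` id below is only a guess inferred from the repo name, and the quantization config mirrors the values above:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Hypothetical base model id; the card does not state which checkpoint was used
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tatoy/llama2-qlora-finetunined-french")
```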
|
IlyaGusev/saiga_30b_ggml | IlyaGusev | 2023-07-26T13:40:13Z | 0 | 21 | null | [
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"region:us"
] | text-generation | 2023-04-27T22:44:41Z | ---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
language:
- ru
inference: false
pipeline_tag: conversational
---
Llama.cpp-compatible version of the original [30B model](https://huggingface.co/IlyaGusev/saiga_30b_lora).
* Download `ggml-model-q4_1.bin`.
* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py).
How to run:
```
sudo apt-get install git-lfs
pip install llama-cpp-python fire
python3 interact_llamacpp.py ggml-model-q4_1.bin
```
System requirements:
* 32GB RAM
* CPU with 4 cores
|
reginaboateng/pfeiffer_Scibert_bert_adapter_ner_pico_for_classification_task | reginaboateng | 2023-07-26T13:36:42Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"adapterhub:pico_ner",
"bert",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-07-26T13:36:40Z | ---
tags:
- adapter-transformers
- adapterhub:pico_ner
- bert
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/pfeiffer_Scibert_bert_adapter_ner_pico_for_classification_task` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_Scibert_bert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
sughu/llama2-qlora-finetunined-french | sughu | 2023-07-26T13:33:52Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T13:33:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
sukiee/qlora-koalpaca-polyglot-5.8b-hotissue_v2 | sukiee | 2023-07-26T13:15:08Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-25T17:02:21Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
dhinman/Reinforce-Pixelcopter-124000 | dhinman | 2023-07-26T13:08:45Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T13:08:38Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-124000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.90 +/- 28.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mgigena/roberta-large-cuad | mgigena | 2023-07-26T12:48:33Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-26T12:38:58Z | ---
language:
- en
license: cc-by-4.0
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---
# Model Card for roberta-large-cuad
# Model Details
## Model Description
- **Developed by:** Hendrycks et al.
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [TheAtticusProject](https://github.com/TheAtticusProject/cuad)
- Associated Paper: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- Project website: [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data, Procedure, Preprocessing, etc.
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("mgigena/roberta-large-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("mgigena/roberta-large-cuad")
```
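A minimal inference sketch, assuming the standard `transformers` question-answering pipeline (the question mimics CUAD's category-question style; the contract snippet is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mgigena/roberta-large-cuad")

# Illustrative CUAD-style query over a toy contract clause
result = qa(
    question='Highlight the parts (if any) of this contract related to "Governing Law".',
    context="This Agreement shall be governed by and construed in accordance with the laws of the State of New York.",
)
print(result["answer"])
```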
</details> |
Jonathaniu/llama2-breast-cancer-7b-knowledge-epoch-5 | Jonathaniu | 2023-07-26T12:25:16Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T12:25:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
liuyt75/t5-base_prefix_tuning_sentences_75agree_3 | liuyt75 | 2023-07-26T12:21:13Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T07:52:26Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
FelixChao/vicuna-7b-instruct-ft-adapters-physics | FelixChao | 2023-07-26T12:20:04Z | 5 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T12:20:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
abukashan/code-llama | abukashan | 2023-07-26T12:13:06Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T12:12:24Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
ybelkada/opt-350m-ia3 | ybelkada | 2023-07-26T12:12:15Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T12:12:14Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Tommert25/multibertfinetuned1107 | Tommert25 | 2023-07-26T11:57:13Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-11T10:18:39Z | ---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multibertfinetuned1107
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multibertfinetuned1107
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5977
- Precision: 0.6463
- Recall: 0.6078
- F1: 0.6264
- Accuracy: 0.8835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 145 | 0.6113 | 0.6550 | 0.5854 | 0.6182 | 0.8735 |
| No log | 2.0 | 290 | 0.6457 | 0.6270 | 0.5659 | 0.5949 | 0.8705 |
| No log | 3.0 | 435 | 0.5977 | 0.6463 | 0.6078 | 0.6264 | 0.8835 |
| 0.1409 | 4.0 | 580 | 0.6095 | 0.6752 | 0.6449 | 0.6597 | 0.8865 |
| 0.1409 | 5.0 | 725 | 0.6566 | 0.6680 | 0.6380 | 0.6527 | 0.8851 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
liuyt75/t5-base_prefix_tuning_sentences_66agree_3 | liuyt75 | 2023-07-26T11:51:20Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T08:17:11Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
mgigena/roberta-base-cuad | mgigena | 2023-07-26T11:46:25Z | 123 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-26T03:47:20Z | ---
language:
- en
license: cc-by-4.0
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---
# Model Card for roberta-base-cuad
# Model Details
## Model Description
- **Developed by:** Hendrycks et al.
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [TheAtticusProject](https://github.com/TheAtticusProject/cuad)
- Associated Paper: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- Project website: [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data, Procedure, Preprocessing, etc.
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("mgigena/roberta-base-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("mgigena/roberta-base-cuad")
```
</details> |
mgigena/deberta-v2-xlarge-cuad | mgigena | 2023-07-26T11:45:37Z | 113 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-26T04:02:51Z | ---
language:
- en
license: cc-by-4.0
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- cuad
library_name: transformers
---
# Model Card for deberta-v2-xlarge-cuad
# Model Details
## Model Description
- **Developed by:** Hendrycks et al.
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** cc-by-4.0
- **Related Models:**
- **Parent Model:** DeBERTa-v2
- **Resources for more information:**
- GitHub Repo: [TheAtticusProject](https://github.com/TheAtticusProject/cuad)
- Associated Paper: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- Project website: [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data, Procedure, Preprocessing, etc.
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("mgigena/deberta-v2-xlarge-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("mgigena/deberta-v2-xlarge-cuad")
```
</details> |
karinthommen/whisper-V4-small-3 | karinthommen | 2023-07-26T11:36:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-26T07:48:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-V4-small-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-V4-small-3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
arkii/chatglm2-6b-ggml | arkii | 2023-07-26T11:30:13Z | 0 | 6 | transformers | [
"transformers",
"llama-cpp",
"gglm",
"chatglm-6b",
"en",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-14T07:08:10Z | ---
license: apache-2.0
language:
- en
- zh
library_name: transformers
tags:
- llama-cpp
- gglm
- chatglm-6b
---
# ChatGLM2-6B-GGML
- [Q4_0](chatglm2-6b-ggml.q4_0.bin)
- [Q4_1](chatglm2-6b-ggml.q4_1.bin)
- [Q8_0](chatglm2-6b-ggml.q8_0.bin)
These can be loaded and used with [chatglm.cpp](https://github.com/li-plus/chatglm.cpp).
## Integration with the LangChain framework as a CustomLLM
[chatglm_langchain.py](chatglm_langchain.py)
``` shell
python chatglm_langchain.py
Prompt: 小明的妈妈有两个孩子,一个叫大明 另外一个叫什么?
小明的妈妈有两个孩子,一个叫大明,另外一个叫小明。
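# Translation of the prompt: "Xiao Ming's mother has two children; one is called Da Ming. What is the other one called?"
# Translation of the answer: "Xiao Ming's mother has two children; one is called Da Ming, and the other is called Xiao Ming."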
```
## Usage from the shell command line
``` shell
chatglm -m ./models/chatglm2-6b-ggml.q8_0.bin -p 你好
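# "你好" means "Hello"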
```
 |
liuyt75/t5-base_prefix_tuning_sentences_50agree_5 | liuyt75 | 2023-07-26T11:22:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T09:01:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
chefkoch24/justification-cue-tagging | chefkoch24 | 2023-07-26T11:11:36Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"entity_recognition",
"en",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-01-04T14:56:34Z | ---
license: openrail
language:
- en
pipeline_tag: token-classification
tags:
- entity_recognition
widget:
- text: "Barack Hussein Obama II (born August 4, 1961) is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, he was the first African-American president of the United States.[2] Obama previously served as a U.S. senator representing Illinois from 2005 to 2008 and as an Illinois state senator from 1997 to 2004, and worked as a civil rights lawyer and university lecturer."
- text: "Donald John Trump (born June 14, 1946) is an American politician, media personality, and businessman who served as the 45th president of the United States from 2017 to 2021. Trump graduated from the University of Pennsylvania with a bachelor's degree in economics in 1968. He became president of his father's real-estate business in 1971 and renamed it the Trump Organization. He expanded its operations to building and renovating skyscrapers, hotels, casinos, and golf courses and later started side ventures, mostly by licensing his name. From 2004 to 2015, he co-produced and hosted the reality television series The Apprentice. He and his businesses have been plaintiff or defendant in more than 4,000 state and federal legal actions, including six bankruptcies."
---
A side product of my master's thesis on Automatic Short Answer Grading (ASAG).
Trained on the respective ASAG datasets with English answers to identify justification cues.
Works reasonably well for extracting spans like those in the provided examples.
No further development of this model is planned. |
ZinengTang/Udop | ZinengTang | 2023-07-26T11:06:50Z | 0 | 22 | null | [
"arxiv:2212.02623",
"license:mit",
"region:us"
] | null | 2023-01-22T01:47:26Z | ---
license: mit
---
# [Unifying Vision, Text, and Layout for Universal Document Processing (CVPR 2023 Highlight)](https://arxiv.org/pdf/2212.02623)
[Zineng Tang](https://zinengtang.github.io/),
[Ziyi Yang](https://ziyi-yang.github.io/),
[Guoxin Wang](https://www.guoxwang.com/),
[Yuwei Fang](https://www.microsoft.com/en-us/research/people/yuwfan/),
[Yang Liu](https://nlp-yang.github.io/),
[Chenguang Zhu](https://cs.stanford.edu/people/cgzhu/),
[Michael Zeng](https://www.microsoft.com/en-us/research/people/nzeng/),
[Cha Zhang](https://www.microsoft.com/en-us/research/people/chazhang/),
[Mohit Bansal](https://www.cs.unc.edu/~mbansal/)
Open Source Checklist:
- [x] Release Model (Encoder + Text decoder)
- [x] Release Most Scripts
- [ ] Vision Decoder / Weights (due to ethical considerations around fake document generation, we plan to release this functionality as an Azure API)
- [x] Demo
## Introduction
UDOP unifies vision, text, and layout through a vision-text-layout Transformer and unified generative pretraining tasks, including vision, text, layout, and mixed tasks. We show the task prompts (left) and task targets (right) for all self-supervised objectives (joint text-layout reconstruction, visual text recognition, layout modeling, and masked autoencoding) and two example supervised objectives (question answering and layout analysis).
## Install
### Set up the `python` environment
```
conda create -n UDOP python=3.8  # You can also use another environment manager.
conda activate UDOP
```
### Install other dependencies
```
pip install -r requirements.txt
```
## Run Scripts
Switch the model type with one of:
```
--model_type "UdopDual"
--model_type "UdopUnimodel"
```
### Finetuning on RVLCDIP
Download RVLCDIP first and change the dataset path accordingly.
For OCR, you might need to customize your code.
```
bash scripts/finetune_rvlcdip.sh # Finetuning on RVLCDIP
```
### Finetuning on the DUE Benchmark
Download [Duebenchmark](https://github.com/due-benchmark/baselines) and follow its procedure to preprocess the data.
The training code adapted to our framework is hosted in benchmarker; run it with:
```
bash scripts/finetune_duebenchmark.sh # Finetuning on DUE Benchmark, Switch tasks by changing path to the dataset
```
The generated outputs can be evaluated with the [Duebenchmark due_evaluator](https://github.com/due-benchmark/evaluator).
### Model Checkpoints
The model checkpoints are hosted on the [Hugging Face Hub](https://huggingface.co/ZinengTang/Udop).
## Citation
```
@article{tang2022unifying,
title={Unifying Vision, Text, and Layout for Universal Document Processing},
author={Tang, Zineng and Yang, Ziyi and Wang, Guoxin and Fang, Yuwei and Liu, Yang and Zhu, Chenguang and Zeng, Michael and Zhang, Cha and Bansal, Mohit},
journal={arXiv preprint arXiv:2212.02623},
year={2022}
}
```
## Contact
Zineng Tang ([email protected])
|
daniyal214/finetuned-blip-chest-xrays | daniyal214 | 2023-07-26T10:53:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"blip",
"image-text-to-text",
"image-captioning",
"image-to-text",
"en",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-07-25T17:09:01Z | ---
language:
- en
pipeline_tag: image-to-text
license: bsd-3-clause
tags:
- image-captioning
---
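A minimal captioning sketch, assuming the standard `transformers` image-to-text pipeline (the image path is illustrative):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="daniyal214/finetuned-blip-chest-xrays")

# Illustrative path to a chest X-ray image file
print(captioner("chest_xray.png"))
```
|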
chefkoch24/weak-ingredient-recognition-bert-base-cased-german | chefkoch24 | 2023-07-26T10:52:09Z | 131 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"recipe",
"cooking",
"entity_recognition",
"de",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-25T17:45:16Z | ---
license: openrail
language:
- de
metrics:
- f1
- accuracy
- precision
- recall
pipeline_tag: token-classification
tags:
- recipe
- cooking
- entity_recognition
widget:
- text: '500 g Pellkartoffeln, mehlig, gekocht, 375 g Quark (Magerstufe), 150 g Mehl, 65 g Zucker, 1 Prise(n) Salz, 1 Ei(er), Öl, z.B. Sonnenblumenöl zum Braten, Mehl, zum Bestäuben, Apfelmus, Zucker, zum Bestreuen Pellkartoffeln pellen und mit einer Kartoffelpresse zerdrücken. Quark, Mehl, Zucker, Salz und Ei zufügen. Alles zusammen zu einem geschmeidigen Teig verarbeiten. Der Teig darf nicht zu feucht sein und an den Händen kleben bleiben, sonst noch etwas Mehl zufügen. Der Teig darf aber auch nicht zu fest sein, er muß locker bleiben. Vom Teig werden dann handtellergroße, flache, ovale Quarkkeulchen geformt, die vorerst auf einem mit Mehl bestreutem Brett abgelegt werden. Die obere Seite der Quarkkeulchen wird noch mit etwas Mehl bestäubt. Die Quarkkeulchen im heißen Sonnenblumenöl von beiden Seiten goldbraun braten. Sie werden noch heiss mit Zucker bestreut oder mit viel Apfelmus bestrichen gegessen.'
- text: '100 g Mehl, 100 g Grieß (Hartweizengrieß), 100 ml Wasser, kaltes, 400 g Kürbisfleisch, (vornehmlich Hokkaido), 1 EL Butter, 1 kleine Zwiebel(n), Salz und Pfeffer, 60 g Parmesan, frisch gerieben, 1 Eigelb, Muskat, 50 g Butter, 8 Blätter Salbei Mehl, Grieß und Wasser zu einem geschmeidigen Teig verarbeiten und mit Klarsichtfolie eingewickelt 1 Stunde im Kühlschrank ruhen lassen. In der Zwischenzeit Kürbis putzen und in Würfel schneiden. Butter zerlassen und die gewürfelte Zwiebel darin glasig braten. Kürbiswürfel dazugeben, salzen und pfeffern und ganz weich kochen. Aber ohne Deckel - das Kürbiswasser muss verdunsten können.Der Kürbis ist perfekt, wenn eine festere Püreemasse im Topf ist. Das dauert ca. 20 Min. Danach den Parmesan und das Eigelb unterheben. Mit einem Hauch Muskatnuss abschmecken.Nudelteig ausrollen und die Ravioli füllen. In Salzwasser ca. 2-4 Min. garen. Abtropfen lassen und warm halten. Butter in einer kleinen Pfanne erhitzen und die Salbeiblätter bei milder Hitze darin braten. Mit etwas Salz und Pfeffer sowie ein bis zwei Tropfen Zitronensaft abschmecken. Über die Ravioli geben und mit einigen Parmesanspänen servieren'
---
Weakly supervised token classification model for German recipe texts based on bert-base-german-cased.
Code available: https://github.com/chefkoch24/weak-ingredient-recognition
Dataset: https://www.kaggle.com/datasets/sterby/german-recipes-dataset
Recognizes the following entities:<br>
'O': 0, <br>
'B-INGREDIENT': 1,<br>
'I-INGREDIENT': 2,<br>
'B-UNIT': 3,<br>
'I-UNIT': 4,<br>
'B-QUANTITY': 5,<br>
'I-QUANTITY': 6<br>
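For quick testing, a minimal inference sketch with the standard `transformers` token-classification pipeline (the `aggregation_strategy` choice is an assumption; it merges `B-`/`I-` pieces into whole entities):
```python
from transformers import pipeline

# Token classification for German recipe texts; "simple" aggregation
# merges B-/I- subword pieces into whole INGREDIENT/UNIT/QUANTITY spans.
ner = pipeline(
    "token-classification",
    model="chefkoch24/weak-ingredient-recognition-bert-base-cased-german",
    aggregation_strategy="simple",
)

text = "500 g Pellkartoffeln, mehlig, gekocht, 375 g Quark"
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```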
**Training:** <br>
epochs: 2<br>
optimizer: Adam<br>
learning rate: 2e-5<br>
max length: 512<br>
batch size: 8<br>
recipes: 7801<br>
The model was trained on a single GeForce RTX 2080 with 11 GB of GPU memory.
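A rough sketch of how these hyperparameters map onto a Hugging Face `Trainer` setup (dataset preparation is omitted and assumed; note that `Trainer` uses AdamW by default rather than plain Adam):
```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-german-cased", num_labels=7  # the 7 labels listed above
)

args = TrainingArguments(
    output_dir="weak-ingredient-recognition",
    num_train_epochs=2,              # epochs: 2
    learning_rate=2e-5,              # learning rate: 2e-5
    per_device_train_batch_size=8,   # batch size: 8
)

# train_dataset: the ~7801 weakly labeled recipes, tokenized at max_length=512.
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```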
**Metrics on test set (weakly supervised):** <br>
accuracy_token 0.9965656995773315<br>
f1_token 0.9965656995773315<br>
precision_token 0.9965656995773315<br>
recall_token 0.9965656995773315<br> |
turkmen/dipperTR | turkmen | 2023-07-26T10:50:36Z | 0 | 0 | null | [
"dataset:turkmen/dipperTR",
"license:openrail",
"region:us"
] | null | 2023-07-26T10:46:04Z | ---
license: openrail
datasets:
- turkmen/dipperTR
---
turkmen/dipperTR was trained on 3 samples from this dataset, with a total length of 9 min 24 s. 250 epochs were used.
|
DucHaiten/DucHaiten-AnimeFurry | DucHaiten | 2023-07-26T10:50:00Z | 4 | 2 | diffusers | [
"diffusers",
"art",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-15T11:27:02Z | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
library_name: diffusers
tags:
- art
---
Prompt: cute furry
Negative prompt: illustration, 3d, 2d, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, vaginas in breasts, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error, bad image, bad photo
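A minimal `diffusers` sketch using these prompts (step count and guidance scale below are assumptions, and note that the `(term:weight)` emphasis syntax is a WebUI convention that vanilla `diffusers` passes through literally):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaiten-AnimeFurry", torch_dtype=torch.float16
).to("cuda")

prompt = "cute furry"
negative_prompt = (
    "illustration, 3d, 2d, painting, cartoons, sketch, (worst quality:2), "
    "(low quality:2), (normal quality:2), lowres, bad anatomy, bad hands"
)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,  # assumed value
    guidance_scale=7.0,      # assumed value
).images[0]
image.save("furry.png")
```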










 |
bluepen5805/blue_pencil | bluepen5805 | 2023-07-26T10:43:19Z | 0 | 225 | null | [
"stable-diffusion",
"text-to-image",
"ja",
"en",
"license:other",
"region:us"
] | text-to-image | 2023-02-11T10:35:41Z | ---
license: other
language:
- ja
- en
tags:
- stable-diffusion
- text-to-image
---
<div class="flex justify-center">
<div class="container p-0 w-100">
<img class="mt-0 object-cover rounded-t-lg w-100"
style="height: 320px;"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/header.jpg"
width="100%"/>
<div class="flex px-4">
<div class="flex-auto">
<h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(56 189 248/var(--tw-text-opacity));">
blue_pencil
<a href="https://huggingface.co/bluepen5805/blue_pencil/blob/main/README.md" class="ml-2 inline-block">
<svg xmlns="http://www.w3.org/2000/svg" class="h-5 w-5" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" d="M3.75 3.75v4.5m0-4.5h4.5m-4.5 0L9 9M3.75 20.25v-4.5m0 4.5h4.5m-4.5 0L9 15M20.25 3.75h-4.5m4.5 0v4.5m0-4.5L15 9m5.25 11.25h-4.5m4.5 0v-4.5m0 4.5L15 15" />
</svg>
</a>
</h1>
<p class="mb-4 text-base text-neutral-600 dark:text-neutral-200">
A series of merge models that simply mixes a messy assortment of other models together.
</p>
</div>
<div class="flex gap-2" style="height: fit-content;">
<a
href="https://twitter.com/blue_pen5805"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #1da1f2">
<svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" />
</svg>
</a>
<a
href="https://discord.gg/s49mASQHkE"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #7289da">
<svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewbox="0 0 24 24">
<path d="M19.54 0c1.356 0 2.46 1.104 2.46 2.472v21.528l-2.58-2.28-1.452-1.344-1.536-1.428.636 2.22h-13.608c-1.356 0-2.46-1.104-2.46-2.472v-16.224c0-1.368 1.104-2.472 2.46-2.472h16.08zm-4.632 15.672c2.652-.084 3.672-1.824 3.672-1.824 0-3.864-1.728-6.996-1.728-6.996-1.728-1.296-3.372-1.26-3.372-1.26l-.168.192c2.04.624 2.988 1.524 2.988 1.524-1.248-.684-2.472-1.02-3.612-1.152-.864-.096-1.692-.072-2.424.024l-.204.024c-.42.036-1.44.192-2.724.756-.444.204-.708.348-.708.348s.996-.948 3.156-1.572l-.12-.144s-1.644-.036-3.372 1.26c0 0-1.728 3.132-1.728 6.996 0 0 1.008 1.74 3.66 1.824 0 0 .444-.54.804-.996-1.524-.456-2.1-1.416-2.1-1.416l.336.204.048.036.047.027.014.006.047.027c.3.168.6.3.876.408.492.192 1.08.384 1.764.516.9.168 1.956.228 3.108.012.564-.096 1.14-.264 1.74-.516.42-.156.888-.384 1.38-.708 0 0-.6.984-2.172 1.428.36.456.792.972.792.972zm-5.58-5.604c-.684 0-1.224.6-1.224 1.332 0 .732.552 1.332 1.224 1.332.684 0 1.224-.6 1.224-1.332.012-.732-.54-1.332-1.224-1.332zm4.38 0c-.684 0-1.224.6-1.224 1.332 0 .732.552 1.332 1.224 1.332.684 0 1.224-.6 1.224-1.332 0-.732-.54-1.332-1.224-1.332z" />
</svg>
</a>
<a
href="https://github.com/blue-pen5805"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #333">
<svg xmlns="http://www.w3.org/2000/svg" class="h-4 w-4" fill="currentColor" viewBox="0 0 24 24">
<path d="M12 0c-6.626 0-12 5.373-12 12 0 5.302 3.438 9.8 8.207 11.387.599.111.793-.261.793-.577v-2.234c-3.338.726-4.033-1.416-4.033-1.416-.546-1.387-1.333-1.756-1.333-1.756-1.089-.745.083-.729.083-.729 1.205.084 1.839 1.237 1.839 1.237 1.07 1.834 2.807 1.304 3.492.997.107-.775.418-1.305.762-1.604-2.665-.305-5.467-1.334-5.467-5.931 0-1.311.469-2.381 1.236-3.221-.124-.303-.535-1.524.117-3.176 0 0 1.008-.322 3.301 1.23.957-.266 1.983-.399 3.003-.404 1.02.005 2.047.138 3.006.404 2.291-1.552 3.297-1.23 3.297-1.23.653 1.653.242 2.874.118 3.176.77.84 1.235 1.911 1.235 3.221 0 4.609-2.807 5.624-5.479 5.921.43.372.823 1.102.823 2.222v3.293c0 .319.192.694.801.576 4.765-1.589 8.199-6.086 8.199-11.386 0-6.627-5.373-12-12-12z" />
</svg>
</a>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<div class="px-2">
<table class="table-auto">
<tbody>
<tr>
<td colspan="2">
<a href="#blue_pencil-EX">blue_pencil-EX (@20230726)</a>
</td>
</tr>
<tr>
<td colspan="2">
<a href="#blue_pencil-v10">blue_pencil-v10 (@20230627v2)</a>
</td>
</tr>
<tr>
<td><a href="#blue_pencil-v9b">blue_pencil-v9b (@20230602)</a></td>
<td><a href="#blue_pencil-v9">blue_pencil-v9 (@20230526v4)</a></td>
</tr>
<tr>
<td colspan="2">
<a href="#blue_pencil-v8">blue_pencil-v8 (@20230507)</a>
</td>
</tr>
<tr>
<td colspan="2">
<a href="#blue_pencil-v7">blue_pencil-v7 (@20230414)</a>
</td>
</tr>
<tr>
<td colspan="2">
<a href="#blue_pencil-v6">blue_pencil-v6 (@20230327)</a>
</td>
</tr>
<tr>
<td><a href="#blue_pencil-v5">blue_pencil-v5b (@20230314)</a></td>
<td><a href="#blue_pencil-v5">blue_pencil-v5 (@20230310)</a></td>
</tr>
<tr>
<td colspan="2">
<a href="#blue_pencil-v4">blue_pencil-v4 (@20230227)</a>
</td>
</tr>
<tr>
<td colspan="2">
<a href="#blue_pencil-v3">blue_pencil-v3 (@20230223)</a>
</td>
</tr>
<tr>
<td><a href="#blue_pencil-v2">blue_pencil-v2b (@20230219)</a></td>
<td><a href="#blue_pencil-v2">blue_pencil-v2 (@20230217)</a></td>
</tr>
<tr>
<td><a href="#blue_pencil-v1">blue_pencil-v1b (@20230212)</a></td>
<td><a href="#blue_pencil-v1">blue_pencil-v1 (@20230211)</a></td>
</tr>
</tbody>
</table>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-EX" class="mt-0 text-2xl">
<code>blue_pencil-EX</code> <small>(<code>@20230726</code>)</small>
</h3>
<div>
This is a bonus release, and I think it is quite hard to control. Please use v10 or v8 instead.
</div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v10" class="mt-0 text-2xl">
<code>blue_pencil-v10</code> <small>(<code>@20230627v2</code>)</small>
</h3>
<div>
Cute, isn't it?<br/>
The negative prompt `(worst quality, bad quality:2.0)` alone is enough (though you can also use EasyNegative and the like).
</div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v9b" class="mt-0 text-2xl">
<code>blue_pencil-v9b</code> <small>(<code>@20230602</code>)</small>
</h3>
<div>
<div>
This model is based on v9, with more sparkle added to the eyes and some further adjustments made on a whim.<br/>
I feel it gives stable output even with short prompts (and not only for girls).<br/>
I think the negative prompt `(worst quality, bad quality:2.0)` alone is sufficient.
</div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 レシピ / Recipe</h4>
<div class="px-2">
<div class="border p-2">
<details>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>Method</th>
<th>Weight</th>
<th>OUTPUT</th>
</tr>
</thead>
<tbody>
<tr>
<td>@20230526-9</td>
<td colspan="4">LoRAs(
<a href="https://civitai.com/models/81346">hohoaka</a>:-1.25,
<a href="https://civitai.com/models/81345">agomaru</a>:1.5,
<a href="https://civitai.com/models/81321">faceage</a>:0.5,
<a href="https://civitai.com/models/69207">colored-eyelashes</a>:0.3,
<a href="https://civitai.com/models/81928">fantasy world lora</a>:0.05,
<a href="https://civitai.com/models/58764">FuturisticScapes-v2</a>:0.35,
<a href="https://huggingface.co/ashen-sensored/mzpikas_tmnd_enhanced">Silicon-landscape-isolation</a>:0.35,
<a href="https://civitai.com/models/81378">sanDka1</a>:-0.5,
<a href="https://huggingface.co/ashen-sensored/lora-isolation-collection">pikas-lighting-isolation</a>:0.3,
<a href="https://civitai.com/models/81360">saturation</a>:0.15
)
</td>
<td>@20230602-LoRA</td>
</tr>
<tr>
<td>@20230602-LoRA</td>
<td><a href="https://civitai.com/models/77751">NextGenMix</a></td>
<td></td>
<td>MBW</td>
<td>
1,1,0,0,0,0,0,0,1,1,0,0,<br/>
0,<br/>
1,1,0,0,0,0,0,0,0,0,1,1<br/>
Base alpha 1</td>
<td>blue_pencil-v9b</td>
</tr>
</tbody>
</table>
</details>
</div>
</div>
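For readers unfamiliar with the operations in the recipe above: baking a LoRA adds its low-rank update directly into matching checkpoint weights (negative strengths such as `hohoaka:-1.25` subtract the learned direction), and MBW (merge block weighted) interpolates each UNet block with its own alpha. Below is a heavily simplified PyTorch sketch of both ideas; real tools such as the supermerger / MBW WebUI extensions handle key matching and many edge cases properly.
```python
import torch

def bake_lora(w, up, down, alpha):
    # LoRA baking: W' = W + alpha * (up @ down); alpha may be negative.
    return w + alpha * (up @ down)

def mbw_alpha(key, in_a, mid_a, out_a, base_a):
    # One alpha per UNet block: 12 input blocks, 1 middle block,
    # 12 output blocks; base_a covers everything else (text encoder, VAE, ...).
    for i, a in enumerate(in_a):
        if f"input_blocks.{i}." in key:
            return a
    if "middle_block." in key:
        return mid_a
    for i, a in enumerate(out_a):
        if f"output_blocks.{i}." in key:
            return a
    return base_a

def mbw_merge(A, B, in_a, mid_a, out_a, base_a):
    # Per-key linear interpolation: out = (1 - alpha) * A + alpha * B.
    merged = {}
    for key, wa in A.items():
        a = mbw_alpha(key, in_a, mid_a, out_a, base_a)
        merged[key] = (1 - a) * wa + a * B[key]
    return merged

# The v9b merge uses IN = [1,1,0,0,0,0,0,0,1,1,0,0], M = 0,
# OUT = [1,1,0,0,0,0,0,0,0,0,1,1], base alpha = 1.
```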
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://civitai.com/models/22354/clearvae">ClearVAE</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9b/1.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
cute girl and cat. cute clothes, skirt, squatting, torrential rain, umbrella
Negative prompt: (worst quality, bad quality:2.0)
Steps: 29
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1858548488
Size: 768x768
Denoising strength: 0.5
Hires upscale: 2
Hires steps: 20
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9b/2.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
ancient dragon, darkness
Negative prompt: (worst quality, bad quality:2.0)
Steps: 22
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 4099565158
Size: 768x768
Denoising strength: 0.5
Hires upscale: 2
Hires steps: 15
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9b/3.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, cute clothes, city, cyberpunk
Negative prompt: (worst quality, bad quality:2.0)
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3260935736
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
Hires upscaler: Latent (nearest-exact)
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9b/4.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
smiling old man, cigar, sunglasses, holding teddybear, (darkness:1.3)
Negative prompt: (worst quality, bad quality:2.0)
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1270347467
Size: 768x768
Denoising strength: 0.55
Hires upscale: 2
Hires upscaler: Latent (nearest-exact)
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v9" class="mt-0 text-2xl">
<code>blue_pencil-v9</code> <small>(<code>@20230526v4</code>)</small>
</h3>
<div>
<div class="font-bold">
Note: this is an experimental model. If you want a stable model, please use v8 or v7.
</div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 レシピ / Recipe</h4>
<div class="px-2">
<div class="border p-2">
<details>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>Method</th>
<th>Weight</th>
<th>OUTPUT</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://huggingface.co/Korakoe/OpenNiji-V2">OpenNiji-V2</a></td>
<td><a href="https://huggingface.co/prompthero/openjourney-v4">openjourney-v4</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference(smoothAdd)</td>
<td>1</td>
<td>@MidNijiV4V2</td>
</tr>
<tr>
<td><a href="https://huggingface.co/NoCrypt/SomethingV2">somethingv2_1</a></td>
<td><a href="https://civitai.com/models/29819">neatnessFluffyFurMix_nebula</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference(smoothAdd)</td>
<td>0.25</td>
<td>@20230526-0</td>
</tr>
<tr>
<td>@20230526-0</td>
<td><a href="https://huggingface.co/Vsukiyaki/ShiratakiMix">ShiratakiMix-fixed</a></td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.35</td>
<td>@20230526-1</td>
</tr>
<tr>
<td><a href="https://civitai.com/models/67145">defacta5rd_5rd</a></td>
<td>@MidNijiV4V2</td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference(smoothAdd)</td>
<td>0.25</td>
<td>@20230526-2</td>
</tr>
<tr>
<td>@20230526-2</td>
<td><a href="https://civitai.com/models/67120">yuzu_v10</a></td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.45</td>
<td>@20230526-3</td>
</tr>
<tr>
<td>@20230526-1</td>
<td>@20230526-3</td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.35</td>
<td>@20230526-4</td>
</tr>
<tr>
<td><a href="https://civitai.com/models/10028">neverendingDreamNED_v122BakedVae</a></td>
<td><a href="https://huggingface.co/ashen-sensored/mzpikas_tmnd_enhanced">mzpikas_tmnd_enhanced-fp16</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference(smoothAdd)</td>
<td>0.25</td>
<td>@20230526-5</td>
</tr>
<tr>
<td>@20230526-4</td>
<td>@20230526-5</td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.25</td>
<td>@20230526-6</td>
</tr>
<tr>
<td>@20230526-6</td>
<td><a href="https://civitai.com/models/47067">pikasNewGeneration_v20</a></td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.25</td>
<td>@20230526-7</td>
</tr>
<tr>
<td><a href="https://huggingface.co/SweetLuna/Aurora">AuroraONE</a></td>
<td><a href="https://huggingface.co/gsdf/Counterfeit-V3.0">Counterfeit-V3.0_fp32</a></td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.45</td>
<td>@20230526-8</td>
</tr>
<tr>
<td>@20230526-7</td>
<td>@20230526-8</td>
<td></td>
<td>Weighted sum(cosineA)</td>
<td>0.5</td>
<td>@20230526-9</td>
</tr>
<tr>
<td>@20230526-9</td>
<td colspan="4">LoRAs(
<a href="https://civitai.com/models/81346">hohoaka</a>:-1.25,
<a href="https://civitai.com/models/81345">agomaru</a>:0.5,
<a href="https://civitai.com/models/81321">faceage</a>:0.5,
<a href="https://civitai.com/models/58764">FuturisticScapes-v2</a>:0.35,
<a href="https://huggingface.co/ashen-sensored/mzpikas_tmnd_enhanced">Silicon-landscape-isolation</a>:0.3,
<a href="https://civitai.com/models/81378">sanDka1</a>:-0.5)
</td>
<td>@20230526-LoRA</td>
</tr>
<tr>
<td>@20230526-LoRA</td>
<td><a href="https://huggingface.co/ploughB660/BalorMix-V4">BalorMix-V4.3</a></td>
<td></td>
<td>MBW</td>
<td>
1,1,0,0,0,0,0,0,1,1,0,0,<br/>
0,<br/>
1,1,0,0,0,0,0,0,0,0,1,1<br/>
Base alpha 1</td>
<td>blue_pencil-v9</td>
</tr>
</tbody>
</table>
</details>
</div>
</div>
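The two base operations in the recipe above are simple tensor arithmetic applied key-by-key over the checkpoints' state dicts; `cosineA` and `smoothAdd` are community merge-extension refinements of them, so the sketch below shows only the underlying math:
```python
import torch

def weighted_sum(a, b, w):
    # "Weighted sum": linear interpolation between checkpoints A and B.
    return (1.0 - w) * a + w * b

def add_difference(a, b, c, w):
    # "Add difference": add what B learned relative to C onto A.
    # With C = v1-5-pruned, (B - C) isolates B's deviation from base SD 1.5.
    return a + w * (b - c)

# Applied per key, e.g.:
# merged = {k: add_difference(A[k], B[k], C[k], 0.25) for k in A}
```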
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://civitai.com/models/22354/clearvae">ClearVAE</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/gsdf/Counterfeit-V3.0">EasyNegativeV2</a>
</td>
</tr>
</tbody>
</table>
</div>
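If you work in `diffusers` rather than the WebUI, wiring up the recommended VAE and negative embedding might look like the sketch below (local file names are assumptions, and the single-file loaders require a reasonably recent `diffusers` release):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# ClearVAE ships as a single .safetensors file on Civitai (path assumed).
vae = AutoencoderKL.from_single_file(
    "ClearVAE.safetensors", torch_dtype=torch.float16
)

pipe = StableDiffusionPipeline.from_single_file(
    "blue_pencil-v9.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.vae = vae.to("cuda")

# EasyNegativeV2 is a textual-inversion embedding; its token can then be
# referenced from the negative prompt (file name assumed).
pipe.load_textual_inversion("EasyNegativeV2.safetensors", token="EasyNegativeV2")

image = pipe(
    "colorful girl, cute pose, cute clothes, cute room",
    negative_prompt="EasyNegativeV2",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
```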
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9/1.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
colorful girl, cute pose, cute clothes, cute room
Negative prompt: EasyNegativeV2
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3455183564
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
Hires steps: 30
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9/2.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
old man, suit, standing, moonlight
Negative prompt: EasyNegativeV2
Steps: 24
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3212346863
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9/3.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
knight, from behind, town, gradient sky, dusk, orb, scenery, fantasy, mythology
Negative prompt: EasyNegativeV2
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 2359276410
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v9/4.webp"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
mechanical dog, roar, fighting stance, apocalyptic, thunder
Negative prompt: EasyNegativeV2
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 694621969
Size: 768x768
Denoising strength: 0.5
Hires upscale: 2
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v8" class="mt-0 text-2xl">
<code>blue_pencil-v8</code> <small>(<code>@20230507</code>)</small>
</h3>
<div>
I just wanted to mix AuroraONE and Counterfeit-V3.0.<br/>
I think v7 is probably the better choice.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 レシピ / Recipe</h4>
<div class="px-2">
<div class="border p-2">
<details>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>Method</th>
<th>Weight</th>
<th>OUTPUT</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://huggingface.co/Korakoe/OpenNiji-V2">OpenNiji-V2</a></td>
<td><a href="https://huggingface.co/prompthero/openjourney-v4">openjourney-v4</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>1</td>
<td>@MidNijiV4V2</td>
</tr>
<tr>
<td><a href="https://huggingface.co/NoCrypt/SomethingV2">somethingv2_1</a></td>
<td><a href="https://civitai.com/models/59813/animal-human-hybrids">animalHumanHybrids_v10</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.25</td>
<td>@20230507-0</td>
</tr>
<tr>
<td>@20230507-0</td>
<td><a href="https://huggingface.co/Vsukiyaki/ShiratakiMix">ShiratakiMix-fixed</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.35</td>
<td>@20230507-1</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Hemlok/MandarinMix">MandarinMix-EX</a></td>
<td><a href="https://huggingface.co/eimiss/EimisAnimeDiffusion_2.0v">EimisAnimeModel_2-0</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.15</td>
<td>@20230507-2</td>
</tr>
<tr>
<td>@20230507-2</td>
<td><a href="https://civitai.com/models/41916/koji">koji_v10</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.45</td>
<td>@20230507-3</td>
</tr>
<tr>
<td>@20230507-1</td>
<td>@20230507-3</td>
<td></td>
<td>Weighted sum</td>
<td>0.3</td>
<td>@20230507-4</td>
</tr>
<tr>
<td><a href="https://huggingface.co/ashen-sensored/mzpikas_tmnd_enhanced">mzpikas_tmnd_enhanced-fp16</a></td>
<td>@MidNijiV4V2</td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.2</td>
<td>@20230507-5</td>
</tr>
<tr>
<td>@20230507-4</td>
<td>@20230507-5</td>
<td></td>
<td>Weighted sum</td>
<td>0.25</td>
<td>@20230507-6</td>
</tr>
<tr>
<td>@20230507-6</td>
<td>EmotionalBacklightsMix_alpha-fp16</td>
<td></td>
<td>Weighted sum</td>
<td>0.25</td>
<td>@20230507-7</td>
</tr>
<tr>
<td><a href="https://huggingface.co/SweetLuna/Aurora">AuroraONE</a></td>
<td><a href="https://huggingface.co/gsdf/Counterfeit-V3.0">Counterfeit-V3.0_fp32</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.45</td>
<td>@20230507-8</td>
</tr>
<tr>
<td>@20230507-7</td>
<td>@20230507-8</td>
<td></td>
<td>Weighted sum</td>
<td>0.5</td>
<td>@20230507-9</td>
</tr>
<tr>
<td>@20230507-9</td>
<td><a href="https://huggingface.co/ploughB660/BalorMix-V4">BalorMix-V4.3</a></td>
<td></td>
<td>MBW</td>
<td>
1,1,0,0,0,0,0,0,1,1,0,0,<br/>
0,<br/>
1,1,0,0,0,0,0,0,0,0,1,1<br/>
Base alpha 1</td>
<td>blue_pencil-v8</td>
</tr>
</tbody>
</table>
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://civitai.com/models/22354/clearvae">ClearVAE</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/gsdf/Counterfeit-V3.0">EasyNegativeV2</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v8/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
wakame, girl, scenery
Negative prompt: EasyNegativeV2
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1217939694
Size: 768x768
Denoising strength: 0.6
SAG Guidance Scale: 0.75
SAG Mask Threshold: 1
Hires upscale: 2
Hires upscaler: Latent (nearest-exact)
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v8/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
cats and girl, (colorful pop cute illustration:1.3), sd
Negative prompt: EasyNegativeV2, (girl:2.0)
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3694801134
Size: 768x768
Denoising strength: 0.5
SAG Guidance Scale: 0.75
SAG Mask Threshold: 1
Hires upscale: 2
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v7" class="mt-0 text-2xl">
<code>blue_pencil-v7</code> <small>(<code>@20230414</code>)</small>
</h3>
<div>
A model tuned slightly toward an anime look, so it may behave differently from earlier versions.<br/>
Incidentally, background breakage also seems to have decreased, though I am not sure which change caused that.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 レシピ / Recipe</h4>
<div class="px-2">
<div class="border p-2">
<details>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>Method</th>
<th>Weight</th>
<th>OUTPUT</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://huggingface.co/Korakoe/OpenNiji-V2">OpenNiji-V2</a></td>
<td><a href="https://huggingface.co/prompthero/openjourney-v4">openjourney-v4</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>1</td>
<td>@MidNijiV4V2</td>
</tr>
<tr>
<td><a href="https://huggingface.co/NoCrypt/SomethingV2">somethingv2_1</a></td>
<td>wd-1-4-float32-booru-110k</td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.25</td>
<td>@20230414-0</td>
</tr>
<tr>
<td><a href="https://civitai.com/models/28828/breakdro">breakdro_A694</a></td>
<td><a href="https://civitai.com/models/32456/milk-mousse">MilkMousse_v10</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.25</td>
<td>@20230414-1</td>
</tr>
<tr>
<td>@20230414-0</td>
<td><a href="https://huggingface.co/Vsukiyaki/ShiratakiMix">ShiratakiMix-fixed</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.5</td>
<td>@20230414-2</td>
</tr>
<tr>
<td>@20230414-1</td>
<td><a href="https://civitai.com/models/2583">hassakuHentaiModel_v11</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.5</td>
<td>@20230414-3</td>
</tr>
<tr>
<td>@20230414-2</td>
<td>@20230414-3</td>
<td></td>
<td>Weighted sum</td>
<td>0.45</td>
<td>@20230414-4</td>
</tr>
<tr>
<td>@MidNijiV4V2</td>
<td><a href="https://huggingface.co/Deltaadams/HD-22">HD-22</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.25</td>
<td>@20230414-5</td>
</tr>
<tr>
<td>@20230414-4</td>
<td>@20230414-5</td>
<td></td>
<td>Weighted sum</td>
<td>0.2</td>
<td>@20230414-6</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Kuyoh/EmotionalBackLightsMix">EmotionalBacklightsMix_alpha-fp16</a></td>
<td><a href="https://huggingface.co/NegiInNattoMaki/Nabylon">Nabylon-v1.3</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.4</td>
<td>@20230414-7</td>
</tr>
<tr>
<td>@20230414-6</td>
<td><a href="https://huggingface.co/Lucetepolis/TriPhaze">TriPhaze_B</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.25</td>
<td>@20230414-8</td>
</tr>
<tr>
<td>@20230414-8</td>
<td>@20230414-7</td>
<td></td>
<td>Weighted sum</td>
<td>0.35</td>
<td>@20230414-9</td>
</tr>
<tr>
<td>@20230414-9</td>
<td><a href="https://huggingface.co/ploughB660/BalorMix-V4">BalorMix-V4.3</a></td>
<td></td>
<td>MBW</td>
<td>
1,1,0,0,0,0,0,0,1,1,0,0,<br/>
0,<br/>
1,1,0,0,0,0,0,0,0,0,1,1<br/>
Base alpha 1</td>
<td>blue_pencil-v7</td>
</tr>
</tbody>
</table>
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://civitai.com/models/22354/clearvae">ClearVAE</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v7/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
anime, girl, OOTD, living room, afternoon
Negative prompt: EasyNegative
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 58901123
Size: 768x768
Denoising strength: 0.5
Hires upscale: 2
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v7/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, mage, wizard, grasslands, sunset, scenery, fantasy
Negative prompt: EasyNegative, buildings, tower, statue
Steps: 35
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 2365232986
Size: 768x768
Denoising strength: 0.5
Hires upscale: 2
Hires steps: 20
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v6" class="mt-0 text-2xl">
<code>blue_pencil-v6</code> <small>(<code>@20230327</code>)</small>
</h3>
<div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
Share merges using this model without clearly indicating where modifications have been made</br>
<a href="https://huggingface.co/Xynon/SD-Silicon#terms-of-use">Clearly indicate where modifications have been made.</a>
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 レシピ / Recipe</h4>
<div class="px-2">
<div class="border p-2">
<details>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>Method</th>
<th>Weight</th>
<th>OUTPUT</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://huggingface.co/Korakoe/OpenNiji-V2">OpenNiji-V2</a></td>
<td><a href="https://huggingface.co/prompthero/openjourney-v4">openjourney-v4</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>1</td>
<td>@MidNijiV4V2</td>
</tr>
<tr>
<td>wd-1-4-float32-booru-110k</td>
<td><a href="https://huggingface.co/haor/Evt_M">Evt_M</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.5</td>
<td>@20230327-0</td>
</tr>
<tr>
<td>@MidNijiV4V2</td>
<td><a href="https://huggingface.co/Deltaadams/HD-22">HD-22</a></td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.5</td>
<td>@20230327-1</td>
</tr>
<tr>
<td><a href="https://huggingface.co/NoCrypt/SomethingV2_2">SomethingV2_2</a></td>
<td>mechanicmixV2</td>
<td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned</a></td>
<td>Add difference</td>
<td>0.5</td>
<td>@20230327-2</td>
</tr>
<tr>
<td><a href="https://civitai.com/models/23804/defacta">defacta_6</a></td>
<td><a href="https://huggingface.co/NegiInNattoMaki/Nabylon">Nabylon-v1.3</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.5</td>
<td>@20230327-3</td>
</tr>
<tr>
<td><a href="https://civitai.com/models/2583">hassakuv1</a></td>
<td>@20230327-0</td>
<td></td>
<td>Weighted sum</td>
<td>0.5</td>
<td>@20230327-4</td>
</tr>
<tr>
<td><a href="https://civitai.com/models/14712/orion-mix">orionMixVersion2</a></td>
<td>@20230327-1</td>
<td></td>
<td>Weighted sum</td>
<td>0.5</td>
<td>@20230327-5</td>
</tr>
<tr>
<td>@20230327-4</td>
<td>@20230327-5</td>
<td></td>
<td>Weighted sum</td>
<td>0.35</td>
<td>@20230327-6</td>
</tr>
<tr>
<td>@20230327-6</td>
<td>@20230327-2</td>
<td></td>
<td>Weighted sum</td>
<td>0.15</td>
<td>@20230327-7</td>
</tr>
<tr>
<td>@20230327-7</td>
<td><a href="https://huggingface.co/Lucetepolis/TriPhaze">TriPhaze_B</a></td>
<td></td>
<td>Weighted sum</td>
<td>0.25</td>
<td>@20230327-8</td>
</tr>
<tr>
<td>@20230327-8</td>
<td>@20230327-3</td>
<td></td>
<td>Weighted sum</td>
<td>0.2</td>
<td>@20230327-9</td>
</tr>
<tr>
<td>@20230327-9</td>
<td><a href="https://huggingface.co/ploughB660/BalorMix-V4">BalorMix-V4.1</a></td>
<td></td>
<td>MBW</td>
<td>
1,1,0,0,0,0,0,0,1,1,0,0,<br/>
0,<br/>
1,1,0,0,0,0,0,0,0,0,1,1<br/>
Base alpha 1</td>
<td>blue_pencil-v6</td>
</tr>
</tbody>
</table>
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://civitai.com/models/22354/clearvae">ClearVAE</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v6/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girls, school, sunset, countryside, (silhouette:1.3)
Negative prompt: EasyNegative
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 9717043
Size: 768x768
Denoising strength: 0.7
Hires upscale: 2
Hires steps: 20
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v6/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
boy and girl, summer clothes, lakeside, landscape
Negative prompt: EasyNegative
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 676740370
Size: 768x768
Denoising strength: 0.65
Hires upscale: 2
Hires steps: 25
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v5" class="mt-0 text-2xl">
<code>blue_pencil-v5b</code> <small>(<code>@20230314</code>)</small> / <code>blue_pencil-v5</code> <small>(<code>@20230310</code>)</small>
</h3>
<div>
A model made in the hope that blending in a larger share of openjourney-v2 and OpenNiji-V2 would raise its expressiveness.<br />
Whether it actually has that effect, I cannot say.<br />
blue_pencil-v5b is a variant that block-merges <a href="https://huggingface.co/ploughB660/BalorMix-V4">BalorMix-V4.1</a> instead of <a href="https://huggingface.co/ploughB660/Balor-V3">Balor-V2.5</a>.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 マージ元モデル / Merged models</h4>
<div class="px-2">
<div class="border p-2">
<details>
* [wd-1-4-float32-booru-110k](https://huggingface.co/hakurei/waifu-diffusion-v1-4) / CreativeML OpenRAIL M
* [Evt_M](https://huggingface.co/haor/Evt_M) / CreativeML OpenRAIL M
* [Evt_V4](https://huggingface.co/haor/Evt_V4-preview) / CreativeML OpenRAIL M
* [ACertainty](https://huggingface.co/JosephusCheung/ACertainty) / CreativeML OpenRAIL M
* [HD-22](https://huggingface.co/Deltaadams/HD-22) / BigScience OpenRAIL M
* [openjourney-v2](https://huggingface.co/prompthero/openjourney-v2) / CreativeML OpenRAIL M
* [OpenNiji-V2](https://huggingface.co/Korakoe/OpenNiji-V2) / CreativeML OpenRAIL M
* [SomethingV2_1](https://huggingface.co/NoCrypt/SomethingV2) / CreativeML OpenRAIL M
* [mechanicmix_V2](https://civitai.com/models/14880/mechanicmixv2) / CreativeML OpenRAIL M
* [HighRiseMixV2](https://huggingface.co/0RisingStar0/HighRiseMixV2) / CreativeML OpenRAIL M
* [AbyssOrangeMix2_NSFW](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [AnythingV4.5](https://huggingface.co/andite/anything-v4.0) / CreativeML OpenRAIL M
* [AikimiXPv1.0](https://huggingface.co/Aikimi/AikimiX) / unknown
* [AikimixCv1.5](https://huggingface.co/Aikimi/AikimiX) / unknown
* [basil_mix_fixed](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [Counterfeit-V2.0](https://huggingface.co/gsdf/Counterfeit-V2.0) / CreativeML OpenRAIL M
* [Counterfeit-V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5) / CreativeML OpenRAIL M
* [EerieOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [Instagram](https://huggingface.co/cafeai/cafe-instagram-sd-1-5-v6) / AGPL-3.0
* F222 / ???
* [Elysium_V1](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [pastelmix-better-vae](https://huggingface.co/andite/pastel-mix) / Modified CreativeML OpenRAIL M
* [powercolor.v2](https://civitai.com/models/6167/powercolorv2) / CreativeML OpenRAIL M
* [Nabylon-v1.3](https://huggingface.co/NegiInNattoMaki/Nabylon) / CreativeML OpenRAIL M
* [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [LonganMix](https://huggingface.co/Hemlok/LonganMix) / other
* [7th_Layer](https://huggingface.co/syaimu/7th_Layer) / ???
* and others
* [grapefruitv4](https://civitai.com/models/2583) / Modified CreativeML OpenRAIL M
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* [Elysium_V2](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* [AbyssOrangeMix](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [Instagram](https://huggingface.co/cafeai/cafe-instagram-sd-1-5-v6) / AGPL-3.0
* F222 / ???
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* and LoRAs
* ❌ Run on services that generate images for money
* ❌ Sell this model or merges using this model
* [Orion-Mix](https://civitai.com/models/14712/orion-mix) / CreativeML OpenRAIL M
* [Cetus-Mix](https://civitai.com/models/6755/cetus-mix) / CreativeML OpenRAIL M
* [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [pastel-mix](https://huggingface.co/andite/pastel-mix) / Modified CreativeML OpenRAIL M
* [Andromeda-Mix](https://civitai.com/models/6408/andromeda-mix) / CreativeML OpenRAIL M
* [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [pastel-mix](https://huggingface.co/andite/pastel-mix) / Modified CreativeML OpenRAIL M
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze) / CreativeML OpenRAIL M
* [ultracolor.v4](https://civitai.com/models/5777/ultracolorv4) / CreativeML OpenRAIL M
* [Counterfeit-V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5) / CreativeML OpenRAIL M
* [Treebark](https://huggingface.co/HIZ/aichan_pick) / ???
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [SXD](https://civitai.com/models/1169/sxd) / CreativeML OpenRAIL M
* [Balor-V2.5](https://huggingface.co/ploughB660/Balor-V3) / CreativeML OpenRAIL M
* [Elysium_V2](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* [ultracolor.v4](https://civitai.com/models/5777/ultracolorv4) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [ACertainThing](https://huggingface.co/JosephusCheung/ACertainThing) / CreativeML OpenRAIL M
* [BalorMix-V4.1](https://huggingface.co/ploughB660/BalorMix-V4) / CreativeML OpenRAIL M
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original">vae-ft-mse-840000-ema-pruned</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-2" style="width: 100%;">blue_pencil-v5b</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v5b/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
cute girl, garden, sunset
Negative prompt: EasyNegative
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3384784851
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
Hires upscaler: Latent (nearest-exact)
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v5b/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
(robot:1.1), abandoned city, scenery
Negative prompt: EasyNegative
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1038840067
Size: 768x768
Denoising strength: 0.5
Hires upscale: 2
Hires upscaler: SwinIR_4x
</pre>
</div>
</div>
<div class="p-1 flex-2" style="width: 100%;">blue_pencil-v5</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v5/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
animal girl, jungle, scenery
Negative prompt: EasyNegative, animal tail
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1859865872
Size: 768x768
Denoising strength: 0.55
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v5/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
robotic girl, laboratory, indoors
Negative prompt: EasyNegative
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3600640974
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v4" class="mt-0 text-2xl">
<code>blue_pencil-v4</code> <small>(<code>@20230227</code>)</small>
</h3>
<div>
混ぜ方を微調整したモデルです。<br />
A model whose merge recipe has been slightly fine-tuned.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 マージ元モデル / Merged models</h4>
<div class="px-2">
<div class="border p-2">
<details>
* [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion-v1-4) / CreativeML OpenRAIL M
* [Evt_M](https://huggingface.co/haor/Evt_M) / CreativeML OpenRAIL M
* [Evt_V4](https://huggingface.co/haor/Evt_V4-preview) / CreativeML OpenRAIL M
* [ACertainty](https://huggingface.co/JosephusCheung/ACertainty) / CreativeML OpenRAIL M
* [Hentai Diffusion](https://www.cognitionai.org/hdhowtogetstarted) / BigScience OpenRAIL M
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* [AniReal](https://huggingface.co/Hosioka/AniReal) / CreativeML OpenRAIL M
* [WhiteSpace Prism](https://civitai.com/models/12933/whitespace-prism) / CreativeML OpenRAIL M
* [pastel-mix](https://huggingface.co/andite/pastel-mix) / Modified CreativeML OpenRAIL M
* [dpepmkmp](https://huggingface.co/closertodeath/dpepmkmp) / CreativeML OpenRAIL M
* [Tea](https://huggingface.co/andite/desserts) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* LoRAs
* Magic LORA / ???
* Jordan_3 / ???
* [sttabi_v1.4-04](https://huggingface.co/dolphinz/stlora) / CC BY-NC 4.0
* [xlimo768](https://huggingface.co/closertodeath/ctdlora) / CC BY-NC 2.0
* [dpep 2 768](https://huggingface.co/closertodeath/ctdlora) / CC BY-NC 2.0
* ❌ Run on services that generate images for money
* ❌ Sell this model or merges using this model
* [HighRiseMixV2](https://huggingface.co/0RisingStar0/HighRiseMixV2) / CreativeML OpenRAIL M
* [AbyssOrangeMix2_NSFW](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [AnythingV4.5](https://huggingface.co/andite/anything-v4.0) / CreativeML OpenRAIL M
* [AikimiXPv1.0](https://huggingface.co/Aikimi/AikimiX) / unknown
* [AikimixCv1.5](https://huggingface.co/Aikimi/AikimiX) / unknown
* [basil_mix_fixed](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [Counterfeit-V2.0](https://huggingface.co/gsdf/Counterfeit-V2.0) / CreativeML OpenRAIL M
* [Counterfeit-V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5) / CreativeML OpenRAIL M
* [EerieOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [Instagram](https://huggingface.co/cafeai/cafe-instagram-sd-1-5-v6) / AGPL-3.0
* F222 / ???
* [Elysium_V1](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [pastelmix-better-vae](https://huggingface.co/andite/pastel-mix) / Modified CreativeML OpenRAIL M
* [powercolor.v2](https://civitai.com/models/6167/powercolorv2) / CreativeML OpenRAIL M
* [Nabylon-v1.3](https://huggingface.co/NegiInNattoMaki/Nabylon) / CreativeML OpenRAIL M
* [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [LonganMix](https://huggingface.co/Hemlok/LonganMix) / other
* [7th_Layer](https://huggingface.co/syaimu/7th_Layer) / ???
* and others
* [grapefruitv4](https://civitai.com/models/2583) / Modified CreativeML OpenRAIL M
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* [Elysium_V2](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* [AbyssOrangeMix](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [Instagram](https://huggingface.co/cafeai/cafe-instagram-sd-1-5-v6) / AGPL-3.0
* F222 / ???
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* and LoRAs
* ❌ Run on services that generate images for money
* ❌ Sell this model or merges using this model
* [VaLJ-fp32](https://huggingface.co/Hemlok/VaLMix) / CreativeML OpenRAIL M
* [pastel-mix](https://huggingface.co/andite/pastel-mix) / Modified CreativeML OpenRAIL M
* [ACertainThing](https://huggingface.co/JosephusCheung/ACertainThing) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [Counterfeit-V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5) / CreativeML OpenRAIL M
* [openjourney](https://huggingface.co/prompthero/openjourney) / CreativeML OpenRAIL M
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze) / CreativeML OpenRAIL M
* [ultracolor.v4](https://civitai.com/models/5777/ultracolorv4) / CreativeML OpenRAIL M
* [Counterfeit-V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5) / CreativeML OpenRAIL M
* [Treebark](https://huggingface.co/HIZ/aichan_pick) / ???
* [AnythingV3.0](https://huggingface.co/Linaqruf/anything-v3.0) / CreativeML OpenRAIL M
* NovelAI / ???
* Gape60 / ???
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [SXD](https://civitai.com/models/1169/sxd) / CreativeML OpenRAIL M
* [Balor-V2.5](https://huggingface.co/ploughB660/Balor-V3) / CreativeML OpenRAIL M
* [Elysium_V2](https://huggingface.co/hesw23168/SD-Elysium-Model) / OpenRAIL M
* [ultracolor.v4](https://civitai.com/models/5777/ultracolorv4) / CreativeML OpenRAIL M
* [basil_mix](https://huggingface.co/nuigurumi/basil_mix) / OpenRAIL M
* [ACertainThing](https://huggingface.co/JosephusCheung/ACertainThing) / CreativeML OpenRAIL M
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original">vae-ft-mse-840000-ema-pruned</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v4/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, kyoto, scenery
Negative prompt: EasyNegative
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 2236433764
Size: 768x768
Denoising strength: 0.65
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v4/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, wasteland, scenery
Negative prompt: EasyNegative
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 252972019
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v3" class="mt-0 text-2xl">
<code>blue_pencil-v3</code> <small>(<code>@20230223</code>)</small>
</h3>
<div>
blue_pencil-v2 を微妙に変更したモデルです。<br />
雰囲気は変わってませんが若干構図が変化すると思います。<br />
A slightly tweaked version of blue_pencil-v2; the overall feel is unchanged, but compositions should shift a little.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base text-bold" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 マージ元モデル / Merged models</h4>
<div class="px-2">
<div class="border p-2">
<details>
* [HighRiseMixV2](https://huggingface.co/0RisingStar0/HighRiseMixV2)
* AbyssOrangeMix2_NSFW
* AnythingV4.5
* AikimiXPv1.0
* AikimixCv1.5
* BasilMixFixed
* Counterfeit V2.0
* CounterfeitV2.5
* EerieOrangeMix2
* pastelmix-better-vae
* PowercolorV2
* [Evt_M](https://huggingface.co/haor/Evt_M)
* Evt_V4
* ACertainty
* [GrapefuitV4](https://huggingface.co/iZELX1/Grapefruit)
* AnythingV3
* ElysiumV2
* AbyssOrangeMix
* AbyssOrangeMix2
* basilMix
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model)
* [VaLJMix](https://huggingface.co/Hemlok/VaLMix)
* pastel-mix
* ACertainThing
* basil_mix
* Counterfeit-V2.5
* openjourney
* [HD-22](https://www.cognitionai.org/hdhowtogetstarted)
* [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test)
* [AniReal](https://huggingface.co/Hosioka/AniReal)
* [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom)
* [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0)
* AbyssOrangeMix2
* LonganMix
* and more
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze)
* ultracolor.v4
* Counterfeit-V2.5
* Treebark
* [Balor-V2.5](https://huggingface.co/ploughB660/Balor-V3)
* Elysium_anime_V
* Ultracolor4
* basil mix
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original">vae-ft-mse-840000-ema-pruned</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v3/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1874281989
Size: 768x768
Denoising strength: 0.65
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v3/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, bed room, cute clothes
Negative prompt: EasyNegative
Steps: 50
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 2666530891
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v2" class="mt-0 text-2xl">
<code>blue_pencil-v2b</code> <small>(<code>@20230219</code>)</small> / <code>blue_pencil-v2</code> <small>(<code>@20230217</code>)</small>
</h3>
<div>
<a href="https://huggingface.co/WarriorMama777/OrangeMixs">AbyssOrangeMix3A1</a>をベースに配合しなおしたモデルです。<br />
A model re-mixed on top of <a href="https://huggingface.co/WarriorMama777/OrangeMixs">AbyssOrangeMix3A1</a>.<br />
blue_pencil-v2b は Balor-V2 の代わりに Balor-V3 を階層マージしたモデルです。<br />
blue_pencil-v2b is the same model with Balor-V3 block-merged in place of Balor-V2.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base text-bold" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 マージ元モデル / Merged models</h4>
<div class="px-2">
<div class="border p-2">
<details>
* [AbyssOrangeMix3A1](https://huggingface.co/WarriorMama777/OrangeMixs)
* AnythingV3.0
* ChilloutMix
* GAPE60
* Counterfeit2.5
* Kenshi
* [Evt_M](https://huggingface.co/haor/Evt_M)
* Evt_V4
* ACertainty
* [GingerMixR](https://huggingface.co/Hemlok/GingerMix)
* LimeMixV2
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model)
* [VaLJMix](https://huggingface.co/Hemlok/VaLMix)
* pastel-mix
* ACertainThing
* basil_mix
* Counterfeit-V2.5
* openjourney
* [HD-22](https://www.cognitionai.org/hdhowtogetstarted)
* [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test)
* [AniReal](https://huggingface.co/Hosioka/AniReal)
* [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom)
* [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0)
* AbyssOrangeMix2
* LonganMix
* and more
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze)
* ultracolor.v4
* Counterfeit-V2.5
* Treebark
* [Balor-V2](https://huggingface.co/ploughB660/Balor-V2)
* [Balor-V3](https://huggingface.co/ploughB660/Balor-V3)
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original">vae-ft-mse-840000-ema-pruned</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-2" style="width: 100%;">blue_pencil-v2b</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v2b/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, berlin, scenery
Negative prompt: EasyNegative
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 3164975857
Size: 768x768
Denoising strength: 0.65
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-2" style="width: 100%;">blue_pencil-v2</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v2/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 205537258
Size: 768x768
Denoising strength: 0.65
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v2/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, spacesuit, beautiful earth, scenery, on the moon
Negative prompt: EasyNegative
Steps: 50
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 1069444343
Size: 960x640
Denoising strength: 0.6
Hires upscale: 2
</pre>
</div>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="blue_pencil-v1" class="mt-0 text-2xl">
<code>blue_pencil-v1b</code> <small>(<code>@20230212</code>)</small> / <code>blue_pencil-v1</code> <small>(<code>@20230211</code>)</small>
</h3>
<div>
blue_pencil-v1b は<a href="https://civitai.com/models/4758/amalgammix">Amalgam_Mix</a>の代わりに<a href="https://huggingface.co/ploughB660/Balor-V2">Balor-V2</a>を階層マージしたモデルです。<br />
blue_pencil-v1b is the same model with <a href="https://huggingface.co/ploughB660/Balor-V2">Balor-V2</a> block-merged in place of <a href="https://civitai.com/models/4758/amalgammix">Amalgam_Mix</a>.
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base text-bold" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>🌱 マージ元モデル / Merged models</h4>
<div class="px-2">
<div class="border p-2">
<details>
* [Defmix-v1.1](https://huggingface.co/Defpoint/Defmix-v1.0)
* Counterfeit v1.0
* Counterfeit v2.0
* Basil Mix
* Anything v4.0
* [PastelRainier](https://huggingface.co/Hemlok/RainierMix)
* ACertainThing
* Anything-V4.5
* Counterfeit-V2.0
* Evt_V4-preview
* basil_mix
* pastel-mix
* [GingerMixR](https://huggingface.co/Hemlok/GingerMix)
* LimeMixV2
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model)
* [SukiyakiMix-v1.0](https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0)
* pastel-mix
* AbyssOrangeMix2
* [HD-20](https://www.cognitionai.org/hdhowtogetstarted)
* [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test)
* [AniReal](https://huggingface.co/Hosioka/AniReal)
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze)
* ultracolor.v4
* Counterfeit-V2.5
* Treebark
* [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0)
* AbyssOrangeMix2
* LonganMix
* and more
* [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom)
* [Amalgam_Mix](https://civitai.com/models/4758/amalgammix)
* [Balor-V2](https://huggingface.co/ploughB660/Balor-V2)
</details>
</div>
</div>
<h4>🔧 推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://huggingface.co/stabilityai/sd-vae-ft-mse-original">vae-ft-mse-840000-ema-pruned</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
</td>
</tr>
</tbody>
</table>
</div>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-2" style="width: 100%;">blue_pencil-v1b</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v1b/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 205537258
Size: 768x768
Denoising strength: 0.65
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-2" style="width: 100%;">blue_pencil-v1</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v1/1-2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 2526423076
Size: 768x768
Denoising strength: 0.6
Hires upscale: 2
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/bluepen5805/blue_pencil/resolve/main/images/blue_pencil-v1/2-2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
girl, early teen, kimono, sakura, particles
Negative prompt: EasyNegative
Steps: 20
Sampler: DPM++ SDE Karras
CFG scale: 7.5
Seed: 4036639388
Size: 512x768
Denoising strength: 0.62
Hires upscale: 2
</pre>
</div>
</div>
</div>
</div>
</div>
|
bwilkie/distilhubert-finetuned-gtzan-finetuned-gtzan | bwilkie | 2023-07-26T10:19:48Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:NicolasDenier/distilhubert-finetuned-gtzan",
"base_model:finetune:NicolasDenier/distilhubert-finetuned-gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-25T14:19:34Z | ---
license: apache-2.0
base_model: NicolasDenier/distilhubert-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: bwilkie-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bwilkie-finetuned-gtzan
This model is a fine-tuned version of [NicolasDenier/distilhubert-finetuned-gtzan](https://huggingface.co/NicolasDenier/distilhubert-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3956
- Accuracy: 0.89
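A minimal inference sketch (an illustration, not part of the original card; the audio file path is an assumption):

```python
from transformers import pipeline

# Music-genre classification with the fine-tuned checkpoint
classifier = pipeline(
    "audio-classification",
    model="bwilkie/distilhubert-finetuned-gtzan-finetuned-gtzan",
)
preds = classifier("song.wav")  # path to a local audio clip (assumption)
print(preds[0])  # top prediction, e.g. {'label': 'rock', 'score': ...}
```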
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
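For illustration, the configuration above maps roughly onto the following `TrainingArguments` (a reconstruction, not the original training script; `output_dir` is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bwilkie-finetuned-gtzan",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # 4 x 4 = effective batch size 16
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
    seed=42,  # Adam betas/epsilon are left at their defaults, matching the card
)
```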
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2299 | 1.0 | 56 | 0.4532 | 0.86 |
| 0.1206 | 1.99 | 112 | 0.3956 | 0.89 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
MagicLEMP/LexiBurger | MagicLEMP | 2023-07-26T10:19:36Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-26T10:18:46Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rlasseri/burger_jurilibre_v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rlasseri/burger_jurilibre_v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
nikbhi/q-FrozenLake-v1-4x4-noSlippery_v1 | nikbhi | 2023-07-26T10:14:29Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T10:14:27Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the course environments use the Gymnasium API

# `load_from_hub` is the small helper defined in the Hugging Face Deep RL course
# (it downloads and unpickles the model file from the Hub).
model = load_from_hub(repo_id="nikbhi/q-FrozenLake-v1-4x4-noSlippery_v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
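To act with the loaded table, a short rollout sketch (assumes the pickle stores the Q-table under the `"qtable"` key, as in the Deep RL course notebooks):

```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```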
|
Beams24/lalu | Beams24 | 2023-07-26T10:06:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-26T10:01:44Z | ---
license: creativeml-openrail-m
---
|
xiaoming-leza/wav2vec2-common_voice-tr-demo | xiaoming-leza | 2023-07-26T09:52:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-26T03:39:06Z | ---
language:
- tr
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-common_voice-tr-demo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: COMMON_VOICE - TR
type: common_voice
config: tr
split: test
args: 'Config: tr, Training split: train+validation, Eval split: test'
metrics:
- name: Wer
type: wer
value: 0.3428658972525789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3709
- Wer: 0.3429
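A minimal usage sketch (an illustration, not part of the original card; the audio file path is an assumption):

```python
from transformers import pipeline

# Turkish speech-to-text with the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="xiaoming-leza/wav2vec2-common_voice-tr-demo",
)
print(asr("speech_tr.wav")["text"])  # local audio file (assumption)
```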
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5988 | 1.0 |
| No log | 1.83 | 200 | 3.0083 | 0.9999 |
| No log | 2.75 | 300 | 0.8642 | 0.7579 |
| No log | 3.67 | 400 | 0.5713 | 0.6203 |
| 3.14 | 4.59 | 500 | 0.4795 | 0.5338 |
| 3.14 | 5.5 | 600 | 0.4441 | 0.4912 |
| 3.14 | 6.42 | 700 | 0.4241 | 0.4521 |
| 3.14 | 7.34 | 800 | 0.4326 | 0.4611 |
| 3.14 | 8.26 | 900 | 0.3913 | 0.4212 |
| 0.2183 | 9.17 | 1000 | 0.4036 | 0.3973 |
| 0.2183 | 10.09 | 1100 | 0.4035 | 0.3959 |
| 0.2183 | 11.01 | 1200 | 0.3807 | 0.3790 |
| 0.2183 | 11.93 | 1300 | 0.3750 | 0.3650 |
| 0.2183 | 12.84 | 1400 | 0.3822 | 0.3573 |
| 0.1011 | 13.76 | 1500 | 0.3747 | 0.3510 |
| 0.1011 | 14.68 | 1600 | 0.3714 | 0.3454 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
MMG/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac | MMG | 2023-07-26T09:39:16Z | 178 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:sqac",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2
This model is a fine-tuned version of [mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://huggingface.co/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es) on the sqac dataset.
It achieves the following results on the evaluation set:
- Exact match: 65.02145922746782
- F1: 81.6651482773275
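A minimal usage sketch (an illustration, not part of the original card):

```python
from transformers import pipeline

# Spanish extractive question answering
qa = pipeline(
    "question-answering",
    model="MMG/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac",
)
result = qa(
    question="¿Dónde vive Ana?",
    context="Me llamo Ana y vivo en Madrid.",
)
print(result["answer"])  # e.g. "Madrid"
```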
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9417 | 1.0 | 1277 | 0.7903 |
| 0.5002 | 2.0 | 2554 | 0.8459 |
| 0.2895 | 3.0 | 3831 | 0.9482 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
RicardoLee/Llama2-base-7B-Chinese-50W-LoRA | RicardoLee | 2023-07-26T09:36:15Z | 7 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"llama2-base",
"llama2-base-7B",
"zh",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-25T14:30:39Z | ---
language:
- zh
- en
tags:
- llama2
- llama2-base
- llama2-base-7B
---
# 7B Chinese Chatbot trained based on LLama2-base 7B (Pure LoRA Training)
## Introduction
在完成了[Llama2-chat 7B Chinese](https://huggingface.co/RicardoLee/Llama2-chat-Chinese-50W) 和 [Llama2-chat 13B Chinese](https://huggingface.co/RicardoLee/Llama2-chat-13B-Chinese-50W) 的训练后,我非常好奇能否直接基于Llama2-base 系列进行SFT训练。这也是本模型仓库的初衷。
终于,在[RicardoLee/Llama2-base-7B-Chinese-50W-pre\_release](https://huggingface.co/RicardoLee/Llama2-base-7B-Chinese-50W-pre_release),[RicardoLee/Llama2-base-7B-Chinese-50W-Full2LoRA](https://huggingface.co/RicardoLee/Llama2-base-7B-Chinese-50W-Full2LoRA) 之后,我成功探索出了能稳定训练LoRA的参数,并最终完成了50W 数据的LoRA 训练。
训练数据使用[BELLE](https://huggingface.co/BelleGroup)项目中采样的50万SFT数据进行SFT训练。
After finishing the training of [Llama2-chat 7B Chinese](https://huggingface.co/RicardoLee/Llama2-chat-Chinese-50W) and [Llama2-chat 13B Chinese](https://huggingface.co/RicardoLee/Llama2-chat-13B-Chinese-50W), I am deeply intrigued by the possibility of conducting SFT (Supervised Fine-Tuning) training directly based on the Llama2-base series. This is the fundamental purpose of this model repository.
Finally, after [RicardoLee/Llama2-base-7B-Chinese-50W-pre\_release](https://huggingface.co/RicardoLee/Llama2-base-7B-Chinese-50W-pre_release) and [RicardoLee/Llama2-base-7B-Chinese-50W-Full2LoRA](https://huggingface.co/RicardoLee/Llama2-base-7B-Chinese-50W-Full2LoRA), I did find the right hyperparameters to train LoRA stably on the Llama2-base 7B model. For more details, please refer to the Train Detail section.
The training data is sampled from [BELLE](https://huggingface.co/BelleGroup) project, which consists of 500,000 SFT samples.
## Train Detail
一些训练上的细节:
1. 训练框架:该模型使用了修改过的[Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)项目进行训练。
2. Tokenizer:该模型使用了Chinese-Alpaca-Plus模型的tokenizer.model。这是因为LLama2本身的tokenizer.model同LLama1是一模一样的。因此理论上可以完全复用Chinese-LLaMa项目的tokenizer而不会产生任何错位问题。
3. 训练参数:**该模型训练使用的超参数为:LoRA rank: 64, LR: 4e-4, Warmup ratio: 0.001.**
4. 训练资源:8卡V100。21小时
5. 训练起始的loss:9.1402
6. 训练终止的loss:1.4104
Some details in training:
1. Trianing Framework: This model is trained on modified [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) Framework.
2. Tokenizer: This model utilizes the tokenizer.model from the Chinese-Alpaca-Plus model. The reason for this choice is that the tokenizer.model in LLama2 is identical to the one used in LLama1. As a result, it is theoretically feasible to entirely reuse the tokenizer from the Chinese-LLaMa project without encountering any issues related to token misalignment.
3. Training Parameters: **The hyperparams are: LoRA rank: 64, LR: 4e-4, Warmup ratio: 0.001.**
4. Training Resource: 8\*V100, 21 hours.
5. Initial Loss: 9.1402
6. Train Loss: 1.4104
## Inference
该模型依然采用stanford alpaca 模版。因此在测试时且别忘记添加开场白。开场白如下:
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\n${Your Content}\n\n### Response:\n\n"
对于带上文的对话,开场白如下:
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\nHuman:${Previous Human Content}\nAssistant:${Previous Assistance Content}\nHuman:${Your Question}\n\n### Response:\n\n"
This model still uses the Stanford Alpaca template. Therefore, don't forget to add the prologue template. The prologue template is:
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\n${Your Content}\n\n### Response:\n\n"
For dialogue with context, the prologue template is:
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n\nHuman:${Previous Human Content}\nAssistant:${Previous Machine Content}\nHuman:${Your Question}\n\n### Response:\n\n"
## Licence
本仓库的模型依照 Apache-2.0 协议开源,模型的权重的使用则需要遵循LLama2 [MODEL LICENCE](LICENSE)。
This repository's models are open-sourced under the Apache-2.0 license, and their weight usage must adhere to the LLama2 [MODEL LICENCE](LICENSE).
## Future Work
将会在近期逐步放出
1. 更大SFT数据规模训练下的模型。
2. 13B及以下的LLama2 同LLama2-chat的模型,以供大家对比。
I will release the following models:
1. Models trained on larger data scale.
2. Models trained on LLama2 and LLama2-chat (under the 13B, since I only have V100), for comparison.
|
NEO946B/PPO-Huggy | NEO946B | 2023-07-26T09:36:15Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-26T09:35:45Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: NEO946B/PPO-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
molinari135/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | molinari135 | 2023-07-26T09:34:03Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-26T09:16:47Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5243
- Accuracy: 0.87
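A minimal inference sketch using the `transformers` pipeline (the audio path below is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="molinari135/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
# Any music clip works; GTZAN excerpts are 30 s long.
print(classifier("my_song.wav"))
```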
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7406 | 1.0 | 56 | 1.0012 | 0.66 |
| 0.3306 | 1.99 | 112 | 0.4705 | 0.83 |
| 0.2461 | 2.99 | 168 | 0.5012 | 0.83 |
| 0.0756 | 4.0 | 225 | 0.5697 | 0.84 |
| 0.1149 | 5.0 | 281 | 0.5627 | 0.87 |
| 0.0012 | 5.99 | 337 | 0.6342 | 0.84 |
| 0.0007 | 6.99 | 393 | 0.4624 | 0.89 |
| 0.0005 | 8.0 | 450 | 0.6121 | 0.87 |
| 0.0275 | 9.0 | 506 | 0.5096 | 0.87 |
| 0.0003 | 9.96 | 560 | 0.5243 | 0.87 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
spike-spiegel/dqn-SpaceInvadersNoFrameskip-v4 | spike-spiegel | 2023-07-26T09:32:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T09:32:10Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 535.00 +/- 136.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga spike-spiegel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga spike-spiegel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga spike-spiegel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
justinhoang/a2c-PandaReachDense-v2 | justinhoang | 2023-07-26T09:04:24Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T09:02:36Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.50 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is an assumption; check the repository's files for the actual .zip name.
checkpoint = load_from_hub(repo_id="justinhoang/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
TheBloke/Llama-2-7B-Chat-fp16 | TheBloke | 2023-07-26T08:27:22Z | 3,241 | 31 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-26T08:21:50Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
arxiv: 2307.09288
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
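For reference, a sketch of the single-turn prompt these tags produce (the strings are illustrative; see the linked `chat_completion` reference for the canonical implementation):
```python
# Single-turn Llama-2-Chat prompt; BOS/EOS tokens are added by the tokenizer.
system_prompt = "You are a helpful, respectful and honest assistant."
user_message = "What is the capital of France?"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
```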
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
goodakdali/qLoRA_50Step | goodakdali | 2023-07-26T08:19:09Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T08:19:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
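For reference, a sketch of the equivalent `BitsAndBytesConfig`, reconstructed from the values above:
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the quantization values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```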
### Framework versions
- PEFT 0.5.0.dev0
|
adi0308/teeth-color-from-image | adi0308 | 2023-07-26T08:18:48Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-26T08:18:11Z | ---
license: creativeml-openrail-m
--- |
cnenkidu/xlm-roberta-base-finetuned-panx-de | cnenkidu | 2023-07-26T08:17:01Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-26T08:07:44Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8616659101225601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1329
- F1: 0.8617
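A minimal inference sketch using the `transformers` pipeline (the sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="cnenkidu/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```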
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2568 | 1.0 | 525 | 0.1583 | 0.8125 |
| 0.1261 | 2.0 | 1050 | 0.1458 | 0.8473 |
| 0.0823 | 3.0 | 1575 | 0.1329 | 0.8617 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
jamesdborin/ct2-int8-llama-2-7b-chat | jamesdborin | 2023-07-26T08:14:57Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-25T19:52:26Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
s3nh/kw-cutegpt-13b-ift-GGML | s3nh | 2023-07-26T08:00:45Z | 0 | 0 | null | [
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | 2023-07-26T07:46:38Z | ---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/XuYipei/kw-cutegpt-13b-ift).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders; point them at the downloaded GGML file.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch
def generate_prompt(query, history, input=None):
    prompt = ""
    for old_query, response in history:
        prompt += "{}{}\n<end>".format(old_query, response)
    prompt += "{}".format(query)
    return prompt

# Load model
device = torch.device("cuda:0")
model_name = "/data/dell/xuyipei/my_llama/my_llama_13b/llama_13b_112_sft_v1"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16
)
model.eval()
model = model.to(device)

# Inference
history = []
queries = ['请推荐五本名著,依次列出作品名、作者\n', '请再来三本\n']
memory_limit = 3  # the number of (query, response) pairs to remember
for query in queries:
    # Assemble the prompt from the conversation history plus the current query
    prompt = generate_prompt(query, history)
    input_ids = tokenizer(prompt, return_tensors="pt", padding=False, truncation=False, add_special_tokens=False)
    input_ids = input_ids["input_ids"].to(device)

    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            top_p=0.8,
            top_k=50,
            repetition_penalty=1.1,
            max_new_tokens=256,
            early_stopping=True,
            eos_token_id=tokenizer.convert_tokens_to_ids('<end>'),
            pad_token_id=tokenizer.eos_token_id,
            min_length=input_ids.shape[1] + 1
        )
    s = outputs[0]
    response = tokenizer.decode(s)
    response = response.replace('<s>', '').replace('<end>', '').replace('</s>', '')
    print(response)

    # Keep only the most recent turns
    history.append((query, response))
    history = history[-memory_limit:]
``` |
s3nh/TinyLLama-v0-GGML | s3nh | 2023-07-26T07:54:44Z | 0 | 3 | null | [
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | 2023-07-26T07:52:30Z | ---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/Maykeye/TinyLLama-v0/tree/main).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders; point them at the downloaded GGML file.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
This is a first version of recreating roneneldan/TinyStories-1M, but using the Llama architecture.
* The full training process is included in the notebook train.ipynb. Recreating it is as simple as downloading
TinyStoriesV2-GPT4-train.txt and TinyStoriesV2-GPT4-valid.txt into the same folder as the notebook and running
the cells. The validation content is not used by the script, so you can put anything in it.
* The backup directory has a script, do_backup, that I used to copy weights from the remote machine to my local one.
Weights were generated too quickly, so by the time the script had copied weight N, weight N+1 was already available.
* This is an extremely proof-of-concept version. Training truncates stories that are longer than the context size and doesn't use
any sliding window, so a story is only ever trained from its start.
* Training took approximately 9 hours (3 hours per epoch) on a 40GB A100; ~30GB of VRAM was used.
* I use the tokenizer from open_llama_3b. However, I had trouble with it locally (https://github.com/openlm-research/open_llama/issues/69);
I had no trouble on the cloud machine with preinstalled libraries.
* The demo script is demo.py.
* A validation script is provided: valid.py. Use it like `python valid.py path/to/TinyStoriesV2-GPT4-valid.txt [optional-model-id-or-path]`.
After training I decided that it's not necessary to break validation into chunks.
* This version also uses a very naive caching mechanism to shuffle stories for training: it keeps a cache of the N most recently loaded
chunks, so when the random shuffle asks for a story, it may serve it from the cache or load a new chunk.
The training dataset is small enough that in future versions I will get rid of it.
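A minimal loading sketch for the original FP16 checkpoint (the GGML files in this repo are used with `ctransformers` as shown in the inference section above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the original FP16 checkpoint from the Hub, not the GGML files in this repo.
tokenizer = AutoTokenizer.from_pretrained("Maykeye/TinyLLama-v0")
model = AutoModelForCausalLM.from_pretrained("Maykeye/TinyLLama-v0")
```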
|
Yaopu/bert-finetuned-squad | Yaopu | 2023-07-26T07:50:46Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-25T02:14:03Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
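A minimal inference sketch using the `transformers` pipeline (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Yaopu/bert-finetuned-squad")
print(qa(
    question="Where does Tim live?",
    context="My name is Tim and I live in Sweden.",
))
```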
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Envoid/Dendrite-22B-checkpoint2-ggml | Envoid | 2023-07-26T07:46:56Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-26T06:01:24Z | # Warning: This model is rather unpredictable.
Dendrite-22B is the 'rehabilitated' version of MindFlay-22B.
It is still a work in progress, but results have been rather interesting.
It's not as good at following role-play-type instructions anymore, but it now excels at creative tasks.
Its ability to respond to lengthy context has been almost completely restored, although when addressing certain subjects it becomes somewhat incoherent, so further training is planned to complete the 'rehabilitation' before the FP16 version is released.
At this stage, this repo contains the q4_0 and q8_0 GGML quantizations of the model.
|
xianbin/ppo-Huggy | xianbin | 2023-07-26T07:41:15Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-26T07:41:11Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: xianbin/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
shan2003/llama2-qlora-finetunined-LAW | shan2003 | 2023-07-26T07:37:53Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-26T07:37:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
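For reference, a sketch of the equivalent `BitsAndBytesConfig`, reconstructed from the values above:
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the quantization values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```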
### Framework versions
- PEFT 0.4.0
|
noelmathewisaac/inspirational-quotes-distilgpt2 | noelmathewisaac | 2023-07-26T07:33:47Z | 181 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ## About
`Distilgpt2` model finetuned on a dataset of inspirational/motivational quotes taken from the [Quotes-500K](https://github.com/ShivaliGoel/Quotes-500K) dataset. The model can generate inspirational quotes, many of which sound quite realistic.
## Code for Training
The code for fine-tuning the model can be found in this repo: https://github.com/Quotify-Bot/model-training.
## Training Details
The model was fine-tuned for **50 epochs** on Google Colab's GPU using about **100,000 quotes** from the original dataset.
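## Usage
A minimal generation sketch using the `transformers` pipeline (sampling parameters are illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="noelmathewisaac/inspirational-quotes-distilgpt2",
)
print(generator("Friendship is like", max_new_tokens=30, do_sample=True))
```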
## Some Interesting Quotes
**Prompt**: Friendship is like
> Friendship is like a flower. when it blooms, it beautifies this world with its fragrance.
**Prompt**: Life is like
> Life is like travelling through time so stop being afraid of taking a chance and start appreciating where you are in life.
**Prompt**: Motivation
> Motivation will drive you to action, which in turn attracts inspiration from beyond.
**Prompt**: In the end
> In the end, it is necessary to discover your inner beauty and truth. |
YojitShinde/ppo-LunarLander | YojitShinde | 2023-07-26T07:32:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T07:32:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.90 +/- 16.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption; check the repository's files for the actual .zip name.
checkpoint = load_from_hub(repo_id="YojitShinde/ppo-LunarLander",
                           filename="ppo-LunarLander.zip")
model = PPO.load(checkpoint)
```
|
Samalabama66/PyramidsTraining | Samalabama66 | 2023-07-26T07:30:39Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-26T07:30:37Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Samalabama66/PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
bobobert4/ppo-clean-LunarLander-v2 | bobobert4 | 2023-07-26T07:26:34Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T05:25:31Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -4.59 +/- 132.08
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 117
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 150000
'learning_rate': 0.0001
'num_envs': 4
'num_steps': 8
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 6
'update_epochs': 8
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'bobobert4/ppo-clean-LunarLander-v2'
'batch_size': 32
'minibatch_size': 5}
```
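Note that the derived values are consistent with the settings above: `batch_size = num_envs * num_steps = 4 * 8 = 32`, and `minibatch_size = batch_size // num_minibatches = 32 // 6 = 5` (integer division).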
|
sjrhuschlee/flan-t5-base-mnli | sjrhuschlee | 2023-07-26T07:25:53Z | 256 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text-classification",
"mnli",
"zero-shot-classification",
"custom_code",
"en",
"dataset:multi_nli",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2023-07-09T08:58:13Z | ---
language:
- en
license: mit
datasets:
- multi_nli
library_name: transformers
pipeline_tag: zero-shot-classification
tags:
- t5
- text-classification
- mnli
model-index:
- name: sjrhuschlee/flan-t5-base-mnli
results:
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: MultiNLI-matched
type: multi_nli
config: default
split: validation_matched
metrics:
- type: accuracy
value: 87.468
name: Accuracy
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: MultiNLI-mismatched
type: multi_nli
config: default
split: validation_mismatched
metrics:
- type: accuracy
value: 87.276
name: Accuracy
---
# flan-t5-base-mnli
flan-t5-base-mnli is the [flan-T5 base model](https://huggingface.co/google/flan-t5-base) fine-tuned on the [Multi-Genre Natural Language Inference (MNLI)](https://huggingface.co/datasets/multi_nli) corpus.
## Overview
- **License:** MIT
- **Language model:** flan-t5-base
- **Language:** English
- **Downstream-task:** Zero-shot Classification, Text Classification
- **Training data:** MNLI
- **Eval data:** MNLI (Matched and Mismatched)
- **Infrastructure**: 1x NVIDIA 3070
## Model Usage
Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so:
```python
from transformers import pipeline
classifier = pipeline(
'zero-shot-classification',
model='sjrhuschlee/flan-t5-base-mnli',
trust_remote_code=True,
)
```
You can then use this pipeline to classify sequences into any of the class names you specify. For example:
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
# {'sequence': 'one day I will see the world',
# 'labels': ['travel', 'cooking', 'dancing'],
# 'scores': [0.7944864630699158, 0.10624771565198898, 0.09926578402519226]}
```
## Metrics
```bash
# MNLI
{
"eval_accuracy": 0.8746816097809476,
"eval_accuracy_mm": 0.8727624084621644,
"eval_loss": 0.4271220564842224,
"eval_loss_mm": 0.4265698492527008,
"eval_samples": 9815,
"eval_samples_mm": 9832,
}
```
## Uses
#### Direct Use
This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification, and zero-shot sequence classification.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
sequence_to_classify = "The CEO had a strong handshake."
candidate_labels = ['male', 'female']
hypothesis_template = "This text speaks about a {} profession."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. |
xianbin/q-Taxi-v3 | xianbin | 2023-07-26T07:21:58Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T07:21:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="xianbin/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
xianbin/q-FrozenLake-v1-4x4-noSlippery | xianbin | 2023-07-26T07:09:20Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-26T07:09:18Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="xianbin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|