modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Word2vec/polyglot_words_embeddings_slk | Word2vec | 2023-05-28T19:05:22Z | 0 | 0 | null | [
"word2vec",
"sk",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:07:22Z | ---
tags:
- word2vec
language: sk
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
```
|
Word2vec/polyglot_words_embeddings_scn | Word2vec | 2023-05-28T19:04:38Z | 0 | 0 | null | [
"word2vec",
"scn",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:07:01Z | ---
tags:
- word2vec
language: scn
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
```
|
Word2vec/polyglot_words_embeddings_szl | Word2vec | 2023-05-28T19:03:02Z | 0 | 0 | null | [
"word2vec",
"szl",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:07:55Z | ---
tags:
- word2vec
language: szl
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
```
|
Word2vec/polyglot_words_embeddings_tam | Word2vec | 2023-05-28T19:01:26Z | 0 | 0 | null | [
"word2vec",
"ta",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:07:59Z | ---
tags:
- word2vec
language: ta
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_tgk | Word2vec | 2023-05-28T18:55:01Z | 0 | 0 | null | [
"word2vec",
"tg",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:08:09Z | ---
tags:
- word2vec
language: tg
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_tha | Word2vec | 2023-05-28T18:54:49Z | 0 | 0 | null | [
"word2vec",
"th",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:08:12Z | ---
tags:
- word2vec
language: th
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_tuk | Word2vec | 2023-05-28T18:54:33Z | 0 | 0 | null | [
"word2vec",
"tk",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:08:16Z | ---
tags:
- word2vec
language: tk
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_tgl | Word2vec | 2023-05-28T18:54:23Z | 0 | 0 | null | [
"word2vec",
"tl",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:08:20Z | ---
tags:
- word2vec
language: tl
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_uzb | Word2vec | 2023-05-28T18:53:21Z | 0 | 0 | null | [
"word2vec",
"uz",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:08:46Z | ---
tags:
- word2vec
language: uz
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_vec | Word2vec | 2023-05-28T18:53:11Z | 0 | 0 | null | [
"word2vec",
"vec",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:08:49Z | ---
tags:
- word2vec
language: vec
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_vol | Word2vec | 2023-05-28T18:52:13Z | 0 | 0 | null | [
"word2vec",
"vo",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:09:02Z | ---
tags:
- word2vec
language: vo
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_wln | Word2vec | 2023-05-28T18:50:39Z | 0 | 0 | null | [
"word2vec",
"wa",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:09:05Z | ---
tags:
- word2vec
language: wa
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
Word2vec/polyglot_words_embeddings_yid | Word2vec | 2023-05-28T18:49:06Z | 0 | 0 | null | [
"word2vec",
"yi",
"license:gpl-3.0",
"region:us"
] | null | 2023-05-19T22:09:12Z | ---
tags:
- word2vec
language: yi
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
    if i != words.index(word):
        b = embeddings[i]
        cos_sim = dot(a, b)/(norm(a)*norm(b))
        most_similar.append(cos_sim)
    else:
        most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets | tonirodriguez | 2023-05-28T18:36:49Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T16:45:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-toxicity-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-toxicity-tweets
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- Accuracy: 0.9604
## Model description
More information needed
## Intended uses & limitations
More information needed
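In the meantime, a minimal inference sketch with the `transformers` pipeline; the label names and their mapping to toxic / non-toxic depend on the undocumented fine-tuning dataset, so check `model.config.id2label`:
```python
from transformers import pipeline

# Minimal sketch; label semantics are an assumption (see model.config.id2label).
classifier = pipeline(
    "text-classification",
    model="tonirodriguez/roberta-base-bne-finetuned-toxicity-tweets",
)
print(classifier("No estoy de acuerdo contigo, pero respeto tu opinión."))
```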
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.18 | 1.0 | 229 | 0.1270 | 0.9559 |
| 0.0508 | 2.0 | 458 | 0.1345 | 0.9604 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
scroobiustrip/topic-model-v3 | scroobiustrip | 2023-05-28T18:31:52Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T16:39:10Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: topic-model-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-model-v3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3458
- F1: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6305 | 1.0 | 11964 | 0.4299 | 0.8788 |
| 0.3877 | 2.0 | 23928 | 0.3623 | 0.8953 |
| 0.3173 | 3.0 | 35892 | 0.3458 | 0.9015 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hedronstone/6b-gpteacher-role-play-chatml-10epoch | hedronstone | 2023-05-28T18:06:07Z | 0 | 0 | null | [
"pytorch",
"tensorboard",
"generated_from_trainer",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T17:07:46Z | ---
license: creativeml-openrail-m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6b-gpteacher-role-play-chatml-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6b-gpteacher-role-play-chatml-10epoch
This model is a fine-tuned version of [PygmalionAI/pygmalion-6b](https://huggingface.co/PygmalionAI/pygmalion-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9141
- Accuracy: 0.1596
- Entropy: 1.7788
## Model description
More information needed
## Intended uses & limitations
More information needed
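In the meantime, a minimal generation sketch, assuming the repository hosts the full fine-tuned weights and tokenizer and that prompts follow the ChatML format implied by the model name (both are assumptions; adjust to the actual files and prompt template):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "hedronstone/6b-gpteacher-role-play-chatml-10epoch"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

# ChatML-style prompt (assumed from the model name; verify against the training data format)
prompt = (
    "<|im_start|>system\nYou are a role-play assistant.<|im_end|>\n"
    "<|im_start|>user\nIntroduce yourself in character.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```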
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 99
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Entropy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 2.0504 | 1.0 | 238 | 2.0176 | 0.1563 | 1.9978 |
| 1.8932 | 2.0 | 476 | 1.9707 | 0.1584 | 1.9182 |
| 1.8611 | 3.0 | 714 | 1.9473 | 0.1602 | 1.8831 |
| 1.8206 | 4.0 | 952 | 1.9307 | 0.1604 | 1.8725 |
| 1.7936 | 5.0 | 1190 | 1.9238 | 0.1613 | 1.8354 |
| 1.7823 | 6.0 | 1428 | 1.9189 | 0.1618 | 1.8175 |
| 1.7742 | 7.0 | 1666 | 1.9150 | 0.1615 | 1.8082 |
| 1.762 | 8.0 | 1904 | 1.9141 | 0.1605 | 1.8145 |
| 1.7437 | 9.0 | 2142 | 1.9160 | 0.1604 | 1.7750 |
| 1.7358 | 10.0 | 2380 | 1.9141 | 0.1596 | 1.7788 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.3
|
damian0815/minigpt4-ff7r | damian0815 | 2023-05-28T17:40:34Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-05-28T15:53:12Z | ---
license: cc-by-nc-4.0
---
MiniGPT-4 checkpoint aligned with @panopstor's FF7R dataset (link in the EveryDream Discord). It produces captions that are more useful for training SD datasets than MiniGPT-4's default output.
The easiest way to use this is to launch a Docker instance for [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui), e.g. `TheBloke/runpod-pytorch-runclick`, and follow the instructions for MiniGPT-4 [here](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal). For now you'll need to manually edit `minigpt4_pipeline.py` ([this line](https://github.com/Wojtab/minigpt-4-pipeline/blob/16eda85c4bb15e2b1b05b20c55907a8ea2c06764/minigpt4_pipeline.py#L52)) to point to [the .pth file in this repo](minigpt4-align-ff7r.pth) instead of the default.
## Dataset
Adapted from @panopstor's FF7R dataset - [zip here](cc_sbu_align_ff7r.zip)
## Sample output:


 |
Dr-BERT/CAS-Biomedical-POS-Tagging | Dr-BERT | 2023-05-28T17:38:50Z | 104 | 5 | transformers | [
"transformers",
"pytorch",
"camembert",
"token-classification",
"medical",
"fr",
"dataset:bigbio/cas",
"arxiv:2304.00958",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-05T06:19:37Z | ---
license: apache-2.0
datasets:
- bigbio/cas
language:
- fr
metrics:
- f1
library_name: transformers
tags:
- medical
widget:
- text: Patiente atteinte d’une pathologie chronique
- text: Vous êtes amené à prendre en charge un homme de 54 ans qui souffre d’une spondylarthrite ankylosante sévère.
---
<p align="center">
<img src="https://github.com/qanastek/DrBERT/blob/main/assets/logo.png?raw=true" alt="drawing" width="250"/>
</p>
- Corpora: [bigbio/cas](https://huggingface.co/datasets/bigbio/cas)
- Embeddings & Sequence Labelling: [DrBERT-7GB](https://arxiv.org/abs/2304.00958)
- Number of Epochs: 200
# DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains
In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general-domain data, specialized ones have emerged to treat specific domains more effectively.
In this paper, we propose an original study of PLMs in the medical domain applied to the French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks.
Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under a free license on which these models are trained.
# CAS: French Corpus with Clinical Cases
| | Train | Dev | Test |
|:---------:|:-----:|:-----:|:-----:|
| Documents | 5,306 | 1,137 | 1,137 |
The ESSAIS (Dalloux et al., 2021) and CAS (Grabar et al., 2018) corpora respectively contain 13,848 and 7,580 clinical cases in French. Some clinical cases are associated with discussions. A subset of the whole set of cases is enriched with morpho-syntactic (part-of-speech (POS) tagging, lemmatization) and semantic (UMLS concepts, negation, uncertainty) annotations. In our case, we focus only on the POS tagging task.
# Model Metric
```plain
precision recall f1-score support
ABR 0.8683 0.8480 0.8580 171
ADJ 0.9634 0.9751 0.9692 4018
ADV 0.9935 0.9849 0.9892 926
DET:ART 0.9982 0.9997 0.9989 3308
DET:POS 1.0000 1.0000 1.0000 133
INT 1.0000 0.7000 0.8235 10
KON 0.9883 0.9976 0.9929 845
NAM 0.9144 0.9353 0.9247 834
NOM 0.9827 0.9803 0.9815 7980
NUM 0.9825 0.9845 0.9835 1422
PRO:DEM 0.9924 1.0000 0.9962 131
PRO:IND 0.9630 1.0000 0.9811 78
PRO:PER 0.9948 0.9931 0.9939 579
PRO:REL 1.0000 0.9908 0.9954 109
PRP 0.9989 0.9982 0.9985 3785
PRP:det 1.0000 0.9985 0.9993 681
PUN 0.9996 0.9958 0.9977 2376
PUN:cit 0.9756 0.9524 0.9639 84
SENT 1.0000 0.9974 0.9987 1174
SYM 0.9495 1.0000 0.9741 94
VER:cond 1.0000 1.0000 1.0000 11
VER:futu 1.0000 0.9444 0.9714 18
VER:impf 1.0000 0.9963 0.9981 804
VER:infi 1.0000 0.9585 0.9788 193
VER:pper 0.9742 0.9564 0.9652 1261
VER:ppre 0.9617 0.9901 0.9757 203
VER:pres 0.9833 0.9904 0.9868 830
VER:simp 0.9123 0.7761 0.8387 67
VER:subi 1.0000 0.7000 0.8235 10
VER:subp 1.0000 0.8333 0.9091 18
accuracy 0.9842 32153
macro avg 0.9799 0.9492 0.9623 32153
weighted avg 0.9843 0.9842 0.9842 32153
```
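To tag new text with this model, a minimal sketch using the `transformers` token-classification pipeline (the aggregation setting below is a suggestion, not part of the original release):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Dr-BERT/CAS-Biomedical-POS-Tagging",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)
for token in tagger("Le patient présente une spondylarthrite ankylosante sévère."):
    print(token["word"], token["entity_group"])  # e.g. NOM, ADJ, VER:pres, ...
```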
# Citation BibTeX
```bibtex
@inproceedings{labrak2023drbert,
title = {{DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains}},
author = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Rouvier, Mickael and Morin, Emmanuel and Daille, Béatrice and Gourraud, Pierre-Antoine},
booktitle = {Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (ACL'23), Long Paper},
month = july,
year = 2023,
address = {Toronto, Canada},
publisher = {Association for Computational Linguistics}
}
```
|
Dr-BERT/DrBERT-4GB-CP-CamemBERT | Dr-BERT | 2023-05-28T17:38:22Z | 0 | 0 | transformers | [
"transformers",
"medical",
"chemistry",
"biomedical",
"life science",
"fr",
"dataset:Dr-BERT/NACHOS",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-01-09T21:13:37Z | ---
license: apache-2.0
datasets:
- Dr-BERT/NACHOS
language:
- fr
library_name: transformers
tags:
- medical
- chemistry
- biomedical
- life science
---
<p align="center">
<img src="https://github.com/qanastek/DrBERT/blob/main/assets/logo.png?raw=true" alt="drawing" width="250"/>
</p>
# DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains
In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general-domain data, specialized ones have emerged to treat specific domains more effectively.
In this paper, we propose an original study of PLMs in the medical domain applied to the French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks.
Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under a free license on which these models are trained.
# 1. DrBERT models
**DrBERT** is a French RoBERTa trained on an open-source corpus of crawled French medical text called NACHOS. Models with different amounts of data from different public and private sources were trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. Only the weights of the models trained exclusively on open-source data are publicly released, to prevent any personal information leak and to comply with European GDPR law:
| Model name | Corpus | Number of layers | Attention Heads | Embedding Dimension | Sequence Length | Model URL |
| :------: | :---: | :---: | :---: | :---: | :---: | :---: |
| `DrBERT-7-GB-cased-Large` | NACHOS 7 GB | 24 | 16 | 1024 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB-Large) |
| `DrBERT-7-GB-cased` | NACHOS 7 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB) |
| `DrBERT-4-GB-cased` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB) |
| `DrBERT-4-GB-cased-CP-CamemBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-CamemBERT) |
| `DrBERT-4-GB-cased-CP-PubMedBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-PubMedBERT) |
# 2. Using DrBERT
You can use DrBERT with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer :
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Dr-BERT/DrBERT-7GB")
model = AutoModel.from_pretrained("Dr-BERT/DrBERT-7GB")
```
Perform the mask filling task :
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="Dr-BERT/DrBERT-7GB", tokenizer="Dr-BERT/DrBERT-7GB")
results = fill_mask("La patiente est atteinte d'une <mask>")
```
# 3. Pre-training DrBERT tokenizer and model from scratch by using HuggingFace Transformers Library
## 3.1 Install dependencies
```bash
accelerate @ git+https://github.com/huggingface/accelerate@66edfe103a0de9607f9b9fdcf6a8e2132486d99b
datasets==2.6.1
sentencepiece==0.1.97
protobuf==3.20.1
evaluate==0.2.2
tensorboard==2.11.0
torch >= 1.3
```
## 3.2 Download NACHOS Dataset text file
Download the full NACHOS dataset from [Zenodo]() and place it in the `from_scratch` or `continued_pretraining` directory.
## 3.3 Build your own tokenizer from scratch based on NACHOS
Note: this step is required only for from-scratch pre-training. If you want to do continued pre-training, you just have to download the model and tokenizer corresponding to the model you want to continue training from. In this case, simply go to the HuggingFace Hub, select a model (for example [RoBERTa-base](https://huggingface.co/roberta-base)), then download the entire model / tokenizer repository by clicking on the `Use In Transformers` button and getting the Git link (`git clone https://huggingface.co/roberta-base`).
Build the tokenizer from scratch on your data of the file `./corpus.txt` by using `./build_tokenizer.sh`.
## 3.4 Preprocessing and tokenization of the dataset
First, replace the `tokenizer_path` field of the shell script with the path of the tokenizer directory you downloaded via HuggingFace Git, or of the one you have built.
Run `./preprocessing_dataset.sh` to generate the tokenized dataset using the given tokenizer.
## 3.5 Model training
First, change the number of GPUs (`--ntasks=128`) in the shell script called `run_training.sh` to match your computational capabilities. In our case, we used 128 V100 32 GB GPUs across 32 nodes of 4 GPUs (`--ntasks-per-node=4` and `--gres=gpu:4`) for 20 hours (`--time=20:00:00`).
If you are using Jean Zay, you also need to change the `-A` flag to match one of your `@gpu` profiles capable of running the job. You also need to move **ALL** of your datasets, tokenizer, scripts and outputs to the `$SCRATCH` disk space to spare other users from I/O issues.
### 3.5.1 Pre-training from scratch
Once the SLURM parameters are updated, you have to change the name of the model architecture in the flag `--model_type="camembert"` and update `--config_overrides=` according to the specifications of the architecture you are trying to train. In our case, RoBERTa had a `514` sequence length and a vocabulary of `32005` tokens (32K from the tokenizer and 5 from the model architecture), and the identifiers of the beginning-of-sentence (BOS) and end-of-sentence (EOS) tokens are respectively `5` and `6`.
Then, go to `./from_scratch/` directory.
Run `sbatch ./run_training.sh` to send the training job in the SLURM queue.
### 3.5.2 Continued pre-training
Once the SLURM parameters are updated, you have to set the path of the model / tokenizer you want to start from (`--model_name_or_path=` / `--tokenizer_name=`) to the path of the model downloaded from HuggingFace's Git in section 3.3.
Then, go to `./continued_pretraining/` directory.
Run `sbatch ./run_training.sh` to send the training job in the SLURM queue.
# 4. Fine-tuning on a downstream task
You just need to change the name of the model to `Dr-BERT/DrBERT-7GB` in any of the examples given by HuggingFace's team [here](https://huggingface.co/docs/transformers/tasks/sequence_classification).
# Citation BibTeX
```bibtex
@inproceedings{labrak2023drbert,
title = {{DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains}},
author = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Rouvier, Mickael and Morin, Emmanuel and Daille, Béatrice and Gourraud, Pierre-Antoine},
booktitle = {Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (ACL'23), Long Paper},
month = july,
year = 2023,
address = {Toronto, Canada},
publisher = {Association for Computational Linguistics}
}
```
|
Dr-BERT/DrBERT-7GB | Dr-BERT | 2023-05-28T17:37:44Z | 1,467 | 12 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"medical",
"chemistry",
"biomedical",
"life science",
"fr",
"dataset:Dr-BERT/NACHOS",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-25T22:05:07Z | ---
license: apache-2.0
datasets:
- Dr-BERT/NACHOS
language:
- fr
library_name: transformers
tags:
- medical
- chemistry
- biomedical
- life science
widget:
- text: "Le patient est atteint d'une <mask>."
---
<p align="center">
<img src="https://github.com/qanastek/DrBERT/blob/main/assets/logo.png?raw=true" alt="drawing" width="250"/>
</p>
# DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains
In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general-domain data, specialized ones have emerged to treat specific domains more effectively.
In this paper, we propose an original study of PLMs in the medical domain applied to the French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks.
Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under a free license on which these models are trained.
# 1. DrBERT models
**DrBERT** is a French RoBERTa trained on an open-source corpus of crawled French medical text called NACHOS. Models with different amounts of data from different public and private sources were trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. Only the weights of the models trained exclusively on open-source data are publicly released, to prevent any personal information leak and to comply with European GDPR law:
| Model name | Corpus | Number of layers | Attention Heads | Embedding Dimension | Sequence Length | Model URL |
| :------: | :---: | :---: | :---: | :---: | :---: | :---: |
| `DrBERT-7-GB-cased-Large` | NACHOS 7 GB | 24 | 16 | 1024 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB-Large) |
| `DrBERT-7-GB-cased` | NACHOS 7 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB) |
| `DrBERT-4-GB-cased` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB) |
| `DrBERT-4-GB-cased-CP-CamemBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-CamemBERT) |
| `DrBERT-4-GB-cased-CP-PubMedBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-PubMedBERT) |
# 2. Using DrBERT
You can use DrBERT with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer :
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Dr-BERT/DrBERT-7GB")
model = AutoModel.from_pretrained("Dr-BERT/DrBERT-7GB")
```
Perform the mask filling task :
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="Dr-BERT/DrBERT-7GB", tokenizer="Dr-BERT/DrBERT-7GB")
results = fill_mask("La patiente est atteinte d'une <mask>")
```
# 3. Pre-training DrBERT tokenizer and model from scratch by using HuggingFace Transformers Library
## 3.1 Install dependencies
```bash
accelerate @ git+https://github.com/huggingface/accelerate@66edfe103a0de9607f9b9fdcf6a8e2132486d99b
datasets==2.6.1
sentencepiece==0.1.97
protobuf==3.20.1
evaluate==0.2.2
tensorboard==2.11.0
torch >= 1.3
```
## 3.2 Download NACHOS Dataset text file
Download the full NACHOS dataset from [Zenodo]() and place it in the `from_scratch` or `continued_pretraining` directory.
## 3.3 Build your own tokenizer from scratch based on NACHOS
Note: this step is required only for from-scratch pre-training. If you want to do continued pre-training, you just have to download the model and tokenizer corresponding to the model you want to continue training from. In this case, simply go to the HuggingFace Hub, select a model (for example [RoBERTa-base](https://huggingface.co/roberta-base)), then download the entire model / tokenizer repository by clicking on the `Use In Transformers` button and getting the Git link (`git clone https://huggingface.co/roberta-base`).
Build the tokenizer from scratch on your data of the file `./corpus.txt` by using `./build_tokenizer.sh`.
## 3.4 Preprocessing and tokenization of the dataset
First, replace the `tokenizer_path` field of the shell script with the path of the tokenizer directory you downloaded via HuggingFace Git, or of the one you have built.
Run `./preprocessing_dataset.sh` to generate the tokenized dataset using the given tokenizer.
## 3.5 Model training
First, change the number of GPUs (`--ntasks=128`) in the shell script called `run_training.sh` to match your computational capabilities. In our case, we used 128 V100 32 GB GPUs across 32 nodes of 4 GPUs (`--ntasks-per-node=4` and `--gres=gpu:4`) for 20 hours (`--time=20:00:00`).
If you are using Jean Zay, you also need to change the `-A` flag to match one of your `@gpu` profiles capable of running the job. You also need to move **ALL** of your datasets, tokenizer, scripts and outputs to the `$SCRATCH` disk space to spare other users from I/O issues.
### 3.5.1 Pre-training from scratch
Once the SLURM parameters are updated, you have to change the name of the model architecture in the flag `--model_type="camembert"` and update `--config_overrides=` according to the specifications of the architecture you are trying to train. In our case, RoBERTa had a `514` sequence length and a vocabulary of `32005` tokens (32K from the tokenizer and 5 from the model architecture), and the identifiers of the beginning-of-sentence (BOS) and end-of-sentence (EOS) tokens are respectively `5` and `6`.
Then, go to `./from_scratch/` directory.
Run `sbatch ./run_training.sh` to send the training job in the SLURM queue.
### 3.5.2 Continued pre-training
Once the SLURM parameters are updated, you have to set the path of the model / tokenizer you want to start from (`--model_name_or_path=` / `--tokenizer_name=`) to the path of the model downloaded from HuggingFace's Git in section 3.3.
Then, go to `./continued_pretraining/` directory.
Run `sbatch ./run_training.sh` to send the training job in the SLURM queue.
# 4. Fine-tuning on a downstream task
You just need to change the name of the model to `Dr-BERT/DrBERT-7GB` in any of the examples given by HuggingFace's team [here](https://huggingface.co/docs/transformers/tasks/sequence_classification).
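For instance, a minimal sequence-classification sketch with the `Trainer` API (the tiny in-memory dataset, label count and hyperparameters below are placeholders for illustration, not part of the original release):
```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "Dr-BERT/DrBERT-7GB"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny placeholder dataset; replace with your own French biomedical corpus.
train = Dataset.from_dict({
    "text": ["Patient sans antécédent notable.", "Suspicion d'embolie pulmonaire."],
    "label": [0, 1],
})
train = train.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="drbert-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```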
# Citation BibTeX
```bibtex
@inproceedings{labrak2023drbert,
title = {{DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains}},
author = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Rouvier, Mickael and Morin, Emmanuel and Daille, Béatrice and Gourraud, Pierre-Antoine},
booktitle = {Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (ACL'23), Long Paper},
month = july,
year = 2023,
address = {Toronto, Canada},
publisher = {Association for Computational Linguistics}
}
```
|
ChristianMDahl/segFormer_ver1_horizontal | ChristianMDahl | 2023-05-28T17:31:58Z | 31 | 0 | transformers | [
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-05-28T07:00:34Z | ---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: ChristianMDahl/segFormer_ver1_horizontal
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ChristianMDahl/segFormer_ver1_horizontal
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1767
- Validation Loss: 0.1918
- Epoch: 19
## Model description
Model for **horizontal** lines
## Intended uses & limitations
More information needed
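In the meantime, a minimal TensorFlow inference sketch (assuming the repository includes an image processor config; otherwise fall back to the `nvidia/mit-b0` processor). The input image path and the interpretation of class ids are placeholders:
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFSegformerForSemanticSegmentation

repo = "ChristianMDahl/segFormer_ver1_horizontal"
processor = AutoImageProcessor.from_pretrained(repo)  # or "nvidia/mit-b0" if not included
model = TFSegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("scanned_page.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits                  # (batch, num_labels, H/4, W/4)
mask = tf.argmax(logits, axis=1)[0].numpy()      # per-pixel class ids (horizontal-line mask)
```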
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3302 | 0.2673 | 0 |
| 0.2515 | 0.2329 | 1 |
| 0.2335 | 0.2197 | 2 |
| 0.2226 | 0.2125 | 3 |
| 0.2153 | 0.2083 | 4 |
| 0.2105 | 0.2039 | 5 |
| 0.2061 | 0.2023 | 6 |
| 0.2025 | 0.2013 | 7 |
| 0.1995 | 0.2015 | 8 |
| 0.1960 | 0.1976 | 9 |
| 0.1938 | 0.1966 | 10 |
| 0.1909 | 0.1973 | 11 |
| 0.1882 | 0.1936 | 12 |
| 0.1865 | 0.1951 | 13 |
| 0.1845 | 0.1942 | 14 |
| 0.1826 | 0.1953 | 15 |
| 0.1810 | 0.1934 | 16 |
| 0.1794 | 0.1928 | 17 |
| 0.1782 | 0.1919 | 18 |
| 0.1767 | 0.1918 | 19 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.10.1
- Datasets 2.12.0
- Tokenizers 0.13.0.dev0
|
JoseVerutti/uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez | JoseVerutti | 2023-05-28T17:26:42Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T17:23:23Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.821078431372549
- name: F1
type: f1
value: 0.8717047451669596
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5776
- Accuracy: 0.8211
- F1: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
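In the meantime, a minimal paraphrase-detection sketch with sentence pairs (MRPC-style); the example sentences are illustrative and the exact label names should be read from `model.config.id2label`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "JoseVerutti/uao-distilroberta-base-mrpc-glue-verutti-benjumea-lopez"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The company posted record profits.",
                   "Profits at the firm hit an all-time high.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```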
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5197 | 1.09 | 500 | 0.5776 | 0.8211 | 0.8717 |
| 0.35 | 2.18 | 1000 | 0.5931 | 0.8309 | 0.8752 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dimi1357/poca-SoccerTwos | dimi1357 | 2023-05-28T17:20:46Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-05-28T17:20:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dimi1357/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
thackerhelik/Reinforce-Cartpole-v1 | thackerhelik | 2023-05-28T16:56:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T16:56:16Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nolanaatama/embeddings | nolanaatama | 2023-05-28T16:55:44Z | 0 | 188 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-16T11:01:33Z | ---
license: creativeml-openrail-m
---
|
smartik/mbart-large-50-finetuned-ua-gec-2.1 | smartik | 2023-05-28T16:50:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-28T15:12:13Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbart-large-50-finetuned-ua-gec-2.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-ua-gec-2.1
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5376
- Rouge1: 18.2963
- Rouge2: 10.2365
- Rougel: 18.2593
- Rougelsum: 18.2759
- Gen Len: 28.6107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0765 | 1.0 | 1010 | 0.4070 | 18.2963 | 10.2365 | 18.2593 | 18.2759 | 28.522 |
| 0.046 | 2.0 | 2020 | 0.4710 | 18.2963 | 10.2365 | 18.2593 | 18.2759 | 28.578 |
| 0.0291 | 3.0 | 3030 | 0.4885 | 18.2833 | 10.2052 | 18.2454 | 18.263 | 28.5793 |
| 0.0188 | 4.0 | 4040 | 0.5145 | 18.2963 | 10.2365 | 18.2593 | 18.2759 | 28.6127 |
| 0.0117 | 5.0 | 5050 | 0.5376 | 18.2963 | 10.2365 | 18.2593 | 18.2759 | 28.6107 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dimi1357/rl_course_vizdoom_health_gathering_supreme | dimi1357 | 2023-05-28T16:47:22Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T16:36:32Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.18 +/- 5.56
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r dimi1357/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Word2vec/fauconnier_frWac_postag_no_phrase_1000_skip_cut100 | Word2vec | 2023-05-28T16:44:35Z | 0 | 0 | null | [
"word2vec",
"fr",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T21:01:24Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
---
### Description
A French word2vec model trained on [FrWac](https://wacky.sslmit.unibo.it/doku.php?id=corpora) by Fauconnier with the following hyperparameters:
lem: yes, pos: yes, phrase: no, train: skip, dim: 1000, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(
    hf_hub_download(repo_id="Word2vec/fauconnier_frWac_postag_no_phrase_1000_skip_cut100",
                    filename="frWac_postag_no_phrase_1000_skip_cut100.bin"),  # filename assumed; check the repo's file list
    binary=True, unicode_errors="ignore")
model.most_similar("intéressant_a")  # POS-tagged vocabulary ("_a" marks adjectives)
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Swyhn/ppo-LunarLander-v2 | Swyhn | 2023-05-28T16:44:06Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T16:43:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.64 +/- 13.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
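A minimal loading sketch; the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption, so check the repository's file list (and use `import gym` instead of `gymnasium` with older stable-baselines3 releases):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the standard huggingface_sb3 convention.
checkpoint = load_from_hub(repo_id="Swyhn/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(500):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```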
|
Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_500_skip_cut200 | Word2vec | 2023-05-28T16:42:43Z | 0 | 0 | null | [
"word2vec",
"fr",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T20:30:13Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
---
### Description
A French word2vec model trained on [FrWac](https://wacky.sslmit.unibo.it/doku.php?id=corpora) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: skip, dim: 500, cutoff: 200
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_500_skip_cut200", filename="frWac_non_lem_no_postag_no_phrase_500_skip_cut200.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_500_skip_cut100 | Word2vec | 2023-05-28T16:40:27Z | 0 | 0 | null | [
"word2vec",
"fr",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T20:28:22Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
---
### Description
A French word2vec model trained on [FrWac](https://wacky.sslmit.unibo.it/doku.php?id=corpora) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: skip, dim: 500, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_500_skip_cut100", filename="frWac_non_lem_no_postag_no_phrase_500_skip_cut100.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Word2vec/fauconnier_frWac_no_postag_no_phrase_500_cbow_cut100 | Word2vec | 2023-05-28T16:39:32Z | 0 | 0 | null | [
"word2vec",
"fr",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T20:23:23Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
---
### Description
A French word2vec model trained on [FrWac](https://wacky.sslmit.unibo.it/doku.php?id=corpora) by Fauconnier with the following hyperparameters:
lem: yes, pos: no, phrase: no, train: cbow, dim: 500, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWac_no_postag_no_phrase_500_cbow_cut100", filename="frWac_no_postag_no_phrase_500_cbow_cut100.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
asenella/mmnist_MMVAEPlusconfig2_seed_3_ratio_05_c | asenella | 2023-05-28T16:38:07Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-25T09:40:14Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/mmnist_MMVAEPlusconfig2_seed_3_ratio_05_c")
```
|
Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_200_cbow_cut0 | Word2vec | 2023-05-28T16:38:06Z | 0 | 0 | null | [
"word2vec",
"fr",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T20:19:42Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
---
### Description
A French word2vec model trained on [FrWac](https://wacky.sslmit.unibo.it/doku.php?id=corpora) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: cbow, dim: 200, cutoff: 0
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_200_cbow_cut0", filename="frWac_non_lem_no_postag_no_phrase_200_cbow_cut0.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Peraboom/SBertV1 | Peraboom | 2023-05-28T16:36:37Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T16:25:24Z | ---
license: other
---
This is a distilled model of BERT base uncased. It has 6 layers, 6 attention heads, and a hidden size of 384, for a total of 29.8M parameters. Performance-wise, it has the potential to reach about 87% of the performance of BERT base, which has 12 layers, 12 heads, and 110M parameters.
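A minimal classification sketch (assuming the standard `transformers` pipeline; the repository id comes from this card and the example sentence is illustrative):
```python
from transformers import pipeline

# Text-classification pipeline for this checkpoint; label names depend on the model's config
classifier = pipeline("text-classification", model="Peraboom/SBertV1")
print(classifier("This movie was surprisingly good."))
```
|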
thiendio/ppo-lunearlander-v2-rl-course-unit1 | thiendio | 2023-05-28T16:31:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T16:31:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.51 +/- 14.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; use the .zip file listed in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("thiendio/ppo-lunearlander-v2-rl-course-unit1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_200_cbow_cut100 | Word2vec | 2023-05-28T16:30:53Z | 0 | 0 | null | [
"word2vec",
"fr",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T19:14:01Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
---
### Description
A French word2vec model trained on [FrWac](https://wacky.sslmit.unibo.it/doku.php?id=corpora) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: cbow, dim: 200, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWac_non_lem_no_postag_no_phrase_200_cbow_cut100", filename="frWac_non_lem_no_postag_no_phrase_200_cbow_cut100.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
ImPavloh/Streamers-AI-Voices | ImPavloh | 2023-05-28T16:29:58Z | 0 | 0 | null | [
"music",
"audio-to-audio",
"license:other",
"region:us"
] | audio-to-audio | 2023-05-28T16:16:20Z | ---
license: other
pipeline_tag: audio-to-audio
tags:
- music
---
# Streamer models used in [VoiceIt](https://huggingface.co/spaces/ImPavloh/voiceit)
These streamer models are used in [VoiceIt](https://voiceit.pavloh.com), a voice-to-voice conversion platform developed by [Pavloh](https://twitter.com/ImPavloh).
# Technology used
This project was created using SoftVC VITS Singing Voice Conversion (version 4.0), a cutting-edge technology for singing voice conversion.
All the models in this repository were created by me; better models, created by other people, can be found in [this repository](https://huggingface.co/QuickWick/Music-AI-Voices).
If you need more information or have any questions, do not hesitate to contact me: [Pavloh](https://twitter.com/ImPavloh).
## ⚠️ Important | The generated voices must not be subject to copyright. |
Word2vec/fauconnier_frWiki_no_phrase_no_postag_700_cbow_cut100 | Word2vec | 2023-05-28T16:25:37Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T17:30:36Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
datasets:
- wikipedia
---
### Description
A French word2vec model trained on [frwiki](https://dumps.wikimedia.org/frwiki/) by Fauconnier with the following hyperparameters:
lem: yes, pos: no, phrase: no, train: cbow, dim: 700, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWiki_no_phrase_no_postag_700_cbow_cut100", filename="frWiki_no_phrase_no_postag_700_cbow_cut100.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Word2vec/fauconnier_frWiki_no_lem_no_postag_no_phrase_1000_cbow_cut200 | Word2vec | 2023-05-28T16:25:09Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T17:34:52Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
datasets:
- wikipedia
---
### Description
A French word2vec model trained on [frwiki](https://dumps.wikimedia.org/frwiki/) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: cbow, dim: 1000, cutoff: 200
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWiki_no_lem_no_postag_no_phrase_1000_cbow_cut200", filename="frWiki_no_lem_no_postag_no_phrase_1000_cbow_cut200.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Word2vec/fauconnier_frWiki_no_lem_no_postag_no_phrase_1000_cbow_cut100 | Word2vec | 2023-05-28T16:24:11Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T17:41:24Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
datasets:
- wikipedia
---
### Description
A French word2vec model trained on [frwiki](https://dumps.wikimedia.org/frwiki/) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: cbow, dim: 1000, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWiki_no_lem_no_postag_no_phrase_1000_cbow_cut100", filename="frWiki_no_lem_no_postag_no_phrase_1000_cbow_cut100.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Word2vec/fauconnier_frWiki_no_lem_no_postag_no_phrase_1000_skip_cut100 | Word2vec | 2023-05-28T16:22:42Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T17:34:29Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
datasets:
- wikipedia
---
### Description
A French word2vec model trained on [frwiki](https://dumps.wikimedia.org/frwiki/) by Fauconnier with the following hyperparameters:
lem: no, pos: no, phrase: no, train: skip, dim: 1000, cutoff: 100
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWiki_no_lem_no_postag_no_phrase_1000_skip_cut100", filename="frWiki_no_lem_no_postag_no_phrase_1000_skip_cut100.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
Word2vec/fauconnier_frWiki_no_phrase_no_postag_500_cbow_cut10 | Word2vec | 2023-05-28T16:21:24Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:cc-by-3.0",
"region:us"
] | null | 2023-05-16T17:32:35Z | ---
tags:
- word2vec
language: fr
license: cc-by-3.0
datasets:
- wikipedia
---
### Description
A French word2vec model trained on [frwiki](https://dumps.wikimedia.org/frwiki/) by Fauconnier with the following hyperparameters:
lem: yes, pos: no, phrase: no, train: cbow, dim: 500, cutoff: 10
### How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# The filename below is an assumption; use the .bin file listed in this repository
path = hf_hub_download(repo_id="Word2vec/fauconnier_frWiki_no_phrase_no_postag_500_cbow_cut10", filename="frWiki_no_phrase_no_postag_500_cbow_cut10.bin")
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")
model.most_similar("intéressant")
```
### Citation
```
@misc{fauconnier_2015,
author = {Fauconnier, Jean-Philippe},
title = {French Word Embeddings},
url = {http://fauconnier.github.io},
year = {2015}}
``` |
xzuyn/Pythia-Deduped-410M-GGML | xzuyn | 2023-05-28T16:15:09Z | 0 | 0 | null | [
"gpt_neox",
"region:us"
] | null | 2023-05-28T14:43:59Z | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/EleutherAI/pythia-410m-deduped |
hermanshid/stable-diffusion-v1-5-fine-tuned-indonesia | hermanshid | 2023-05-28T16:12:31Z | 32 | 1 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-28T13:55:16Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
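A minimal sketch of that training objective (illustrative only, not the actual training code; it assumes the 🧨 Diffusers component interfaces):
```python
import torch
import torch.nn.functional as F

def latent_diffusion_loss(vae, text_encoder, unet, noise_scheduler, images, input_ids):
    # Encode images into the latent space (0.18215 is the usual SD latent scaling factor)
    latents = vae.encode(images).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=latents.device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    # Non-pooled CLIP text embeddings condition the UNet via cross-attention
    encoder_hidden_states = text_encoder(input_ids)[0]
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=encoder_hidden_states).sample
    # Reconstruction objective between the added noise and the UNet prediction
    return F.mse_loss(noise_pred, noise)
```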
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
fedorn/ppo-Huggy | fedorn | 2023-05-28T16:11:05Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-28T16:10:57Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: fedorn/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Yhyu13/baize-v2-13b-gptq-4bit | Yhyu13 | 2023-05-28T15:54:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-27T15:51:08Z | ---
license: apache-2.0
---
GPTQ 4-bit, no act-order version for compatibility, which works in textgen-webui
Generated by using scripts from https://gitee.com/yhyu13/llama_-tools
Original weight : https://huggingface.co/project-baize/baize-v2-7b
Baize is a LoRA training framework that allows fine-tuning LLaMA models on commodity GPUs.
Check out my 7B Baize GPTQ 4-bit here: https://huggingface.co/Yhyu13/baize-v2-7b-gptq-4bit |
YakovElm/Hyperledger15Classic_512 | YakovElm | 2023-05-28T15:51:00Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T15:50:25Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger15Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2806
- Train Accuracy: 0.9035
- Validation Loss: 0.3198
- Validation Accuracy: 0.8807
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3217 | 0.8952 | 0.3253 | 0.8807 | 0 |
| 0.2967 | 0.9035 | 0.3233 | 0.8807 | 1 |
| 0.2806 | 0.9035 | 0.3198 | 0.8807 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yhyu13/chronos-13b-gptq-4bit | Yhyu13 | 2023-05-28T15:49:52Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-28T15:40:28Z | ---
license: apache-2.0
---
GPTQ 4-bit, no act-order, made compatible with ooba's textgen-webui
Generated by using scripts from https://gitee.com/yhyu13/llama_-tools
Original merged hf weights : https://huggingface.co/elinas/chronos-13b
Chronos is a LLaMA-based model fine-tuned to generate vivid conversations, role playing, and storytelling.
Here is a sample conversation generated by Chronos, which vividly describes two people from the pro-gun and anti-gun factions.


 |
ITZDOZZEN/sd-class-butterflies-32 | ITZDOZZEN | 2023-05-28T15:06:34Z | 5 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-05-28T15:05:40Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
My first model ever, the beginning of my AI journey.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ITZDOZZEN/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
thanut/skin | thanut | 2023-05-28T14:47:09Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"license:afl-3.0",
"region:us"
] | null | 2023-05-28T14:39:36Z | ---
license: afl-3.0
metrics:
- accuracy
library_name: adapter-transformers
--- |
BayesBayes/codeparrot-ds | BayesBayes | 2023-05-28T14:33:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-26T22:22:53Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
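A minimal generation sketch (the repository id is taken from this card; the prompt is illustrative):
```python
from transformers import pipeline

# Code-style prompt for this GPT-2 based checkpoint
generator = pipeline("text-generation", model="BayesBayes/codeparrot-ds")
print(generator("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```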
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
EinfachOlder/EinfachAlex | EinfachOlder | 2023-05-28T14:26:36Z | 0 | 0 | null | [
"de",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:OpenAssistant/oasst1",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"license:apache-2.0",
"region:us"
] | null | 2023-05-28T14:17:54Z | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
- OpenAssistant/oasst1
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- de
- en
---
# EinfachAlex Model Card
This model card serves as a template for new models. It was generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
This model card describes the EinfachAlex model. It is an NLP model developed to solve XYZ tasks.
- **Developed by:** EinfachAlex team
- **Model type:** XYZ model
- **Language(s) (NLP):** German, English
- **License:** Unlicense
- **Finetuned from model [optional]:** Based on the XYZ model
### Model Sources [optional]
- **Repository:** [GitHub repository](https://github.com/einfachalex/model)
- **Paper [optional]:** [XYZ paper](https://example.com/xyz_paper)
- **Demo [optional]:** [XYZ demo](https://example.com/xyz_demo)
## Uses
### Direct Use
The EinfachAlex model can be used directly for XYZ tasks.
### Downstream Use [optional]
The model can be integrated into larger applications and systems to provide XYZ functionality.
### Out-of-Scope Use
The model may not be suitable for ABC tasks due to XYZ limitations.
## Bias, Risks, and Limitations
Certain biases, risks, and limitations have been identified in the EinfachAlex model.
- XYZ bias: [More information needed]
- XYZ risks: [More information needed]
- XYZ limitations: [More information needed]
### Recommendations
Users should take the model's biases, risks, and limitations into account and take appropriate measures to address them. More information is needed to provide detailed recommendations.
## How to Get Started with the Model
Use the following code to get started with the EinfachAlex model:
```python
import einfachalex
model = einfachalex.EinfachAlexModel()
# Perform further steps... |
AeroAlly/ppo-Lunalander-v2 | AeroAlly | 2023-05-28T14:14:21Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T14:14:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.78 +/- 19.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; use the .zip file listed in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("AeroAlly/ppo-Lunalander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RajuKandasamy/ponniyinselvan_1.4b_alpha | RajuKandasamy | 2023-05-28T13:18:56Z | 14 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"ta",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-28T12:47:58Z | ---
license: apache-2.0
language:
- ta
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is trained on the PonniyinSelvan Tamil corpus dataset.
## Model Details
Base model used is EleutherAI's Pythia 1.4b
### Model Description
- **Finetuned from model [optional]:** Pythia 1.4b
## Uses
Purely for education and research purposes. Not fit for any kind of practical use.
## Bias, Risks, and Limitations
The bias, risks, and limitations of the base model apply.
## How to Get Started with the Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "RajuKandasamy/ponniyinselvan_1.4b_alpha"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_8bit=False).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
prompt="""வந்தியத்தேவன்"""
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
attention_mask = torch.ones_like(input_ids).to(model.device)
print("Thinking ...\n ")
with torch.no_grad():
output = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=256, early_stopping=False, temperature=0.9, top_p=0.9,top_k=500, do_sample=True,output_scores=True, pad_token_id=tokenizer.eos_token_id, repetition_penalty=1.2,eos_token_id=tokenizer.eos_token_id)
output_str = tokenizer.decode(output[0], skip_special_tokens=False)
print(output_str)
```
## Training Details
10 epochs
### Training Data
ponniyinselvan text corpus
### Training Procedure
Causal language modelling, with a custom BPE tokenizer
|
Kevin8093/pokemon-lora | Kevin8093 | 2023-05-28T12:57:03Z | 5 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-28T07:55:10Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Kevin8093/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
dsnhefwcbwifbvwbgvebedw/li | dsnhefwcbwifbvwbgvebedw | 2023-05-28T12:44:49Z | 0 | 0 | null | [
"zh",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] | null | 2023-05-28T12:42:03Z | ---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- zh
metrics:
- accuracy
--- |
koorukuroo/KcELECTRA_base_beep | koorukuroo | 2023-05-28T12:02:33Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-08-06T10:20:25Z | ---
license: mit
---
Let's take a look at the results of fine-tuning on the BEEP! dataset for 10 epochs.
| | Loss | Acc | Prec | Rec | F1 |
|-----|------|-------|------|-------|-------|
|TRAIN| 0.11 | 0.965 | 0.966| 0.972 | 0.969 |
| VAL | 0.73 | 0.807 | 0.947| 0.749 | 0.837 |
When classifying with a threshold of 0.5, the accuracy on the dev dataset is 0.85.
We also visualized the resulting embeddings with t-SNE.
https://v5.core.today/notebook/34XX0RYM4#KcELECTRA_base_beep.ipynb
```python
model = Model.load_from_checkpoint(latest_ckpt)

def infer(x):
    return torch.softmax(
        model(**model.tokenizer(x, return_tensors='pt')).logits, dim=-1)
```
```
infer('송중기 시대극은 믿고본다. 첫회 신선하고 좋았다.')
```
```
tensor([[0.7414, 0.2586]], grad_fn=<SoftmaxBackward>)
```
```
infer('유이 자연스러워진 연기')
```
```
tensor([[0.7627, 0.2373]], grad_fn=<SoftmaxBackward>)
``` |
schibfab/landscape_classification_vgg16_fine_tuned-v1 | schibfab | 2023-05-28T11:53:07Z | 1 | 0 | tf-keras | [
"tf-keras",
"image-classification",
"region:us"
] | image-classification | 2023-05-28T10:06:41Z | ---
pipeline_tag: image-classification
--- |
kyo-takano/open-calm-7b-8bit | kyo-takano | 2023-05-28T11:41:05Z | 11 | 10 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"quantized",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | text-generation | 2023-05-28T10:22:16Z | ---
license: cc-by-sa-4.0
language:
- ja
tags:
- japanese
- causal-lm
- quantized
inference: false
---
# OpenCALM-7B - 8bit
[](https://colab.research.google.com/gist/kyo-takano/0c7bf0479158aa137e0ba935dec70461/opencalm-7b-8bit.ipynb)
8-bit quantized version of [OpenCALM-7B by CyberAgent (under CC BY-SA 4.0)](https://huggingface.co/cyberagent/open-calm-7b)
When using this quantized model, please be sure to give credit to the original.
## Setup
```sh
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
```
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
MODEL_ID = "kyo-takano/open-calm-7b-8bit"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
- Developed by: CyberAgent, Inc.
- Quantized by: Kyo Takano
- Model type: Transformer-based Language Model
- Language: Japanese
- Library: GPT-NeoX
- License: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). When using this model, please provide appropriate credit to **CyberAgent, Inc.**
|
Gamabumba/Taxi-v3 | Gamabumba | 2023-05-28T11:37:43Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T11:36:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Gamabumba/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
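A short greedy-rollout sketch on top of the loaded model (this assumes the pickle holds the course-style dict with a "qtable" key and Gymnasium-style `reset`/`step` signatures):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```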
|
GadH/Monit | GadH | 2023-05-28T11:34:59Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T11:34:55Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Monit
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="GadH/Monit", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Gamabumba/q-FrozenLake-v1-4x4-noSlippery | Gamabumba | 2023-05-28T11:22:05Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T11:22:00Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Gamabumba/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
YCHuang2112/q-Taxi-v3 | YCHuang2112 | 2023-05-28T11:17:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T11:17:05Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="YCHuang2112/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
art3mis0970/moon | art3mis0970 | 2023-05-28T11:14:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T11:14:12Z | ---
license: creativeml-openrail-m
---
|
NightHaven/NabNab | NightHaven | 2023-05-28T10:52:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T10:51:54Z | ---
license: creativeml-openrail-m
---
|
glitchyordis/LunarLander-v2 | glitchyordis | 2023-05-28T10:28:30Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T10:16:37Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -113.79 +/- 36.26
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'env_id': 'LunarLander-v2'
'learning_rate': 0.00025
'seed': 1
'total_timesteps': 100000
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'glitchyordis/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
twlm/tw-pythia-6.9b-chat-v0_2 | twlm | 2023-05-28T10:26:44Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"zh",
"en",
"dataset:zetavg/ShareGPT-Processed",
"dataset:zetavg/coct-en-zh-tw-translations-twp-300k",
"dataset:zetavg/zh-tw-wikipedia",
"dataset:zetavg/tw-sinica-corpus-word-frequency",
"dataset:RyokoAI/ShareGPT52K",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-24T19:40:34Z | ---
datasets:
- zetavg/ShareGPT-Processed
- zetavg/coct-en-zh-tw-translations-twp-300k
- zetavg/zh-tw-wikipedia
- zetavg/tw-sinica-corpus-word-frequency
- RyokoAI/ShareGPT52K
language:
- zh
- en
---
# TW-Pythia-6.9B-Chat
**Taiwanese Mandarin Pythia Language Model, instruction-tuned for dialogue.**
Version 0.2
## Model Details
The TW-Pythia model is derived from the Apache-2.0-licensed [Pythia](https://github.com/EleutherAI/pythia) language model, with 8000 new Traditional Chinese tokens added and the embedding layers resized and re-trained.
### Basics
- **Developed by:** [@zetavg](https://github.com/zetavg) based on [EleutherAI](https://www.eleuther.ai/)'s [Pythia](https://github.com/EleutherAI/pythia) language model.
- **Model type:** Transformer-based GPT-NeoX Causal Language Model
- **Languages:** English, Traditional Chinese
- **License:** Unknown due to unconfirmed usage license of the training data
- **Derived from model:** [EleutherAI/pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
### Model Sources
- **Repository:** https://github.com/zetavg/twlm
- **Demo:** See https://hackmd.io/@z/twlm-demo
## Uses
Currently, this model has not demonstrated any practical value in Traditional Chinese processing without further training, but it does possess some basic Chinese-English translation capabilities.
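A minimal generation sketch (the repository id is taken from this card; the prompt and generation settings are illustrative, as the card does not document a prompt format):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("twlm/tw-pythia-6.9b-chat-v0_2")
model = AutoModelForCausalLM.from_pretrained("twlm/tw-pythia-6.9b-chat-v0_2")

inputs = tok("請把這句話翻譯成英文:今天天氣很好。", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```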
## Training Details
### Training Data
* 200k [English ↔ Traditional Chinese Sentences from the COCT Database](zetavg/coct-en-zh-tw-translations-twp-300k).
* ~8k English and Traditional Chinese mixed [ShareGPT data](zetavg/ShareGPT-Processed).
### Training Procedure
First, we build a BPE tokenizer based on the original Pythia tokenizer with 8000 new Traditional Chinese tokens added.
Then, we resize the embedding layer of the `pythia-6.9b` model to accommodate the new vocabulary size, and we train only the input/output embedding layers to allow the model to learn the new Traditional Chinese words and phrases.
Finally, LoRA weights are added to the model and fine-tuned for instruction following.
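A sketch of the embedding-resize step described above (the tokenizer path is hypothetical and the parameter-name matching is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/extended-tokenizer")  # hypothetical path to the extended BPE tokenizer
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-6.9b")
model.resize_token_embeddings(len(tok))

# Train only the input/output embeddings; GPT-NeoX names them embed_in / embed_out
for name, p in model.named_parameters():
    p.requires_grad = ("embed_in" in name) or ("embed_out" in name)
```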
#### Training Hyperparameters
- **Training regime:** `fp32`
- See: https://github.com/zetavg/twlm/blob/main/configs/ta01_p7b.yaml
### Hardware
* 1xH100 80GB GPU on Lambda Cloud (with Skypilot), about 20h in total. |
YCHuang2112/q-FrozenLake-v1-8x8-Slippery | YCHuang2112 | 2023-05-28T10:18:32Z | 0 | 0 | null | [
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T03:21:27Z | ---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.60 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="YCHuang2112/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
krasnova/sde-church-fine-tuned-van-gogh-256 | krasnova | 2023-05-28T09:43:36Z | 11 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:ScoreSdeVePipeline",
"region:us"
] | unconditional-image-generation | 2023-05-28T09:43:28Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('krasnova/sde-church-fine-tuned-van-gogh-256')
image = pipeline().images[0]
image
```
|
AustinCarthy/Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB | AustinCarthy | 2023-05-28T09:09:27Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-05-28T02:10:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benign_40K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0177
- Accuracy: 0.9976
- F1: 0.9746
- Precision: 0.9994
- Recall: 0.951
- Roc Auc Score: 0.9755
- Tpr At Fpr 0.01: 0.965
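A minimal classification sketch (the repository id is taken from this card; the example URL is illustrative):
```python
from transformers import pipeline

detector = pipeline("text-classification", model="AustinCarthy/Onlyphish_100KP_BFall_fromB_40KGen_topP_0.75_noaddedB")
print(detector("http://example-login-verify.com/account"))
```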
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0034 | 1.0 | 66875 | 0.0124 | 0.9970 | 0.9680 | 0.9926 | 0.9446 | 0.9721 | 0.9258 |
| 0.0015 | 2.0 | 133750 | 0.0227 | 0.9969 | 0.9667 | 0.9974 | 0.9378 | 0.9688 | 0.9346 |
| 0.0011 | 3.0 | 200625 | 0.0224 | 0.9969 | 0.9669 | 0.9991 | 0.9366 | 0.9683 | 0.9476 |
| 0.0005 | 4.0 | 267500 | 0.0200 | 0.9975 | 0.9731 | 0.9992 | 0.9484 | 0.9742 | 0.9618 |
| 0.0006 | 5.0 | 334375 | 0.0177 | 0.9976 | 0.9746 | 0.9994 | 0.951 | 0.9755 | 0.965 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
soBeauty/xlm-roberta-base-KFoldSukhoThaiOnly-mlm-20230524 | soBeauty | 2023-05-28T08:59:37Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-05-27T14:22:32Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-KFoldSukhoThaiOnly-mlm-20230524
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-KFoldSukhoThaiOnly-mlm-20230524
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rahulmishra/Binary-Classification | rahulmishra | 2023-05-28T08:57:06Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-28T08:43:37Z | ---
title: Binary-Classification
emoji: ⚡
colorFrom: purple
colorTo: yellow
sdk: gradio
sdk_version: 3.32.0
app_file: app.py
pinned: false
--- |
eVaggelia/myNewModel_ | eVaggelia | 2023-05-28T08:38:14Z | 61 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-07T13:45:47Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: myNewModel_
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# myNewModel_
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.4364
- Validation Loss: 9.1201
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2014 | 9.9498 | 0 |
| 9.8396 | 9.4625 | 1 |
| 9.4364 | 9.1201 | 2 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
DAMO-NLP-SG/zero-shot-classify-SSTuning-large | DAMO-NLP-SG | 2023-05-28T08:33:42Z | 344 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"Zero-Shot Classification",
"zero-shot-classification",
"arxiv:2305.11442",
"license:mit",
"autotrain_compatible",
"region:us"
] | zero-shot-classification | 2023-05-19T22:57:21Z | ---
inference: false
license: mit
tags:
- Zero-Shot Classification
pipeline_tag: zero-shot-classification
---
# Zero-shot text classification (large-sized model) trained with self-supervised tuning
Zero-shot text classification model trained with self-supervised tuning (SSTuning).
It was introduced in the paper [Zero-Shot Text Classification via Self-Supervised Tuning](https://arxiv.org/abs/2305.11442) by
Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
and first released in [this repository](https://github.com/DAMO-NLP-SG/SSTuning).
The model backbone is RoBERTa-large.
## Model description
The model is tuned with unlabeled data using a learning objective called first sentence prediction (FSP).
The FSP task is designed by considering both the nature of the unlabeled corpus and the input/output format of classification tasks.
The training and validation sets are constructed from the unlabeled corpus using FSP.
During tuning, BERT-like pre-trained masked language
models such as RoBERTa and ALBERT are employed as the backbone, and an output layer for classification is added.
The learning objective for FSP is to predict the index of the correct label.
A cross-entropy loss is used for tuning the model.
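As a rough, self-contained sketch of the idea (not the authors' code; see the paper for the exact procedure), an FSP training instance can be built by mixing a paragraph's true first sentence with first sentences sampled from other paragraphs, formatting them in the same option/separator layout used at inference, and using the index of the true sentence as the cross-entropy target:

```python
import random
import string

def build_fsp_instance(paragraphs, num_options=3, sep_token="</s>"):
    """Sketch of first sentence prediction (FSP) data construction.

    `paragraphs` is a list of (first_sentence, rest_of_paragraph) pairs;
    the first entry is the one the instance is built for.
    """
    target_first, target_rest = paragraphs[0]
    distractors = [first for first, _ in random.sample(paragraphs[1:], num_options - 1)]
    options = distractors + [target_first]
    random.shuffle(options)
    letters = string.ascii_uppercase
    option_str = " ".join(f"({letters[i]}) {opt}" for i, opt in enumerate(options))
    text = f"{option_str} {sep_token} {target_rest}"  # same layout as the inference code below
    label = options.index(target_first)               # index of the correct "label"
    return text, label

paras = [
    ("The cat sat on the mat.", "It purred quietly and fell asleep."),
    ("Stocks fell sharply today.", "Investors reacted to the new inflation figures."),
    ("The match ended in a draw.", "Both teams missed several clear chances."),
]
print(build_fsp_instance(paras))
```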
## Model variations
There are three versions of models released. The details are:
| Model | Backbone | #params | accuracy | Speed | #Training data
|------------|-----------|----------|-------|-------|----|
| [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | Low | High | 20.48M |
| [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | Medium | Medium | 5.12M |
| [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | High | Low| 5.12M |
Please note that zero-shot-classify-SSTuning-base is trained with more data (20.48M) than reported in the paper, as this increases accuracy.
## Intended uses & limitations
The model can be used for zero-shot text classification such as sentiment analysis and topic classification. No further finetuning is needed.
The number of labels should be 2 ~ 20.
### How to use
You can try the model with the Colab [Notebook](https://colab.research.google.com/drive/17bqc8cXFF-wDmZ0o8j7sbrQB9Cq7Gowr?usp=sharing).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch, string, random
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-large")
model = AutoModelForSequenceClassification.from_pretrained("DAMO-NLP-SG/zero-shot-classify-SSTuning-large")
text = "I love this place! The food is always so fresh and delicious."
list_label = ["negative", "positive"]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
list_ABC = [x for x in string.ascii_uppercase]
def check_text(model, text, list_label, shuffle=False):
list_label = [x+'.' if x[-1] != '.' else x for x in list_label]
list_label_new = list_label + [tokenizer.pad_token]* (20 - len(list_label))
if shuffle:
random.shuffle(list_label_new)
s_option = ' '.join(['('+list_ABC[i]+') '+list_label_new[i] for i in range(len(list_label_new))])
text = f'{s_option} {tokenizer.sep_token} {text}'
model.to(device).eval()
encoding = tokenizer([text],truncation=True, max_length=512,return_tensors='pt')
item = {key: val.to(device) for key, val in encoding.items()}
logits = model(**item).logits
logits = logits if shuffle else logits[:,0:len(list_label)]
probs = torch.nn.functional.softmax(logits, dim = -1).tolist()
predictions = torch.argmax(logits, dim=-1).item()
probabilities = [round(x,5) for x in probs[0]]
print(f'prediction: {predictions} => ({list_ABC[predictions]}) {list_label_new[predictions]}')
print(f'probability: {round(probabilities[predictions]*100,2)}%')
check_text(model, text, list_label)
# prediction: 1 => (B) positive.
# probability: 99.84%
```
### BibTeX entry and citation info
```bibtex
@inproceedings{acl23/SSTuning,
author = {Chaoqun Liu and
Wenxuan Zhang and
Guizhen Chen and
Xiaobao Wu and
Anh Tuan Luu and
Chip Hong Chang and
Lidong Bing},
title = {Zero-Shot Text Classification via Self-Supervised Tuning},
booktitle = {Findings of the Association for Computational Linguistics: ACL 2023},
year = {2023},
url = {https://arxiv.org/abs/2305.11442},
}
``` |
bagassword21/maudygoon | bagassword21 | 2023-05-28T08:30:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T08:29:49Z | ---
license: creativeml-openrail-m
---
|
eVaggelia/myNewModel | eVaggelia | 2023-05-28T08:18:15Z | 60 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-07T12:19:30Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: eVaggelia/myNewModel
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eVaggelia/myNewModel
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.2014
- Validation Loss: 9.9498
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2014 | 9.9498 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
shamiulshifat/ppo-Huggy | shamiulshifat | 2023-05-28T07:39:32Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-28T07:39:24Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: shamiulshifat/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MetaIX/GPT4-X-Alpasta-30b-4bit | MetaIX | 2023-05-28T06:46:44Z | 1,484 | 68 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-27T04:55:33Z | <p><strong><font size="5">Information</font></strong></p>
GPT4-X-Alpasta-30b works with Oobabooga's Text Generation Webui and KoboldAI.
<p>This is an attempt at improving Open Assistant's performance as an instruct while retaining its excellent prose. The merge consists of <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a> and <a href="https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor">Open Assistant's native fine-tune</a>.</p>
<p><strong><font size="5">Update 05.27.2023</font></strong></p>
<p>Updated the ggml quantizations to be compatible with the latest version of llamacpp (again).</p>
<p><strong>What's included</strong></p>
<P>GPTQ: 2 quantized versions. One was quantized using --true-sequential and --act-order optimizations, and the other was quantized using --true-sequential --groupsize 128 optimizations.</P>
<P>GGML: 3 quantized versions. One quantized using q4_1, another was quantized using q5_0, and the last one was quantized using q5_1.</P>
<p><strong>GPU/GPTQ Usage</strong></p>
<p>To use with your GPU using GPTQ pick one of the .safetensors along with all of the .jsons and .model files.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p>
<p><strong>CPU/GGML Usage</strong></p>
<p>To use your CPU using GGML(Llamacpp) you only need the single .bin ggml file.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p>
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">--true-sequential --act-order</font></strong></p>
<strong>Wikitext2</strong>: 4.998758792877197
<strong>Ptb-New</strong>: 9.802155494689941
<strong>C4-New</strong>: 7.341384410858154
<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, therefore evaluations are minimally higher. However, this version allows fitting the whole model at full context using only 24GB VRAM.
<p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p>
<strong>Wikitext2</strong>: 4.70257568359375
<strong>Ptb-New</strong>: 9.323467254638672
<strong>C4-New</strong>: 7.041860580444336
<strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM. |
jason1234/arzington_bert_embedding_model_v2 | jason1234 | 2023-05-28T06:46:16Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-05-28T06:32:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 256 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 410,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
HasinMDG/distilroberta_SD_government_v2 | HasinMDG | 2023-05-28T06:23:03Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-28T06:22:52Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/distilroberta_SD_government_v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/distilroberta_SD_government_v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
FaizanMunsaf/t5-squad-v1 | FaizanMunsaf | 2023-05-28T06:10:59Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-17T07:03:40Z | ---
license: other
---
This model is fine-tuned from T5 and is used for question generation. It was trained on roughly 80,000 dataset fields, which helps it give better results for your needs.
If you need any sort of help with it, you can ask me.
Thanks for choosing my T5 model.
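No usage example is provided. A minimal sketch might look like the one below; note that the `answer: ... context: ...` prompt format is an assumption commonly used by SQuAD-based question-generation models, not something confirmed by this card, so check it against your own results:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "FaizanMunsaf/t5-squad-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: an answer span plus its surrounding context.
text = "answer: Paris  context: Paris is the capital of France."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```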
|
torreygooch/ppo-Huggy3 | torreygooch | 2023-05-28T05:53:04Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-28T05:46:48Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: torreygooch/ppo-Huggy3
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/Hyperledger10Classic_512 | YakovElm | 2023-05-28T05:45:24Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-28T05:44:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic_512
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger10Classic_512
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2833
- Train Accuracy: 0.8900
- Validation Loss: 0.3935
- Validation Accuracy: 0.8610
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3645 | 0.8731 | 0.3704 | 0.8600 | 0 |
| 0.3302 | 0.8838 | 0.3660 | 0.8600 | 1 |
| 0.2833 | 0.8900 | 0.3935 | 0.8610 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Oxygene-Monitor/twitter-roberta-base-sentiment-latest | Oxygene-Monitor | 2023-05-28T05:45:10Z | 6 | 0 | null | [
"pytorch",
"tf",
"roberta",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"region:us"
] | null | 2024-09-30T19:13:57Z | ---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
## Example Pipeline
```python
from transformers import pipeline
model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = f"cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)
text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
### References
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
author = "Camacho-collados, Jose and
Rezaee, Kiamehr and
Riahi, Talayeh and
Ushio, Asahi and
Loureiro, Daniel and
Antypas, Dimosthenis and
Boisson, Joanne and
Espinosa Anke, Luis and
Liu, Fangyu and
Mart{\'\i}nez C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2022",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-demos.5",
pages = "38--49"
}
```
```
@inproceedings{loureiro-etal-2022-timelms,
title = "{T}ime{LM}s: Diachronic Language Models from {T}witter",
author = "Loureiro, Daniel and
Barbieri, Francesco and
Neves, Leonardo and
Espinosa Anke, Luis and
Camacho-collados, Jose",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-demo.25",
doi = "10.18653/v1/2022.acl-demo.25",
pages = "251--260"
}
```
|
Johnhex/Clam1.3 | Johnhex | 2023-05-28T05:44:23Z | 2 | 2 | diffusers | [
"diffusers",
"stable duffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-28T05:41:37Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable diffusion
--- |
amjadfqs/swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_09 | amjadfqs | 2023-05-28T05:36:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-26T22:34:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
model-index:
- name: swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_09
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9076983503534957
- name: Precision
type: precision
value: 0.9184297970931635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_09
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2586
- Accuracy: 0.9077
- F1 Score: 0.9093
- Precision: 0.9184
- Sensitivity: 0.9071
- Specificity: 0.9766
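No usage example is included in this card. A minimal inference sketch, assuming the image processor was saved alongside the model weights, might look like:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="amjadfqs/swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_09",
)

# "scan.png" is a placeholder path for a brain MRI slice.
print(classifier("scan.png"))
```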
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:|
| 1.4243 | 0.99 | 19 | 1.2818 | 0.4124 | 0.3570 | 0.4910 | 0.4019 | 0.8403 |
| 1.1046 | 1.97 | 38 | 0.8873 | 0.6658 | 0.6584 | 0.7235 | 0.6608 | 0.9117 |
| 0.5232 | 2.96 | 57 | 0.5753 | 0.7671 | 0.7654 | 0.8063 | 0.7631 | 0.9395 |
| 0.3235 | 4.0 | 77 | 0.4476 | 0.8256 | 0.8272 | 0.8496 | 0.8228 | 0.9549 |
| 0.2586 | 4.99 | 96 | 0.3886 | 0.8590 | 0.8608 | 0.8764 | 0.8567 | 0.9638 |
| 0.1986 | 5.97 | 115 | 0.3538 | 0.8641 | 0.8663 | 0.8816 | 0.8624 | 0.9652 |
| 0.166 | 6.96 | 134 | 0.3543 | 0.8649 | 0.8668 | 0.8849 | 0.8637 | 0.9655 |
| 0.1345 | 8.0 | 154 | 0.3729 | 0.8586 | 0.8610 | 0.8837 | 0.8571 | 0.9640 |
| 0.1197 | 8.99 | 173 | 0.2879 | 0.8975 | 0.8987 | 0.9098 | 0.8961 | 0.9740 |
| 0.1033 | 9.97 | 192 | 0.2810 | 0.8998 | 0.9013 | 0.9128 | 0.8983 | 0.9746 |
| 0.0957 | 10.96 | 211 | 0.3239 | 0.8802 | 0.8818 | 0.8988 | 0.8795 | 0.9696 |
| 0.085 | 12.0 | 231 | 0.2586 | 0.9077 | 0.9093 | 0.9184 | 0.9071 | 0.9766 |
| 0.0769 | 12.99 | 250 | 0.2662 | 0.9018 | 0.9036 | 0.9149 | 0.9011 | 0.9751 |
| 0.0758 | 13.97 | 269 | 0.2830 | 0.8951 | 0.8970 | 0.9102 | 0.8945 | 0.9734 |
| 0.068 | 14.96 | 288 | 0.2757 | 0.8967 | 0.8986 | 0.9113 | 0.8960 | 0.9738 |
| 0.0641 | 16.0 | 308 | 0.2743 | 0.8991 | 0.9008 | 0.9136 | 0.8984 | 0.9744 |
| 0.0623 | 16.99 | 327 | 0.2713 | 0.8987 | 0.9001 | 0.9127 | 0.8982 | 0.9743 |
| 0.0542 | 17.97 | 346 | 0.2650 | 0.8987 | 0.9005 | 0.9128 | 0.8980 | 0.9743 |
| 0.0573 | 18.96 | 365 | 0.2709 | 0.8963 | 0.8981 | 0.9112 | 0.8957 | 0.9737 |
| 0.058 | 19.74 | 380 | 0.2778 | 0.8947 | 0.8965 | 0.9101 | 0.8942 | 0.9733 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
HasinMDG/MLM_distilroberta_SD_company | HasinMDG | 2023-05-28T05:30:18Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-28T05:30:06Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/MLM_distilroberta_SD_company
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/MLM_distilroberta_SD_company")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
HasinMDG/MLM_distilroberta_SD_government | HasinMDG | 2023-05-28T05:21:27Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-28T05:21:15Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/MLM_distilroberta_SD_government
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/MLM_distilroberta_SD_government")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
firqaaa/indo-alpaca-lora-7b | firqaaa | 2023-05-28T05:09:14Z | 0 | 2 | transformers | [
"transformers",
"llama",
"alpaca",
"lora",
"text-generation",
"id",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-29T02:56:09Z | ---
language:
- id
pipeline_tag: text-generation
license: cc-by-nc-4.0
library_name: transformers
tags:
- llama
- alpaca
- lora
---
# About :
This 🦙 Llama model was trained on a translated Alpaca dataset in Bahasa Indonesia. It uses Parameter Efficient Fine Tuning and LoRA to enable training on consumer-grade GPU hardware.
# How to Use :
## Load the 🦙 Alpaca-LoRA model
```python
import torch
import bitsandbytes as bnb
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from peft import PeftModel, PeftConfig, prepare_model_for_int8_training, LoraConfig, get_peft_model
peft_model_id = "firqaaa/indo-Alpaca-LoRA-7b"
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto")
# Load the LoRA model
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Prompt Template
Prepare the prompt template
```python
instruction = "Tuliskan deret bilangan fibbonaci. Tulis jawaban/respons dalam Bahasa Indonesia."
PROMPT = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""
```
## Evaluation
Feel free to change the parameters inside `GenerationConfig` to get better results.
```python
inputs = tokenizer(
PROMPT,
return_tensors="pt"
)
input_ids = inputs["input_ids"].cuda()
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.95,
top_k=40,
num_beams=4,
repetition_penalty=1.15,
)
print("Generating...")
print("Instruction : {}".format(instruction))
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=512,
)
print("Response : ")
for s in generation_output.sequences:
print(tokenizer.decode(s).split("### Response:")[1])
```
## Note :
Due to the high loss and limited compute, we will update this model frequently to ensure the quality of the generated text. |
cardiffnlp/twitter-roberta-base-emotion | cardiffnlp | 2023-05-28T05:08:00Z | 282,970 | 42 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # Twitter-roBERTa-base for Emotion Recognition
This is a RoBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
<b>New!</b> We just released a new emotion recognition model trained with more emotion types and with a newer RoBERTa-based model.
See [twitter-roberta-base-emotion-multilabel-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion-multilabel-latest) and [TweetNLP](https://github.com/cardiffnlp/tweetnlp) for more details.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='emotion'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Celebrating my promotion 😎"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Celebrating my promotion 😎"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) joy 0.9382
2) optimism 0.0362
3) anger 0.0145
4) sadness 0.0112
```
|
qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication | qbao775 | 2023-05-28T05:01:05Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"logical-reasoning",
"logical-equivalence",
"constrastive-learning",
"en",
"arxiv:2305.12599",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-16T09:36:22Z | ---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- logical-reasoning
- logical-equivalence
- constrastive-learning
---
# AMR-LE
This is a branch which includes the model weights for AMR-LE. AMR-LE is a model that has been fine-tuned on AMR-based logic-driven augmented data. The data is formed as `(original sentence, logical equivalence sentence, logical inequivalence sentence)`. We use Abstract Meaning Representation (AMR) to automatically construct logical equivalence and logical inequivalence sentences. We use contrastive learning to train the model to identify whether two sentences are logically equivalent or logically inequivalent. You are welcome to fine-tune the model weights on downstream tasks such as logical reasoning reading comprehension (ReClor and LogiQA) and natural language inference (MNLI, MRPC, QNLI, RTE and QQP). We achieved #2 on the ReClor Leaderboard.
Here are the original links for AMR-LE, including the paper, project and leaderboard.
Paper: https://arxiv.org/abs/2305.12599
Project: https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
Leaderboard: https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347
In this repository, we trained DeBERTa-V2-XXLarge on the sentence pairs constructed by our AMR-LE. We use AMR with three logical equivalence laws `(Contraposition law, Double negation law, Implication law)` to construct three different types of logical equivalence/inequivalence sentences.
## How to interact with the model on this page?
Here are some test examples that you can copy and paste into the user input area on the right-hand side.
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is false.
```
If Alice is happy, then Bob is smart.
If Alice is not happy, then Bob is smart.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is true.
```
If Alice is happy, then Bob is smart.
If Bob is not smart, then Alice is not happy.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the double negation law `(A <=> not not A)` to show that the following example is false.
```
Alice is happy.
Alice is not happy.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the double negation law `(A <=> not not A)` to show that the following example is true.
```
Alice is happy.
Alice is not sad.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the implication law `(If A then B <=> not A or B)` to show that the following example is false. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is kind or Bob is clever.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the implication law `(If A then B <=> not A or B)` to show that the following example is true. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is not kind or Bob is clever.
```
## How to load the model weight?
```
from transformers import AutoModel
model = AutoModel.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication")
```
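Note that `AutoModel` returns the bare encoder without the classification head. To score a sentence pair as in the examples above, a sketch along the following lines could be used, assuming the repository also ships a tokenizer and a sequence-classification head:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sent1 = "If Alice is happy, then Bob is smart."
sent2 = "If Bob is not smart, then Alice is not happy."

inputs = tokenizer(sent1, sent2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# 1 -> logically equivalent, 0 -> logically inequivalent (per the examples above)
print(logits.argmax(dim=-1).item())
```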
## Citation
```
@article{bao2023contrastive,
title={Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text},
author={Bao, Qiming and Peng, Alex Yuxuan and Deng, Zhenyun and Zhong, Wanjun and Tan, Neset and Young, Nathan and Chen, Yang and Zhu, Yonghua and Witbrock, Michael and Liu, Jiamou},
journal={arXiv preprint arXiv:2305.12599},
year={2023}
}
``` |
qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-3 | qbao775 | 2023-05-28T04:58:12Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"logical-reasoning",
"logical-equivalence",
"constrastive-learning",
"en",
"arxiv:2305.12599",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-16T09:38:22Z | ---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- logical-reasoning
- logical-equivalence
- constrastive-learning
---
# AMR-LE
This is a branch which includes the model weights for AMR-LE. AMR-LE is a model that has been fine-tuned on AMR-based logic-driven augmented data. The data is formed as `(original sentence, logical equivalence sentence, logical inequivalence sentence)`. We use Abstract Meaning Representation (AMR) to automatically construct logical equivalence and logical inequivalence sentences. We use contrastive learning to train the model to identify whether two sentences are logically equivalent or logically inequivalent. You are welcome to fine-tune the model weights on downstream tasks such as logical reasoning reading comprehension (ReClor and LogiQA) and natural language inference (MNLI, MRPC, QNLI, RTE and QQP). We achieved #2 on the ReClor Leaderboard.
Here are the original links for AMR-LE, including the paper, project and leaderboard.
Paper: https://arxiv.org/abs/2305.12599
Project: https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
Leaderboard: https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347
In this repository, we upload the model weights trained on a dataset with a positive-to-negative sample ratio of 1:3. We use AMR with four logical equivalence laws `(Contraposition law, Commutative law, Implication law, Double negation law)` to construct four different types of logical equivalence/inequivalence sentences.
## How to interact with the model on this page?
Here are some test examples that you can copy and paste into the user input area on the right-hand side.
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is false.
```
If Alice is happy, then Bob is smart.
If Alice is not happy, then Bob is smart.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the contraposition law `(If A then B <=> If not B then not A)` to show that the following example is true.
```
If Alice is happy, then Bob is smart.
If Bob is not smart, then Alice is not happy.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the double negation law `(A <=> not not A)` to show that the following example is false.
```
Alice is happy.
Alice is not happy.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the double negation law `(A <=> not not A)` to show that the following example is true.
```
Alice is happy.
Alice is not sad.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the implication law `(If A then B <=> not A or B)` to show that the following example is false. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is kind or Bob is clever.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the implication law `(If A then B <=> not A or B)` to show that the following example is true. The `or` in `not A or B` refers to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is not kind or Bob is clever.
```
The expected answer for the following example is that they are logically inequivalent, which is 0. Use the commutative law `(A and B <=> B and A)` to show that the following example is false.
```
The bald eagle is clever and the wolf is fierce.
The wolf is not fierce and the bald eagle is not clever.
```
The expected answer for the following example is that they are logically equivalent, which is 1. Use the commutative law `(A and B <=> B and A)` to show that the following example is true.
```
The bald eagle is clever and the wolf is fierce.
The wolf is fierce and the bald eagle is clever.
```
## How to load the model weight?
```
from transformers import AutoModel
model = AutoModel.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-3")
```
## Citation
```
@article{bao2023contrastive,
title={Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text},
author={Bao, Qiming and Peng, Alex Yuxuan and Deng, Zhenyun and Zhong, Wanjun and Tan, Neset and Young, Nathan and Chen, Yang and Zhu, Yonghua and Witbrock, Michael and Liu, Jiamou},
journal={arXiv preprint arXiv:2305.12599},
year={2023}
}
``` |
cardiffnlp/tweet-topic-21-multi | cardiffnlp | 2023-05-28T04:56:09Z | 12,632 | 66 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"dataset:cardiffnlp/tweet_topic_multi",
"arxiv:2209.09824",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-06T14:52:42Z | ---
language: en
widget:
- text: It is great to see athletes promoting awareness for climate change.
datasets:
- cardiffnlp/tweet_topic_multi
license: mit
metrics:
- f1
- accuracy
pipeline_tag: text-classification
---
# tweet-topic-21-multi
This model is based on a [TimeLMs](https://github.com/cardiffnlp/timelms) language model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and finetuned for multi-label topic classification on a corpus of 11,267 [tweets](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). This model is suitable for English.
- Reference Paper: [TweetTopic](https://arxiv.org/abs/2209.09824) (COLING 2022).
<b>Labels</b>:
| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports |
| 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure |
| 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import expit
MODEL = f"cardiffnlp/tweet-topic-21-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "It is great to see athletes promoting awareness for climate change."
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
scores = output[0][0].detach().numpy()
scores = expit(scores)
predictions = (scores >= 0.5) * 1
# TF
#tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = tf_model.config.id2label
#text = "It is great to see athletes promoting awareness for climate change."
#tokens = tokenizer(text, return_tensors='tf')
#output = tf_model(**tokens)
#scores = output[0][0]
#scores = expit(scores)
#predictions = (scores >= 0.5) * 1
# Map to classes
for i in range(len(predictions)):
if predictions[i]:
print(class_mapping[i])
```
Output:
```
news_&_social_concern
sports
```
### BibTeX entry and citation info
Please cite the [reference paper](https://aclanthology.org/2022.coling-1.299/) if you use this model.
```bibtex
@inproceedings{antypas-etal-2022-twitter,
title = "{T}witter Topic Classification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Silva, Vitor and
Neves, Leonardo and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.299",
pages = "3386--3400"
}
``` |
indigorange/dqn-SpaceInvadersNoFrameskip-v4 | indigorange | 2023-05-28T04:35:59Z | 12 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T04:35:23Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 645.50 +/- 137.41
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga indigorange -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga indigorange -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
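If you prefer loading the checkpoint directly in Python instead of going through the Zoo's `enjoy` script, a minimal sketch is below. The `.zip` filename is an assumption based on the usual RL Zoo naming convention and is not verified against this repo.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename assumed from RL Zoo naming)
checkpoint = load_from_hub(
    repo_id="indigorange/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Shrink the replay buffer since it is not needed for inference; evaluation still requires
# the same AtariWrapper + 4-frame stacking used in training (see the hyperparameters below).
model = DQN.load(checkpoint, custom_objects={"buffer_size": 1})
```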
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga indigorange
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
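For reference, a rough sketch of how these hyperparameters map onto a plain SB3 training script outside the RL Zoo; the Atari wrapping helpers below stand in for the Zoo's `env_wrapper` entry and are my assumption about an equivalent setup.
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies the standard AtariWrapper; VecFrameStack reproduces frame_stack=4
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```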
|
ddoc/sdw | ddoc | 2023-05-28T04:29:28Z | 0 | 1 | null | [
"arxiv:2211.06679",
"region:us"
] | null | 2023-05-28T02:56:00Z | # Stable Diffusion web UI
A browser interface based on the Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API (a minimal usage sketch follows the feature list)
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
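For the API item above, a minimal sketch of calling the txt2img endpoint is shown below. It assumes the webui was launched with the `--api` flag and is listening on the default local port; adjust the URL and payload to your setup.
```python
import base64
import requests

# Assumes the webui was started with --api on the default host/port
payload = {"prompt": "a photo of an astronaut riding a horse", "steps": 20}
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()

# Generated images come back as base64-encoded PNG strings in the "images" field
with open("txt2img-output.png", "wb") as f:
    f.write(base64.b64decode(response.json()["images"][0]))
```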
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
Chudo-chu/SD-t2p-82 | Chudo-chu | 2023-05-28T04:02:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-28T04:02:06Z | ---
license: creativeml-openrail-m
---
|
ericalt/a2c-AntBulletEnv-v0 | ericalt | 2023-05-28T03:47:39Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T03:46:33Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1960.91 +/- 50.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed, not verified against the repo contents
checkpoint = load_from_hub("ericalt/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
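To actually step the loaded policy, the PyBullet environments also need to be registered; a short sketch, assuming `pybullet` is installed and the old Gym step API used by SB3 1.x:
```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers AntBulletEnv-v0 with gym

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```
Note that if the training pipeline normalized observations (e.g. with `VecNormalize`), the same statistics would have to be loaded to reproduce the reported return; that detail is not stated on this card.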
|
jonweb37/profitlovy | jonweb37 | 2023-05-28T03:40:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-27T16:49:38Z | ---
license: creativeml-openrail-m
---
|
KaiquanMah/q-FrozenLake-v1-4x4-noSlippery | KaiquanMah | 2023-05-28T03:24:24Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-28T03:24:22Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumed: the Deep RL course notebooks use Gymnasium

# load_from_hub is assumed to be the pickle-based helper defined in the course notebook
model = load_from_hub(repo_id="KaiquanMah/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"], is_slippery=False)  # this checkpoint is the no_slippery variant
```
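A greedy rollout with the loaded Q-table is sketched below; the `"qtable"` key follows the course's model-dict layout and is an assumption about this checkpoint's contents.
```python
import numpy as np

# Act greedily with respect to the learned Q-values until the episode ends
state, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
print("episode reward:", reward)
```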
|