modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Word2vec/nlpl_71 | Word2vec | 2023-07-04T15:23:04Z | 0 | 0 | null | [
"word2vec",
"ukr",
"dataset:Ukrainian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T13:11:18Z | ---
language: ukr
license: cc-by-4.0
tags:
- word2vec
datasets: Ukrainian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 942071 corresponding to 574319117 tokens from the dataset `Ukrainian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_71", filename="model.bin"), binary=True, unicode_errors="ignore")
```
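Once loaded, the vectors behave like any gensim `KeyedVectors` object. The sketch below shows typical queries; it assumes `model` was loaded with the snippet above, and the Ukrainian query token is only an illustrative placeholder that may fall outside the vocabulary.
```python
# Minimal query sketch; assumes `model` was loaded with the snippet above.
# The token below is an illustrative placeholder and may be out of vocabulary.
token = "мова"
if token in model.key_to_index:
    vector = model[token]                                  # 100-dimensional vector for the token
    for word, score in model.most_similar(token, topn=5):  # nearest neighbours by cosine similarity
        print(word, round(score, 3))
```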
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/71.zip |
Word2vec/nlpl_70 | Word2vec | 2023-07-04T15:22:43Z | 0 | 0 | null | [
"word2vec",
"tur",
"dataset:Turkish_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:51:26Z | ---
language: tur
license: cc-by-4.0
tags:
- word2vec
datasets: Turkish_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 3633786 corresponding to 3668140172 tokens from the dataset `Turkish_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_70", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/70.zip |
Word2vec/nlpl_68 | Word2vec | 2023-07-04T15:22:18Z | 0 | 0 | null | [
"word2vec",
"spa",
"dataset:Spanish_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T13:10:25Z | ---
language: spa
license: cc-by-4.0
tags:
- word2vec
datasets: Spanish_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 2656057 corresponding to 5967877096 tokens from the dataset `Spanish_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_68", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/68.zip |
Word2vec/nlpl_66 | Word2vec | 2023-07-04T15:21:53Z | 0 | 0 | null | [
"word2vec",
"slk",
"dataset:Slovak_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:49:41Z | ---
language: slk
license: cc-by-4.0
tags:
- word2vec
datasets: Slovak_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 1188804 corresponding to 855770850 tokens from the dataset `Slovak_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_66", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/66.zip
|
Word2vec/nlpl_65 | Word2vec | 2023-07-04T15:21:32Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:48:27Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 3338424 corresponding to 3386127535 tokens from the dataset `Russian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_65", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/65.zip |
Word2vec/nlpl_63 | Word2vec | 2023-07-04T15:21:02Z | 0 | 0 | null | [
"word2vec",
"por",
"dataset:Portuguese_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:46:03Z | ---
language: por
license: cc-by-4.0
tags:
- word2vec
datasets: Portuguese_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 2536452 corresponding to 6173041573 tokens from the dataset `Portuguese_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_63", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/63.zip |
Word2vec/nlpl_61 | Word2vec | 2023-07-04T15:20:33Z | 0 | 0 | null | [
"word2vec",
"fas",
"dataset:Persian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:43:30Z | ---
language: fas
license: cc-by-4.0
tags:
- word2vec
datasets: Persian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 966446 corresponding to 1180218836 tokens from the dataset `Persian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_61", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/61.zip |
Word2vec/nlpl_62 | Word2vec | 2023-07-04T15:20:23Z | 0 | 0 | null | [
"word2vec",
"pol",
"dataset:Polish_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:44:03Z | ---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 4420598 corresponding to 5489171333 tokens from the dataset `Polish_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_62", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/62.zip |
imvladikon/bert-large-cased-finetuned-conll03-english | imvladikon | 2023-07-04T15:18:47Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ```json
{
  "epoch": 2.0,
  "eval_accuracy": 0.9878289280037675,
  "eval_f1": 0.9524406066842648,
  "eval_loss": 0.06057225540280342,
  "eval_mem_cpu_alloc_delta": 2711552,
  "eval_mem_cpu_peaked_delta": 2113536,
  "eval_mem_gpu_alloc_delta": 0,
  "eval_mem_gpu_peaked_delta": 126590464,
  "eval_precision": 0.9499330655957162,
  "eval_recall": 0.9549614211376278,
  "eval_runtime": 20.9379,
  "eval_samples_per_second": 155.221
}
```
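The card above only reports evaluation metrics. Below is a hedged usage sketch with the generic `transformers` token-classification pipeline; the example sentence and the `aggregation_strategy` choice are assumptions, not part of the original card.
```python
from transformers import pipeline

# Hedged sketch: load the checkpoint with the generic token-classification pipeline.
# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="imvladikon/bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```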
|
Word2vec/nlpl_60 | Word2vec | 2023-07-04T15:18:03Z | 0 | 0 | null | [
"word2vec",
"chu",
"dataset:Old_Church_Slavonic_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:34:55Z | ---
language: chu
license: cc-by-4.0
tags:
- word2vec
datasets: Old_Church_Slavonic_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 357 corresponding to 21380 tokens from the dataset `Old_Church_Slavonic_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_60", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/60.zip |
Word2vec/nlpl_56 | Word2vec | 2023-07-04T15:17:20Z | 0 | 0 | null | [
"word2vec",
"lat",
"dataset:Latin_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:33:18Z | ---
language: lat
license: cc-by-4.0
tags:
- word2vec
datasets: Latin_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 555381 corresponding to 256719661 tokens from the dataset `Latin_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_56", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/56.zip |
Word2vec/nlpl_57 | Word2vec | 2023-07-04T15:17:08Z | 0 | 0 | null | [
"word2vec",
"lav",
"dataset:Latvian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:33:34Z | ---
language: lav
license: cc-by-4.0
tags:
- word2vec
datasets: Latvian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 560445 corresponding to 289095637 tokens from the dataset `Latvian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_57", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/57.zip |
Word2vec/nlpl_55 | Word2vec | 2023-07-04T15:16:24Z | 0 | 0 | null | [
"word2vec",
"kor",
"dataset:Korean_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:32:24Z | ---
language: kor
license: cc-by-4.0
tags:
- word2vec
datasets: Korean_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 1780757 corresponding to 551643170 tokens from the dataset `Korean_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_55", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/55.zip
|
Word2vec/nlpl_54 | Word2vec | 2023-07-04T15:16:11Z | 0 | 0 | null | [
"word2vec",
"kaz",
"dataset:Kazakh_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:32:08Z | ---
language: kaz
license: cc-by-4.0
tags:
- word2vec
datasets: Kazakh_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 176643 corresponding to 57048825 tokens from the dataset `Kazakh_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_54", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/54.zip |
Word2vec/nlpl_52 | Word2vec | 2023-07-04T15:15:46Z | 0 | 0 | null | [
"word2vec",
"ita",
"dataset:Italian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:28:47Z | ---
language: ita
license: cc-by-4.0
tags:
- word2vec
datasets: Italian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 2469122 corresponding to 5364254134 tokens from the dataset `Italian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_52", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/52.zip
|
Tommert25/multibertfinetuned0407 | Tommert25 | 2023-07-04T15:15:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-04T10:41:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multibertfinetuned0407
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multibertfinetuned0407
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4688
- Precision: 0.4879
- Recall: 0.4345
- F1: 0.4597
- Accuracy: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
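As a rough illustration, the hyperparameters above map onto `transformers.TrainingArguments` roughly as in the sketch below; the `output_dir` is a placeholder, and the Adam betas/epsilon are the listed values (which are also the library defaults).
```python
from transformers import TrainingArguments

# Hedged sketch of the listed hyperparameters as TrainingArguments;
# output_dir is a placeholder, not taken from the original card.
training_args = TrainingArguments(
    output_dir="multibertfinetuned0407",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```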
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 131 | 0.4688 | 0.4879 | 0.4345 | 0.4597 | 0.8764 |
| No log | 2.0 | 262 | 0.5224 | 0.5400 | 0.4884 | 0.5129 | 0.8777 |
| No log | 3.0 | 393 | 0.5814 | 0.4900 | 0.4900 | 0.4900 | 0.8683 |
| 0.3219 | 4.0 | 524 | 0.6226 | 0.5125 | 0.5069 | 0.5097 | 0.8750 |
| 0.3219 | 5.0 | 655 | 0.6593 | 0.5008 | 0.4977 | 0.4992 | 0.8771 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Word2vec/nlpl_49 | Word2vec | 2023-07-04T15:14:48Z | 0 | 0 | null | [
"word2vec",
"hun",
"dataset:Hungarian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:20:11Z | ---
language: hun
license: cc-by-4.0
tags:
- word2vec
datasets: Hungarian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 2702663 corresponding to 1694170960 tokens from the dataset `Hungarian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_49", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/49.zip |
Word2vec/nlpl_47 | Word2vec | 2023-07-04T15:14:14Z | 0 | 0 | null | [
"word2vec",
"heb",
"dataset:Hebrew_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:19:48Z | ---
language: heb
license: cc-by-4.0
tags:
- word2vec
datasets: Hebrew_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 672384 corresponding to 643272923 tokens from the dataset `Hebrew_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_47", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/47.zip |
Word2vec/nlpl_45 | Word2vec | 2023-07-04T15:13:37Z | 0 | 0 | null | [
"word2vec",
"deu",
"dataset:German_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:17:44Z | ---
language: deu
license: cc-by-4.0
tags:
- word2vec
datasets: German_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 4946997 corresponding to 6298202810 tokens from the dataset `German_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_45", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/45.zip |
Word2vec/nlpl_42 | Word2vec | 2023-07-04T15:12:50Z | 0 | 0 | null | [
"word2vec",
"fin",
"dataset:Finnish_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:16:45Z | ---
language: fin
license: cc-by-4.0
tags:
- word2vec
datasets: Finnish_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 2433286 corresponding to 1052546686 tokens from the dataset `Finnish_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_42", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/42.zip |
Word2vec/nlpl_41 | Word2vec | 2023-07-04T15:12:33Z | 0 | 0 | null | [
"word2vec",
"est",
"dataset:Estonian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:16:26Z | ---
language: est
license: cc-by-4.0
tags:
- word2vec
datasets: Estonian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 926795 corresponding to 341986187 tokens from the dataset `Estonian_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_41", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/41.zip |
Word2vec/nlpl_40 | Word2vec | 2023-07-04T15:12:08Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T12:00:54Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 4027169 corresponding to 9974357994 tokens from the dataset `English_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_40", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/40.zip |
fawzyhamdy/autotrain-datadata-72110138863 | fawzyhamdy | 2023-07-04T15:12:08Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:fawzyhamdy/autotrain-data-datadata",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-07-04T13:57:31Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- fawzyhamdy/autotrain-data-datadata
co2_eq_emissions:
emissions: 49.24949877129796
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 72110138863
- CO2 Emissions (in grams): 49.2495
## Validation Metrics
- Loss: 2.501
- Rouge1: 1.345
- Rouge2: 0.000
- RougeL: 1.343
- RougeLsum: 1.365
- Gen Len: 18.982
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/fawzyhamdy/autotrain-datadata-72110138863
```
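The same request can be made from Python; the sketch below is a hedged `requests` equivalent of the cURL call, with the URL and token placeholder copied from the example above.
```python
import requests

# Hedged Python equivalent of the cURL example above; the URL and the
# YOUR_HUGGINGFACE_API_KEY placeholder are copied verbatim from that example.
API_URL = "https://api-inference.huggingface.co/fawzyhamdy/autotrain-datadata-72110138863"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```
|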
Word2vec/nlpl_38 | Word2vec | 2023-07-04T15:11:38Z | 0 | 0 | null | [
"word2vec",
"dan",
"dataset:Danish_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T11:59:05Z | ---
language: dan
license: cc-by-4.0
tags:
- word2vec
datasets: Danish_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 1655886 corresponding to 1641664057 tokens from the dataset `Danish_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_38", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/38.zip |
Word2vec/nlpl_37 | Word2vec | 2023-07-04T15:11:21Z | 0 | 0 | null | [
"word2vec",
"ces",
"dataset:Czech_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T11:58:12Z | ---
language: ces
license: cc-by-4.0
tags:
- word2vec
datasets: Czech_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 1767815 corresponding to 2113686735 tokens from the dataset `Czech_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_37", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/37.zip |
Word2vec/nlpl_34 | Word2vec | 2023-07-04T15:10:36Z | 0 | 0 | null | [
"word2vec",
"cat",
"dataset:Catalan_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T11:56:51Z | ---
language: cat
license: cc-by-4.0
tags:
- word2vec
datasets: Catalan_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 799020 corresponding to 897648446 tokens from the dataset `Catalan_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_34", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/34.zip |
Word2vec/nlpl_32 | Word2vec | 2023-07-04T15:10:11Z | 0 | 0 | null | [
"word2vec",
"eus",
"dataset:Basque_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T11:56:25Z | ---
language: eus
license: cc-by-4.0
tags:
- word2vec
datasets: Basque_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 426736 corresponding to 164898542 tokens from the dataset `Basque_CoNLL17_corpus`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_32", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/32.zip |
Word2vec/nlpl_29 | Word2vec | 2023-07-04T15:02:30Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:Gigaword_5th_Edition",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:10:56Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: Gigaword_5th_Edition
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 297790 corresponding to 4815382730 tokens from the dataset `Gigaword_5th_Edition`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_29", filename="model.bin"), binary=True, unicode_errors="ignore")
```
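Because this model was trained on lemmatized, POS-tagged text, vocabulary keys typically combine a lemma with a part-of-speech tag. The sketch below is a hedged example; the exact key convention (e.g. `house_NOUN`) should be confirmed against the archive's `meta.json`.
```python
# Hedged query sketch; assumes `model` was loaded with the snippet above.
# Lemmatized, POS-tagged NLPL models usually index entries such as "house_NOUN";
# confirm the exact key format against the archive's meta.json.
token = "house_NOUN"
if token in model.key_to_index:
    print(model.most_similar(token, topn=5))  # nearest lemma_POS neighbours
```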
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/29.zip |
Word2vec/nlpl_28 | Word2vec | 2023-07-04T15:02:08Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:Gigaword_5th_Edition",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:10:42Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: Gigaword_5th_Edition
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 209865 corresponding to 4815382730 tokens from the dataset `Gigaword_5th_Edition`.
The model is trained with the following properties: lemmatization and POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_28", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/28.zip |
Word2vec/nlpl_26 | Word2vec | 2023-07-04T15:01:41Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:Gigaword_5th_Edition",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:10:14Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: Gigaword_5th_Edition
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 209512 corresponding to 4815382730 tokens from the dataset `Gigaword_5th_Edition`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_26", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/26.zip |
Word2vec/nlpl_25 | Word2vec | 2023-07-04T15:01:24Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:10:00Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 228671 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: lemmatization and POS tagging, using the Global Vectors algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_25", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/25.zip |
Word2vec/nlpl_23 | Word2vec | 2023-07-04T15:00:56Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:09:31Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 228670 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_23", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/23.zip |
Word2vec/nlpl_22 | Word2vec | 2023-07-04T15:00:44Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:09:13Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 291392 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_22", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/22.zip |
Word2vec/nlpl_20 | Word2vec | 2023-07-04T14:58:47Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:08:35Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 291392 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Global Vectors algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_20", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/20.zip |
Word2vec/nlpl_19 | Word2vec | 2023-07-04T14:58:28Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:08:19Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 260073 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: lemmatization and POS tagging, using the Global Vectors algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_19", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/19.zip |
Word2vec/nlpl_17 | Word2vec | 2023-07-04T14:57:55Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:07:44Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 259882 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_17", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/17.zip |
Word2vec/nlpl_16 | Word2vec | 2023-07-04T14:57:32Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:Gigaword_5th_Edition",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:07:27Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: Gigaword_5th_Edition
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 292967 corresponding to 4815382730 tokens from the dataset `Gigaword_5th_Edition`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_16", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/16.zip |
Word2vec/nlpl_14 | Word2vec | 2023-07-04T14:56:57Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:Gigaword_5th_Edition",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:06:53Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: Gigaword_5th_Edition
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 292967 corresponding to 4815382730 tokens from the dataset `Gigaword_5th_Edition`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Global Vectors algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_14", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/14.zip
|
Word2vec/nlpl_9 | Word2vec | 2023-07-04T14:55:43Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-04T10:05:14Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 273930 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: lemmatization and POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_9", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/9.zip |
mcamara/ppo-PyramidsRND1 | mcamara | 2023-07-04T14:50:48Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-04T14:50:43Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mcamara/ppo-PyramidsRND1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
carbon225/vit-base-patch16-224-hentai | carbon225 | 2023-07-04T14:50:00Z | 225 | 19 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"art",
"anime",
"visual-novel",
"nsfw",
"dataset:carbon225/vndb_img",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-09-30T12:06:40Z | ---
license: cc0-1.0
widget:
- src: >-
https://huggingface.co/carbon225/vit-base-patch16-224-hentai/resolve/main/samples/1.jpeg
- src: >-
https://huggingface.co/carbon225/vit-base-patch16-224-hentai/resolve/main/samples/2.jpeg
datasets:
- carbon225/vndb_img
tags:
- art
- anime
- visual-novel
- nsfw
---
# ViT for NSFW classification
## Model info
This is Google's [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
finetuned for flagging images according to [vndb.org](https://vndb.org/d19) with 3 classes:
- safe
- suggestive
- explicit
## Training data
The model was trained on the vndb.org [database dump](https://vndb.org/d14)
using full size screenshots (`sf` in the database dump).
The dataset can be loaded from [carbon225/vndb_img](https://huggingface.co/datasets/carbon225/vndb_img).
## Intended use
The model can be used for flagging anime-style images for sexual content.
It can also be finetuned on other tasks related to anime images. |
rafaelelter/Taxi-v3 | rafaelelter | 2023-07-04T14:38:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T14:38:49Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.64
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="rafaelelter/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
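The `load_from_hub` helper above is not part of a published package; in the Deep RL course notebooks it is typically defined along the lines of the sketch below (the exact helper used for this upload may differ):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table dictionary from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```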
|
jiwoochris/ko_law_alpaca-12.8b | jiwoochris | 2023-07-04T14:31:25Z | 3 | 2 | peft | [
"peft",
"region:us"
] | null | 2023-07-04T12:40:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
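For reference, these settings map onto a `transformers` `BitsAndBytesConfig` roughly as in the sketch below (an illustration, not the exact training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained(...)`.
```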
### Framework versions
- PEFT 0.4.0.dev0
|
osunlp/BioVocabBERT | osunlp | 2023-07-04T14:26:56Z | 117 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:2306.17649",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-05T17:57:26Z | This biomedical language model uses a specialized biomedical tokenizer which is more closely aligned with human-morphological judgements than previous biomedical tokenizers such as PubMedBERT.
Details about our tokenizer design, pre-training procedure and downstream results can be found in our [BioNLP @ ACL 2023 paper](http://arxiv.org/pdf/2306.17649.pdf)
---
license: apache-2.0
---
|
Apoorvakoira/wizabc | Apoorvakoira | 2023-07-04T14:23:44Z | 8 | 1 | transformers | [
"transformers",
"gpt_bigcode",
"text-generation",
"arxiv:2306.08568",
"license:bigcode-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-04T13:45:23Z | ---
license: bigcode-openrail-m
---
This repository contains the full weights of WizardCoder.
**Repository**: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
**Twitter**: https://twitter.com/WizardLM_AI/status/1669109414559911937
**Paper**: [WizardCoder: Empowering Code Large Language Models with Evol-Instruct](https://arxiv.org/abs/2306.08568)
# WizardCoder: Empowering Code Large Language Models with Evol-Instruct
To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks. This involves tailoring the prompt to the domain of code-related instructions. Subsequently, we fine-tune the Code LLM, StarCoder, utilizing the newly created instruction-following training set.
## News
- 🔥 Our **WizardCoder-15B-v1.0** model achieves the **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
- 🔥 We released **WizardCoder-15B-v1.0** trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), and [Paper]().
- 📣 Please refer to our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM . We will use them to announce any new releases as soon as they are available.
## Comparing WizardCoder with the Closed-Source Models.
🔥 The following figure shows that our **WizardCoder attains the third position in this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>
❗**Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem utilizing a **single attempt**, and the resulting pass rate percentage is reported. Our **WizardCoder** generates answers using greedy decoding and tests with the same [code](https://github.com/evalplus/evalplus).**
## Comparing WizardCoder with the Open-Source Models.
The following table clearly demonstrates that our **WizardCoder** exhibits a substantial performance advantage over all the open-source models. ❗**If you are confused with the different scores of our model (57.3 and 59.8), please check the Notes.**
| Model | HumanEval Pass@1 | MBPP Pass@1 |
|------------------|------------------|-------------|
| CodeGen-16B-Multi| 18.3 |20.9 |
| CodeGeeX | 22.9 |24.4 |
| LLaMA-33B | 21.7 |30.2 |
| LLaMA-65B | 23.7 |37.7 |
| PaLM-540B | 26.2 |36.8 |
| PaLM-Coder-540B | 36.0 |47.0 |
| PaLM 2-S | 37.6 |50.0 |
| CodeGen-16B-Mono | 29.3 |35.3 |
| Code-Cushman-001 | 33.5 |45.9 |
| StarCoder-15B | 33.6 |43.6* |
| InstructCodeT5+ | 35.0 |-- |
| WizardLM-30B 1.0| 37.8 |-- |
| WizardCoder-15B 1.0 | **57.3** |**51.8** |
❗**Note: The reproduced result of StarCoder on MBPP.**
❗**Note: The above table conducts a comprehensive comparison of our **WizardCoder** with other models on the HumanEval and MBPP benchmarks. We adhere to the approach outlined in previous studies by generating **20 samples** for each problem to estimate the pass@1 score and evaluate with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (maybe these are the early version GPT4&3.5).**
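For context, pass@1 from multiple samples is normally computed with the unbiased estimator introduced in the HumanEval paper. The sketch below shows that standard formula (it is not the authors' exact evaluation script):
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples of which c passed the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 20 samples for one problem, 9 of which passed: estimated pass@1
print(pass_at_k(n=20, c=9, k=1))
```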
## Call for Feedback
We welcome everyone to use your professional and difficult instructions to evaluate WizardCoder, and show us examples of poor performance and your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are focusing on improving Evol-Instruct now and hope to address existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work with you together to improve it.
## Contents
1. [Online Demo](#online-demo)
2. [Fine-tuning](#fine-tuning)
3. [Inference](#inference)
4. [Evaluation](#evaluation)
5. [Citation](#citation)
6. [Disclaimer](#disclaimer)
## Online Demo
We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** code-related problems as possible that you encounter in your work and life. We will continue to evolve our models with your feedback.
## Fine-tuning
We fine-tune WizardCoder using the modified code `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X).
We fine-tune StarCoder-15B with the following hyperparameters:
| Hyperparameter | StarCoder-15B |
|----------------|---------------|
| Batch size | 512 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 2048 |
| Warmup step | 30 |
| LR scheduler | cosine |
To reproduce our fine-tuning of WizardCoder, please follow these steps:
1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: `deepspeed==0.9.2` and `transformers==4.29.2`)
2. Replace the `train.py` with the `train_wizardcoder.py` in our repo (`src/train_wizardcoder.py`)
3. Login Huggingface:
```bash
huggingface-cli login
```
4. Execute the following training command:
```bash
deepspeed train_wizardcoder.py \
--model_name_or_path "bigcode/starcoder" \
--data_path "/your/path/to/code_instruction_data.json" \
--output_dir "/your/path/to/ckpt" \
--num_train_epochs 3 \
--model_max_length 2048 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--warmup_steps 30 \
--logging_steps 2 \
--lr_scheduler_type "cosine" \
--report_to "tensorboard" \
--gradient_checkpointing True \
--deepspeed configs/deepspeed_config.json \
--fp16 True
```
## Inference
We provide the decoding script for WizardCoder, which reads an input file, generates a response for each sample, and finally consolidates them into an output file.
You can specify `base_model`, `input_data_path` and `output_data_path` in `src\inference_wizardcoder.py` to set the decoding model, path of input file and path of output file.
```bash
pip install jsonlines
```
The decoding command is:
```
python src\inference_wizardcoder.py \
--base_model "/your/path/to/ckpt" \
--input_data_path "/your/path/to/input/data.jsonl" \
--output_data_path "/your/path/to/output/result.jsonl"
```
The format of `data.jsonl` should be:
```
{"idx": 11, "Instruction": "Write a Python code to count 1 to 10."}
{"idx": 12, "Instruction": "Write a Jave code to sum 1 to 10."}
```
The prompt for our WizardCoder in `src\inference_wizardcoder.py` is:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
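As an illustration, the template can be filled in Python roughly as follows (the exact whitespace in the real script may differ):
```python
# Illustrative only: builds the prompt string in the style used by the inference script.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

prompt = PROMPT_TEMPLATE.format(instruction="Write a Python code to count 1 to 10.")
print(prompt)
```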
## Evaluation
We provide the evaluation script on HumanEval for WizardCoder.
1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment.
2. Run the following script to generate the answer.
```bash
model="/path/to/your/model"
temp=0.2
max_len=2048
pred_num=200
num_seqs_per_iter=2
output_path=preds/T${temp}_N${pred_num}
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
start_index=$((i * 21))
end_index=$(((i + 1) * 21))
gpu=$((i))
echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
((index++))
(
CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
--start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
--num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
) &
if (($index % $gpu_num == 0)); then wait; fi
done
```
3. Run the post processing code `src/process_humaneval.py` to collect the code completions from all answer files.
```bash
output_path=preds/T${temp}_N${pred_num}
echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
evaluate_functional_correctness ${output_path}.jsonl
```
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
year={2023},
}
```
## Disclaimer
The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
|
vivekraina/falcon-7b-8bit | vivekraina | 2023-07-04T14:16:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2023-07-04T11:58:12Z |
# 🚀 Falcon-7B 8-bit Model
This repository hosts an 8-bit version of the Falcon-7B model, converted from the original model (https://huggingface.co/tiiuae/falcon-7b).
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
## Usage
You can use this model directly with a pipeline for tasks such as text generation and instruction following:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "vivekraina/falcon-7b-8bit"
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
trust_remote_code=True
)
sequences = pipe(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
``` |
nferroukhi/peft-ufalcon-7B | nferroukhi | 2023-07-04T13:53:18Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-04T13:52:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
maxkskhor/Taxi-v3 | maxkskhor | 2023-07-04T13:48:19Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T13:48:18Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="maxkskhor/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Collab-uniba/github-issues-preprocessed-mpnet-st-e10 | Collab-uniba | 2023-07-04T13:28:35Z | 5 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-04T13:22:12Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GitHub Issues Preprocessed MPNet Sentence Transformer (10 Epochs)
This is a [sentence-transformers](https://www.SBERT.net) model specialized for GitHub issue data.
## Dataset
For training, we used the [NLBSE22 dataset](https://nlbse2022.github.io/tools/), after removing issues with an empty body and duplicate issues.
The similarity between issue titles and bodies was used to train the sentence embedding model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Collab-uniba/github-issues-preprocessed-mpnet-st-e10')
embeddings = model.encode(sentences)
print(embeddings)
```
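Since the model was trained on title–body similarity, a natural use is scoring how well an issue body matches a title (the strings below are purely illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Collab-uniba/github-issues-preprocessed-mpnet-st-e10')
title_emb = model.encode("Crash when opening the settings dialog", convert_to_tensor=True)
body_emb = model.encode("Steps to reproduce: open the settings dialog; the app crashes immediately.", convert_to_tensor=True)
print(util.cos_sim(title_emb, body_emb))
```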
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Collab-uniba/github-issues-preprocessed-mpnet-st-e10')
model = AutoModel.from_pretrained('Collab-uniba/github-issues-preprocessed-mpnet-st-e10')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 43709 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 43709,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jimregan/psst-partial-timit | jimregan | 2023-07-04T13:14:23Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:jimregan/psst",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-06T08:30:28Z | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
datasets:
- jimregan/psst
- timit_asr
---
This repository contains a number of experiments for the [PSST Challenge](https://psst.study/).
As the test set is unavailable, all numbers are based on the validation set.
The models in the tables below were finetuned from [Wav2vec 2.0 Base, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec).
Our overall best performing model (**FER:** 9\.2%, **PER:** 21\.0%) was based on [Wav2vec 2.0 Large, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec) (git tag: `larger-rir`), with the TIMIT subset augmented with Room Impulse Response, following the experiments below, which were carried out on the base model.
## Augmented TIMIT subset
Using a subset of TIMIT that could map easily to the phoneset used by the PSST Challenge data (a list of IDs is in the repository), we experimented with augmenting the data to better match the PSST data.
The best results were obtained using Room Impulse Response (tag: `rir`)
| **Augmentation** | **FER** | **PER** | **Git tag** |
| :----------------------------------------------- | :-------- | :--------- | :---------------------------------- |
| unaugmented | 10\.2% | 22\.5% | huggingface-unaugmented |
| Gaussian noise | 10\.0% | 22\.1% | gaussian |
| Pitchshift | 9\.6% | 22\.9% | pitchshift |
| RIR | **9\.6%** | **21\.8%** | rir |
| Time stretch | 10\.1% | 22\.8% | timestretch |
| Gaussian noise + RIR | 10\.0% | 23\.4% | gaussian-rir |
| Pitchshift + Gaussian noise | 9\.9% | 22\.9% | pitchshift-gaussian |
| Pitchshift + RIR | 9\.9% | 22\.8% | pitchshift-rir |
| Time stretch + Gaussian noise                     | 10\.2%    | 22\.8%     | timestretch-gaussian                |
| Time stretch + Pitchshift | 9\.8% | 22\.0% | timestretch-pitchshift |
| Time stretch + RIR | 9\.7% | 22\.2% | timestretch-rir |
| Pitchshift + Gaussian noise + RIR | 10\.1% | 23\.5% | pitchshift-gaussian-rir |
| Time stretch + Gaussian noise + RIR | 9\.7% | 22\.3% | timestretch-gaussian-rir |
| Time stretch + Pitchshift + Gaussian noise | 10\.2% | 22\.9% | timestretch-pitchshift-gaussian |
| Time stretch + Pitchshift + RIR | 10\.2% | 22\.5% | timestretch-pitchshift-rir |
| Time stretch + Pitchshift + Gaussian noise + RIR | 10\.9% | 24\.1% | timestretch-pitchshift-gaussian-rir |
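The card does not say which library was used for augmentation; one way to apply comparable transforms (Gaussian noise, pitch shift, time stretch, room impulse response) is with `audiomentations`. The sketch below is an illustration under that assumption, with made-up parameter values and paths:
```python
import numpy as np
from audiomentations import Compose, AddGaussianNoise, PitchShift, TimeStretch, ApplyImpulseResponse

augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    PitchShift(min_semitones=-2, max_semitones=2, p=0.5),
    TimeStretch(min_rate=0.9, max_rate=1.1, p=0.5),
    ApplyImpulseResponse(ir_path="rir_wavs/", p=0.5),  # folder of room impulse response recordings
])

samples = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
augmented = augment(samples=samples, sample_rate=16000)
```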
## LM experiments
We experimented with a number of language model configurations, combining the data from the PSST challenge, the subset of TIMIT we used, and CMUdict.
We tried combining CMUdict data in a number of ways: unmodified, with a silence token added at the start of the pronunciation, at the end, and at both the start and the end.
The best result was from a 5-gram model, with silences added at the end of the CMUdict data (git tag: `lm-nosil-cmudict-sile.5`).
Evaluation was performed using scripts provided by the PSST Challenge's organisers, so there are no scripts in place to automatically use the LM with the transformers library.
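The card does not state which toolkit built the n-gram models; with KenLM (a common choice for wav2vec 2.0 decoding), a 5-gram model over a phone-sequence corpus could be built roughly as follows (illustrative file names):
```bash
# corpus.txt: one phone sequence per line (PSST + TIMIT transcripts + CMUdict pronunciations)
lmplz -o 5 --discount_fallback < corpus.txt > lm_5gram.arpa
build_binary lm_5gram.arpa lm_5gram.bin
```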
| | **n-gram** | **FER** | **PER** | **Tag** |
| :----------------------------- | :--------- | :--------- | :--------- | :--------- |
| Baseline + TIMIT | --- | **10\.2%** | 22\.5% | huggingface-unaugmented |
| All silences | 4 | 10\.5% | 23\.0% | lm-allsil.4 |
| | 5 | 10\.5% | 22\.6% | lm-allsil.5 |
| | 6 | 10\.3% | 22\.3% | lm-allsil.6 |
| No silences | 4 | 10\.3% | 22\.6% | lm-nosil.4 |
| | 5 | **10\.2%** | 22\.2% | lm-nosil.5 |
| | 6 | **10\.2%** | 22\.4% | lm-nosil.6 |
| PSST and TIMIT without silence | | | | |
| Unmodified CMUdict | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-nosil.4 |
| | 5 | 10\.2% | 22\.2% | lm-nosil-cmudict-nosil.5 |
| | 6 | **10\.2%** | 22\.4% | lm-nosil-cmudict-nosil.6 |
| CMUdict-end | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-sile.4 |
| | 5 | **10\.2%** | **22\.1%** | lm-nosil-cmudict-sile.5 |
| | 6 | **10\.2%** | 22\.3% | lm-nosil-cmudict-sile.6 |
| CMUdict-start | 4 | 10\.4% | 22\.6% | lm-nosil-cmudict-sils.4 |
| | 5 | 10\.3% | 22\.4% | lm-nosil-cmudict-sils.5 |
| | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-sils.6 |
| CMUdict-both | 4 | 10\.4% | 22\.7% | lm-nosil-cmudict-silb.4 |
| | 5 | 10\.4% | 22\.3% | lm-nosil-cmudict-silb.5 |
| | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-silb.6 |
| Unmodified PSST and TIMIT | | | | |
| Unmodified CMUdict | 4 | 10\.3% | 22\.8% | lm-orig-cmudict-nosil.4 |
| | 5 | 10\.3% | 22\.4% | lm-orig-cmudict-nosil.5 |
| | 6 | **10\.2%** | 22\.4% | lm-orig-cmudict-nosil.6 |
| CMUdict-end | 4 | 10\.3% | 22\.7% | lm-orig-cmudict-sile.4 |
| | 5 | **10\.2%** | 22\.2% | lm-orig-cmudict-sile.5 |
| | 6 | **10\.2%** | 22\.3% | lm-orig-cmudict-sile.6 |
| CMUdict-start | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-sils.4 |
| | 5 | 10\.4% | 22\.5% | lm-orig-cmudict-sils.5 |
| | 6 | 10\.3% | 22\.4% | lm-orig-cmudict-sils.6 |
| CMUdict-both | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-silb.4 |
| | 5 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.5 |
| | 6 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.6 |
|
vivekraina/falcon-7b-Instruct-8bit | vivekraina | 2023-07-04T13:10:57Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2023-07-04T12:16:46Z |
# 🚀 Falcon-7B 8-bit Model
This repository hosts an 8-bit version of the Falcon-7B model, converted from the original model (https://huggingface.co/tiiuae/falcon-7b).
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
## Usage
You can use this model directly with a pipeline for tasks such as text generation and instruction following:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "vivekraina/falcon-7b-8bit"
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
trust_remote_code=True
)
sequences = pipe(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
``` |
pratikg123/finetunned_falcon-7b | pratikg123 | 2023-07-04T13:10:35Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-04T12:45:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
tanmayyyj/dqn-SpaceInvadersNoFrameskip-v4 | tanmayyyj | 2023-07-04T13:09:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T13:09:15Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 627.00 +/- 271.64
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tanmayyyj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tanmayyyj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tanmayyyj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
dcarpintero/ppo-SnowballTarget | dcarpintero | 2023-07-04T13:01:15Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-04T13:01:12Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dcarpintero/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Babaili/swin-tiny-patch4-window7-224-finetuned-eurosat | Babaili | 2023-07-04T12:52:09Z | 211 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-27T21:58:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9522222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.9522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2724 | 1.0 | 190 | 0.1357 | 0.9522 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fragdata/ppo-LunarLander-v2 | fragdata | 2023-07-04T12:32:20Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T12:32:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.37 +/- 16.25
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
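A typical loading pattern with `huggingface_sb3` looks like the sketch below. The checkpoint filename is a guess based on the usual `{algo}-{env}.zip` naming convention and should be checked against the repository's files:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename - check the repo's "Files" tab for the actual name.
checkpoint = load_from_hub(repo_id="fragdata/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="rgb_array")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```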
|
d0rj/ruRoberta-distilled | d0rj | 2023-07-04T12:30:41Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"distill",
"embeddings",
"masked-lm",
"tiny",
"sentence-similarity",
"ru",
"dataset:GEM/wiki_lingua",
"dataset:xnli",
"dataset:RussianNLP/wikiomnia",
"dataset:mlsum",
"dataset:IlyaGusev/gazeta",
"doi:10.57967/hf/0856",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-04T10:35:40Z | ---
license: apache-2.0
language:
- ru
tags:
- distill
- fill-mask
- embeddings
- masked-lm
- tiny
- sentence-similarity
datasets:
- GEM/wiki_lingua
- xnli
- RussianNLP/wikiomnia
- mlsum
- IlyaGusev/gazeta
widget:
- text: Москва - <mask> России.
- text: Если б море было пивом, я бы <mask>
- text: Столица России - <mask>.
library_name: transformers
pipeline_tag: fill-mask
---
# ruRoberta-distilled
Model was distilled from [ai-forever/ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large) with ❤️ by me.
## Usage
```python
from transformers import pipeline
pipe = pipeline('feature-extraction', model='d0rj/ruRoberta-distilled')
tokens_embeddings = pipe('Привет, мир!')
```
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('d0rj/ruRoberta-distilled')
model = AutoModel.from_pretrained('d0rj/ruRoberta-distilled')
def embed_bert_cls(text: str) -> torch.Tensor:
    t = tokenizer(text, padding=True, truncation=True, return_tensors='pt').to(model.device)
    with torch.no_grad():
        model_output = model(**t)
    embeddings = model_output.last_hidden_state[:, 0, :]
    embeddings = torch.nn.functional.normalize(embeddings)
    return embeddings[0].cpu()
embedding = embed_bert_cls('Привет, мир!')
```
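Because `embed_bert_cls` returns L2-normalized vectors, the dot product of two embeddings is their cosine similarity. A small usage example of the function defined above (the sentences are illustrative):
```python
a = embed_bert_cls('Москва - столица России.')
b = embed_bert_cls('Столица России - Москва.')
print(float(a @ b))  # cosine similarity, since the embeddings are already normalized
```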
## Logs
The distillation process lasted 120 hours on 4 Nvidia V100 GPUs.
See all logs at [WandB](https://wandb.ai/d0rj/distill-ruroberta/runs/lehtr3bk/workspace).
## Configuration changes
- Activation GELU -> GELUFast
- Attention heads 16 -> 8
- Hidden layers 24 -> 6
- Weights size 1.42 GB -> 464 MB
## Data
Overall: 9.4 GB of raw texts, 5.1 GB of binarized texts.
Only texts in Russian were used for distillation. I do not know how the model behaves in English.
Used data:
- [GEM/wiki_lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [xnli](https://huggingface.co/datasets/xnli)
- [RussianNLP/wikiomnia](https://huggingface.co/datasets/RussianNLP/wikiomnia)
- [mlsum](https://huggingface.co/datasets/mlsum)
- [IlyaGusev/gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta) |
juancopi81/lmd-8bars-2048-epochs10 | juancopi81 | 2023-07-04T12:23:11Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-01T23:26:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: lmd-8bars-2048-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmd-8bars-2048-epochs10
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 4
- seed: 1
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4182 | 0.5 | 4994 | 1.4933 |
| 1.4626 | 1.0 | 9988 | 1.3082 |
| 1.3176 | 1.5 | 14982 | 1.2276 |
| 1.2604 | 2.0 | 19976 | 1.1815 |
| 1.2101 | 2.5 | 24970 | 1.1499 |
| 1.1804 | 3.0 | 29964 | 1.1260 |
| 1.1517 | 3.5 | 34958 | 1.1043 |
| 1.1349 | 4.0 | 39952 | 1.0887 |
| 1.1133 | 4.5 | 44946 | 1.0762 |
| 1.0995 | 5.0 | 49940 | 1.0618 |
| 1.0824 | 5.5 | 54934 | 1.0507 |
| 1.0713 | 6.0 | 59928 | 1.0423 |
| 1.0552 | 6.5 | 64922 | 1.0328 |
| 1.0505 | 7.0 | 69916 | 1.0279 |
| 1.0365 | 7.5 | 74910 | 1.0217 |
| 1.0307 | 8.0 | 79904 | 1.0153 |
| 1.022 | 8.5 | 84898 | 1.0107 |
| 1.0189 | 9.0 | 89892 | 1.0090 |
| 1.0129 | 9.5 | 94886 | 1.0084 |
| 1.0139 | 10.0 | 99880 | 1.0086 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
iammartian0/whisper-base-finetuned-gtzan | iammartian0 | 2023-07-04T12:17:45Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-04T11:46:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5877
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1813 | 1.0 | 113 | 1.1224 | 0.62 |
| 0.6839 | 2.0 | 226 | 0.7112 | 0.78 |
| 0.4336 | 3.0 | 339 | 0.6312 | 0.8 |
| 0.1472 | 4.0 | 452 | 0.5366 | 0.83 |
| 0.1193 | 5.0 | 565 | 0.7973 | 0.8 |
| 0.008 | 6.0 | 678 | 0.5044 | 0.87 |
| 0.1485 | 7.0 | 791 | 0.7054 | 0.86 |
| 0.0155 | 8.0 | 904 | 0.6145 | 0.87 |
| 0.1364 | 9.0 | 1017 | 0.6034 | 0.88 |
| 0.0017 | 10.0 | 1130 | 0.5877 | 0.88 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaycompete143/PPO_Lunar_Lander | ajaycompete143 | 2023-07-04T12:15:49Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T12:15:27Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.54 +/- 61.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NasimB/gpt2-dp-mod-datasets-rarity2 | NasimB | 2023-07-04T12:11:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-04T09:44:27Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-datasets-rarity2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-datasets-rarity2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6964 | 0.28 | 500 | 5.6571 |
| 5.3695 | 0.56 | 1000 | 5.2302 |
| 5.0252 | 0.83 | 1500 | 4.9783 |
| 4.7727 | 1.11 | 2000 | 4.8337 |
| 4.6037 | 1.39 | 2500 | 4.7203 |
| 4.4995 | 1.67 | 3000 | 4.6237 |
| 4.4109 | 1.94 | 3500 | 4.5399 |
| 4.1994 | 2.22 | 4000 | 4.5071 |
| 4.1606 | 2.5 | 4500 | 4.4425 |
| 4.1134 | 2.78 | 5000 | 4.3980 |
| 4.0337 | 3.05 | 5500 | 4.3731 |
| 3.8408 | 3.33 | 6000 | 4.3581 |
| 3.8431 | 3.61 | 6500 | 4.3268 |
| 3.8253 | 3.89 | 7000 | 4.2934 |
| 3.6561 | 4.16 | 7500 | 4.3160 |
| 3.5535 | 4.44 | 8000 | 4.3077 |
| 3.5564 | 4.72 | 8500 | 4.2849 |
| 3.5441 | 5.0 | 9000 | 4.2669 |
| 3.296 | 5.27 | 9500 | 4.3047 |
| 3.2948 | 5.55 | 10000 | 4.2986 |
| 3.2913 | 5.83 | 10500 | 4.2950 |
| 3.2305 | 6.11 | 11000 | 4.3041 |
| 3.1394 | 6.39 | 11500 | 4.3095 |
| 3.1341 | 6.66 | 12000 | 4.3099 |
| 3.1359 | 6.94 | 12500 | 4.3096 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
BadreddineHug/donut-base-ocr3 | BadreddineHug | 2023-07-04T12:09:53Z | 72 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-04T11:22:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-ocr3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-ocr3
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ddoc/adt | ddoc | 2023-07-04T12:02:45Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-04T12:02:27Z | # !After Detailer
!After Detailer is an extension for the stable diffusion webui, similar to Detection Detailer, except it uses ultralytics instead of mmdet.
## Install
(from Mikubill/sd-webui-controlnet)
1. Open "Extensions" tab.
2. Open "Install from URL" tab in the tab.
3. Enter `https://github.com/Bing-su/adetailer.git` to "URL for extension's git repository".
4. Press "Install" button.
5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart".
6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.)
7. Completely restart the A1111 webui, including your terminal. (If you do not know what a "terminal" is, you can reboot your computer instead: turn it off and then on again.)
You can now install it directly from the Extensions tab.

You **DON'T** need to download any model from huggingface.
## Options
| Model, Prompts | | |
| --------------------------------- | ------------------------------------- | ------------------------------------------------- |
| ADetailer model | Determine what to detect. | `None` = disable |
| ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. |
| Detection | | |
| ------------------------------------ | -------------------------------------------------------------------------------------------- | --- |
| Detection model confidence threshold | Only objects with a detection model confidence above this threshold are used for inpainting. | |
| Mask min/max ratio | Only use masks whose area is between those ratios for the area of the entire image. | |
If you want to exclude objects in the background, try setting the min ratio to around `0.01`.
| Mask Preprocessing | | |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| Mask x, y offset | Moves the mask horizontally and vertically by the specified amount. | |
| Mask erosion (-) / dilation (+) | Enlarge or reduce the detected mask. | [opencv example](https://docs.opencv.org/4.7.0/db/df6/tutorial_erosion_dilatation.html) |
| Mask merge mode | `None`: Inpaint each mask<br/>`Merge`: Merge all masks and inpaint<br/>`Merge and Invert`: Merge all masks and Invert, then inpaint | |
Applied in this order: x, y offset → erosion/dilation → merge/invert.
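For illustration only, the sketch below mirrors that order (offset → erosion/dilation → merge) on binary masks with OpenCV and NumPy; it is not the extension's actual code, and the function and parameter names are invented.
```python
import cv2
import numpy as np

def preprocess_masks(masks, x_offset=0, y_offset=0, dilation=4, merge=True):
    """Illustrative sketch of the documented order: offset -> erosion/dilation -> merge."""
    kernel = np.ones((3, 3), np.uint8)
    processed = []
    for mask in masks:  # each mask: uint8 array, 255 = detected region
        # 1) x, y offset: shift the mask inside the image frame
        shifted = np.roll(mask, shift=(y_offset, x_offset), axis=(0, 1))
        # 2) erosion (-) / dilation (+): negative shrinks the mask, positive grows it
        if dilation > 0:
            shifted = cv2.dilate(shifted, kernel, iterations=dilation)
        elif dilation < 0:
            shifted = cv2.erode(shifted, kernel, iterations=-dilation)
        processed.append(shifted)
    if merge:
        # 3) merge: union of all masks, inpainted in a single pass
        return [np.bitwise_or.reduce(processed)]
    return processed  # otherwise each mask is inpainted separately
```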
#### Inpainting

Each option corresponds to an option on the inpaint tab.
## ControlNet Inpainting
You can use the ControlNet extension if you have ControlNet and its models installed.
Supports the `inpaint, scribble, lineart, openpose, tile` ControlNet models. Once you choose a model, the preprocessor is set automatically.
## Model
| Model | Target | mAP 50 | mAP 50-95 |
| --------------------- | --------------------- | ----------------------------- | ----------------------------- |
| face_yolov8n.pt | 2D / realistic face | 0.660 | 0.366 |
| face_yolov8s.pt | 2D / realistic face | 0.713 | 0.404 |
| hand_yolov8n.pt | 2D / realistic hand | 0.767 | 0.505 |
| person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox)<br/>0.761 (mask) | 0.555 (bbox)<br/>0.460 (mask) |
| person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox)<br/>0.809 (mask) | 0.605 (bbox)<br/>0.508 (mask) |
| mediapipe_face_full | realistic face | - | - |
| mediapipe_face_short | realistic face | - | - |
| mediapipe_face_mesh | realistic face | - | - |
The yolo models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer).
### User Model
Put your [ultralytics](https://github.com/ultralytics/ultralytics) model in `webui/models/adetailer`. The model name should end with `.pt` or `.pth`.
It must be a bbox detection or segmentation model, and all of its labels are used.
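As a quick sanity check outside the webui, a custom detector placed in that folder can be run directly with ultralytics; a minimal sketch (file names are placeholders):
```python
from ultralytics import YOLO

# Placeholder paths; any bbox-detection or segmentation checkpoint trained with ultralytics works
model = YOLO("webui/models/adetailer/my_face_detector.pt")
results = model("portrait.png", conf=0.3)  # confidence threshold, like the UI option above

for r in results:
    print(r.boxes.xyxy)            # bbox model: one box per detection
    if r.masks is not None:        # segmentation model: per-instance masks
        print(r.masks.data.shape)
```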
### Dataset
Datasets used for training the yolo models are:
#### Face
- [Anime Face CreateML](https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml)
- [xml2txt](https://universe.roboflow.com/0oooooo0/xml2txt-njqx1)
- [AN](https://universe.roboflow.com/sed-b8vkf/an-lfg5i)
- [wider face](http://shuoyang1213.me/WIDERFACE/index.html)
#### Hand
- [AnHDet](https://universe.roboflow.com/1-yshhi/anhdet)
- [hand-detection-fuao9](https://universe.roboflow.com/catwithawand/hand-detection-fuao9)
#### Person
- [coco2017](https://cocodataset.org/#home) (only person)
- [AniSeg](https://github.com/jerryli27/AniSeg)
- [skytnt/anime-segmentation](https://huggingface.co/datasets/skytnt/anime-segmentation)
## Example


[](https://ko-fi.com/F1F1L7V2N)
|
fatcat22/rl_course_vizdoom_health_gathering_supreme | fatcat22 | 2023-07-04T11:52:55Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T11:52:52Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 7.46 +/- 2.25
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r fatcat22/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
cv43/distilbert-base-uncased-finetuned-squad | cv43 | 2023-07-04T11:51:02Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-03T12:52:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5644
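The card ships without a usage snippet; a minimal extractive-QA sketch with the `transformers` pipeline could look like this (the question and context are made-up examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="cv43/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This checkpoint was fine-tuned on the SQuAD v2 dataset for extractive question answering.",
    handle_impossible_answer=True,  # SQuAD v2 contains unanswerable questions
)
print(result["answer"], result["score"])
```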
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 190 | 2.0763 |
| No log | 2.0 | 380 | 1.6763 |
| 2.3144 | 3.0 | 570 | 1.5644 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NbAiLab/nb-wav2vec2-kenlm | NbAiLab | 2023-07-04T11:49:43Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
license: apache-2.0
---
## Citation
```bibtex
@inproceedings{de-la-rosa-etal-2023-boosting,
title = "Boosting {N}orwegian Automatic Speech Recognition",
author = "De La Rosa, Javier and
Braaten, Rolv-Arild and
Kummervold, Per and
Wetjen, Freddy",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.55",
pages = "555--564",
abstract = "In this paper, we present several baselines for automatic speech recognition (ASR) models for the two official written languages in Norway: Bokm{\aa}l and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10{\%} to 7.60{\%}, with models achieving 5.81{\%} for Bokm{\aa}l and 11.54{\%} for Nynorsk. We also discuss the challenges and potential solutions for further improving ASR models for Norwegian.",
}
``` |
LarryAIDraw/CHAR-Kord | LarryAIDraw | 2023-07-04T11:47:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-04T11:32:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/100517/kord-or-girls-frontline |
Bilgilice/bilgilice35 | Bilgilice | 2023-07-04T11:46:09Z | 0 | 0 | null | [
"arxiv:1703.10135",
"arxiv:1712.05884",
"arxiv:2005.11129",
"arxiv:2008.03802",
"arxiv:2003.01950",
"arxiv:2006.06873",
"arxiv:1905.09263",
"arxiv:2006.04558",
"arxiv:2104.05557",
"arxiv:1906.03402",
"arxiv:2211.06892",
"arxiv:2108.13320",
"arxiv:2106.06103",
"arxiv:2112.02418",
"arxiv:1710.08969",
"arxiv:1907.09006",
"arxiv:1910.10288",
"arxiv:2108.10447",
"arxiv:1710.10467",
"arxiv:2003.11982",
"arxiv:1910.06711",
"arxiv:2005.05106",
"arxiv:1910.11480",
"arxiv:1909.11646",
"arxiv:2009.00713",
"arxiv:2010.05646",
"arxiv:2106.07889",
"arxiv:2210.15418",
"region:us"
] | null | 2023-07-04T11:44:42Z |
## 🐸Coqui.ai News
- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
- 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html)
- 📣 **Coqui Studio API** has landed on 🐸TTS. - [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md#-python-api)
- 📣 [**Coqui Studio API**](https://docs.coqui.ai/docs) is live.
- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)!! - [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
## <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>
🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed, and quality.
🐸TTS comes with pretrained models and tools for measuring dataset quality, and it is already used in **20+ languages** for products and research projects.
[](https://discord.gg/5eXr5seRrv)
[](https://opensource.org/licenses/MPL-2.0)
[](https://badge.fury.io/py/TTS)
[](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[](https://pepy.tech/project/tts)
[](https://zenodo.org/badge/latestdoi/265612440)











[](https://tts.readthedocs.io/en/latest/)
📰 [**Subscribe to 🐸Coqui.ai Newsletter**](https://coqui.ai/?subscription=true)
📢 [English Voice Samples](https://erogol.github.io/ddc-samples/) and [SoundCloud playlist](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2)
📄 [Text-to-Speech paper collection](https://github.com/erogol/TTS-papers)
<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />
## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.
| Type | Platforms |
| ------------------------------- | --------------------------------------- |
| 🚨 **Bug Reports** | [GitHub Issue Tracker] |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker] |
| 👩💻 **Usage Questions** | [GitHub Discussions] |
| 🗯 **General Discussion** | [GitHub Discussions] or [Discord] |
[github issue tracker]: https://github.com/coqui-ai/tts/issues
[github discussions]: https://github.com/coqui-ai/TTS/discussions
[discord]: https://discord.gg/5eXr5seRrv
[Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials
## 🔗 Links and Resources
| Type | Links |
| ------------------------------- | --------------------------------------- |
| 💼 **Documentation** | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
| 💾 **Installation** | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts)|
| 👩💻 **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)|
| 📌 **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models)|
## 🥇 TTS Performance
<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p>
Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not released open-source. They are here to show the potential.
## Features
- High-performance Deep Learning models for Text2Speech tasks.
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
- Speaker Encoder to compute speaker embeddings efficiently.
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
- Fast and efficient model training.
- Detailed training logs on the terminal and Tensorboard.
- Support for Multi-speaker TTS.
- Efficient, flexible, lightweight but feature complete `Trainer API`.
- Released and ready-to-use models.
- Tools to curate Text2Speech datasets under ```dataset_analysis```.
- Utilities to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.
## Implemented Models
### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558)
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557)
- Capacitron: [paper](https://arxiv.org/abs/1906.03402)
- OverFlow: [paper](https://arxiv.org/abs/2211.06892)
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320)
### End-to-End Models
- VITS: [paper](https://arxiv.org/pdf/2106.06103)
- 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418)
- 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts)
- 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark)
### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)
### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)
### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)
### Voice Conversion
- FreeVC: [paper](https://arxiv.org/abs/2210.15418)
You can also help us implement more models.
## Install TTS
🐸TTS is tested on Ubuntu 18.04 with **python >= 3.7, < 3.11**.
If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.
```bash
pip install TTS
```
If you plan to code or train models, clone 🐸TTS and install it locally.
```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks] # Select the relevant extras
```
If you are on Ubuntu (Debian), you can also run following commands for installation.
```bash
$ make system-deps # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
```
If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).
## Docker Image
You can also try TTS without installing it by using the docker image.
Simply run the following command and you will be able to run TTS without installing it.
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models #To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server
```
You can then enjoy the TTS server [here](http://[::1]:5002/)
More details about the docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html)
## Synthesizing speech by 🐸TTS
### 🐍 Python API
```python
from TTS.api import TTS
# Running a multi-speaker and multi-lingual model
# List available 🐸TTS models and choose the first one
model_name = TTS.list_models()[0]
# Init TTS
tts = TTS(model_name)
# Run TTS
# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech with a numpy output
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
# Running a single speaker model
# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)
# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")
# Example voice conversion converting speaker of the `source_wav` to the speaker of the `target_wav`
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False, gpu=True)
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
# Example voice cloning by a single speaker TTS model combining with the voice conversion model. This way, you can
# clone voices by using any model in 🐸TTS.
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
"Wie sage ich auf Italienisch, dass ich dich liebe?",
speaker_wav="target/speaker.wav",
file_path="output.wav"
)
# Example text to speech using [🐸Coqui Studio](https://coqui.ai) models.
# You can use all of your available speakers in the studio.
# [🐸Coqui Studio](https://coqui.ai) API token is required. You can get it from the [account page](https://coqui.ai/account).
# You should set the `COQUI_STUDIO_TOKEN` environment variable to use the API token.
# If you have a valid API token set you will see the studio speakers as separate models in the list.
# The name format is coqui_studio/en/<studio_speaker_name>/coqui_studio
models = TTS().list_models()
# Init TTS with the target studio speaker
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False, gpu=False)
# Run TTS
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH)
# Run TTS with emotion and speed control
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)
#Example text to speech using **Fairseq models in ~1100 languages** 🤯.
#For these models use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
#You can find the list of language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).
# TTS with on the fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
"Wie sage ich auf Italienisch, dass ich dich liebe?",
speaker_wav="target/speaker.wav",
file_path="output.wav"
)
```
### Command line `tts`
#### Single Speaker Models
- List provided models:
```
$ tts --list_models
```
- Get model info (for both tts_models and vocoder_models):
- Query by type/name:
The model_info_by_name option uses the name as it appears in the output of --list_models.
```
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
For example:
```
$ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
```
```
$ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```
- Query by type/idx:
The model_query_idx uses the corresponding idx from --list_models.
```
$ tts --model_info_by_idx "<model_type>/<model_query_idx>"
```
For example:
```
$ tts --model_info_by_idx tts_models/3
```
- Run TTS with default models:
```
$ tts --text "Text for TTS" --out_path output/path/speech.wav
```
- Run a TTS model with its default vocoder model:
```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```
For example:
```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
```
- Run with specific TTS and vocoder models from the list:
```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```
For example:
```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
```
- Run your own TTS model (Using Griffin-Lim Vocoder):
```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
```
- Run your own TTS and Vocoder models:
```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
--vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
```
#### Multi-speaker Models
- List the available speakers and choose a <speaker_id> among them:
```
$ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```
- Run the multi-speaker TTS model with the target speaker ID:
```
$ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```
- Run your own multi-speaker TTS model:
```
$ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```
## Directory Structure
```
|- notebooks/ (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/ (common utilities.)
|- TTS
|- bin/ (folder for all the executables.)
|- train*.py (train your target model.)
|- ...
|- tts/ (text to speech models)
|- layers/ (model layer definitions)
|- models/ (model definitions)
|- utils/ (model specific utilities.)
|- speaker_encoder/ (Speaker Encoder models.)
|- (same)
|- vocoder/ (Vocoder models.)
|- (same)
```
|
Allenpai/alpacaRec | Allenpai | 2023-07-04T11:43:15Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-04T11:42:16Z |
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0 |
dcarpintero/Reinforce-Pixelcopter-PLE-v1 | dcarpintero | 2023-07-04T11:41:06Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T11:41:02Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 28.70 +/- 22.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zenoda/trocr-captcha-killer | zenoda | 2023-07-04T11:34:58Z | 182 | 4 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"en",
"zh",
"dataset:zenoda/trocr-captcha-killer",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-03T11:54:05Z | ---
datasets:
- zenoda/trocr-captcha-killer
language:
- en
- zh
---
accuracy: 0.937338
```
from transformers import VisionEncoderDecoderModel, TrOCRProcessor
from PIL import Image
import requests

# Load the processor and the fine-tuned model, then move the model to the GPU
processor = TrOCRProcessor.from_pretrained("zenoda/trocr-captcha-killer")
model = VisionEncoderDecoderModel.from_pretrained("zenoda/trocr-captcha-killer")
model.to('cuda')

# Fetch a sample captcha image from the dataset
url = 'https://huggingface.co/datasets/zenoda/trocr-captcha-killer/resolve/main/106-1688354008849.png'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Generate and decode the predicted captcha text
generated_ids = model.generate(processor(image, return_tensors="pt").pixel_values.to('cuda'))
predictText = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(predictText)
``` |
BaoKien/xlnet-base-cased-finetuned-squad-v2 | BaoKien | 2023-07-04T11:33:07Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-04T07:18:15Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: xlnet-base-cased-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-squad-v2
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2719 | 1.0 | 8265 | 0.2361 |
| 0.172 | 2.0 | 16530 | 0.2484 |
| 0.1236 | 3.0 | 24795 | 0.3111 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
iammartian0/whisper-tiny-finetuned-gtzan | iammartian0 | 2023-07-04T11:08:08Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-04T10:40:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-tiny-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4342
- Accuracy: 0.87
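A minimal inference sketch for classifying a music clip with this checkpoint (the file name is a placeholder; the pipeline resamples the audio internally):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="iammartian0/whisper-tiny-finetuned-gtzan")

# GTZAN-style 30-second clip; replace with your own file
for prediction in classifier("some_track.wav", top_k=3):
    print(f'{prediction["label"]}: {prediction["score"]:.3f}')
```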
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7087 | 0.99 | 56 | 1.6682 | 0.53 |
| 1.0139 | 2.0 | 113 | 1.1272 | 0.64 |
| 0.8057 | 2.99 | 169 | 0.7579 | 0.79 |
| 0.393 | 4.0 | 226 | 0.5791 | 0.86 |
| 0.3414 | 4.99 | 282 | 0.5055 | 0.86 |
| 0.1083 | 6.0 | 339 | 0.4109 | 0.9 |
| 0.0783 | 6.99 | 395 | 0.4297 | 0.87 |
| 0.0998 | 8.0 | 452 | 0.4627 | 0.87 |
| 0.0119 | 8.99 | 508 | 0.4410 | 0.87 |
| 0.0095 | 9.91 | 560 | 0.4342 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vivekraina/falcon-7b-4bit | vivekraina | 2023-07-04T10:47:09Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-04T10:46:07Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
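For reference, that configuration corresponds roughly to the following `BitsAndBytesConfig` when reloading a base model and attaching this adapter. This is only a sketch: the base checkpoint name (`tiiuae/falcon-7b`) is an assumption, since the card does not state it.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed base model; load it in 4-bit and attach the PEFT adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "vivekraina/falcon-7b-4bit")
```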
### Framework versions
- PEFT 0.4.0.dev0
|
Falah/Alzheimer_classification_model | Falah | 2023-07-04T10:45:54Z | 214 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-04T09:34:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Alzheimer_classification_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Alzheimer_classification_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4065
- Accuracy: 0.8375
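A minimal inference sketch (the image path is a placeholder; the class labels come from the training image folders):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Falah/Alzheimer_classification_model")

for prediction in classifier("mri_slice.png", top_k=4):
    print(f'{prediction["label"]}: {prediction["score"]:.3f}')
```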
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.692 | 1.0 | 80 | 0.8592 | 0.6258 |
| 0.662 | 2.0 | 160 | 0.7454 | 0.6781 |
| 0.6124 | 3.0 | 240 | 0.6895 | 0.6922 |
| 0.5851 | 4.0 | 320 | 0.6332 | 0.7430 |
| 0.5495 | 5.0 | 400 | 0.5804 | 0.7586 |
| 0.4334 | 6.0 | 480 | 0.6068 | 0.7484 |
| 0.4169 | 7.0 | 560 | 0.5168 | 0.7883 |
| 0.3709 | 8.0 | 640 | 0.4768 | 0.8055 |
| 0.2854 | 9.0 | 720 | 0.4641 | 0.8117 |
| 0.3064 | 10.0 | 800 | 0.4065 | 0.8375 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
Anwaarma/autotrain-enhancedauto-72049138834 | Anwaarma | 2023-07-04T10:45:09Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:Anwaarma/autotrain-data-enhancedauto",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T10:42:10Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- Anwaarma/autotrain-data-enhancedauto
co2_eq_emissions:
emissions: 1.8438978972881972
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 72049138834
- CO2 Emissions (in grams): 1.8439
## Validation Metrics
- Loss: 0.033
- Accuracy: 0.990
- Precision: 0.988
- Recall: 0.944
- AUC: 0.998
- F1: 0.966
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Anwaarma/autotrain-enhancedauto-72049138834
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anwaarma/autotrain-enhancedauto-72049138834", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anwaarma/autotrain-enhancedauto-72049138834", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
revmag/Taxi-v3 | revmag | 2023-07-04T10:43:12Z | 0 | 1 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T10:43:11Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="revmag/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ericNguyen0132/roberta-large-Dep-pretrain | ericNguyen0132 | 2023-07-04T10:33:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T06:57:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-large-Dep-pretrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep-pretrain
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BadreddineHug/donut-base-ocr2 | BadreddineHug | 2023-07-04T10:32:08Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-04T10:18:50Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-ocr2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-ocr2
This model is a fine-tuned version of [BadreddineHug/donut-base-ocr1](https://huggingface.co/BadreddineHug/donut-base-ocr1) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chenxingphh/distilbert-base-uncased-finetuned-imdb | chenxingphh | 2023-07-04T10:28:47Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-04T10:21:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
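A minimal masked-language-modelling sketch (the sentence is a made-up example; `[MASK]` is the DistilBERT mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="chenxingphh/distilbert-base-uncased-finetuned-imdb")

for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(f'{prediction["token_str"]}: {prediction["score"]:.3f}')
```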
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
erkam/sd-clevr-sg2im-objects_cap-e2e | erkam | 2023-07-04T10:26:20Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-08T12:35:18Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - erkam/sd-clevr-sg2im-objects_cap-e2e
These are LoRA adaption weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-full-v4 dataset. You can find some example images in the following.
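A minimal text-to-image sketch for applying these weights is given below; it assumes a `diffusers` version with attention-processor LoRA loading, and the prompt is only an example.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository into the UNet
pipe.unet.load_attn_procs("erkam/sd-clevr-sg2im-objects_cap-e2e")

image = pipe("a scene with three small cubes and a large metal sphere", num_inference_steps=30).images[0]
image.save("clevr_sample.png")
```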
|
msladic/ppo-SnowballTarget | msladic | 2023-07-04T10:18:36Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-04T10:02:46Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: msladic/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/gpt2-cl-concat-log-rarity-9-210k-mod-datasets | NasimB | 2023-07-04T10:10:08Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-04T08:51:19Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cl-concat-log-rarity-9-210k-mod-datasets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cl-concat-log-rarity-9-210k-mod-datasets
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.2877 | 0.07 | 500 | 5.9527 |
| 5.0107 | 0.14 | 1000 | 5.5940 |
| 4.7383 | 0.21 | 1500 | 5.4130 |
| 4.5602 | 0.28 | 2000 | 5.2903 |
| 4.423 | 0.35 | 2500 | 5.2322 |
| 4.3129 | 0.41 | 3000 | 5.1696 |
| 4.2078 | 0.48 | 3500 | 5.1278 |
| 4.1161 | 0.55 | 4000 | 5.1007 |
| 4.023 | 0.62 | 4500 | 5.0613 |
| 3.933 | 0.69 | 5000 | 5.0483 |
| 3.8578 | 0.76 | 5500 | 5.0290 |
| 3.7859 | 0.83 | 6000 | 5.0156 |
| 3.746 | 0.9 | 6500 | 5.0064 |
| 3.7228 | 0.97 | 7000 | 5.0027 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nageen/roberta-finetuned-subjqa-event_model | nageen | 2023-07-04T10:05:57Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-29T22:46:41Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-event_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-event_model
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BOULLOUL/End2EndQGT5 | BOULLOUL | 2023-07-04T10:04:51Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wiselinjayajos/squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-04T09:49:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiselinjayajos/squad_modified_for_t5_qg
widget:
- text: "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad v1.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5789
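Using the prompt format shown in the widget above, a minimal generation sketch (decoding settings are illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BOULLOUL/End2EndQGT5")
model = AutoModelForSeq2SeqLM.from_pretrained("BOULLOUL/End2EndQGT5")

text = "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```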
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5879 | 0.34 | 100 | 1.9133 |
| 1.9688 | 0.68 | 200 | 1.7313 |
| 1.8513 | 1.02 | 300 | 1.6691 |
| 1.7459 | 1.36 | 400 | 1.6413 |
| 1.7206 | 1.69 | 500 | 1.6200 |
| 1.7026 | 2.03 | 600 | 1.6101 |
| 1.6447 | 2.37 | 700 | 1.5983 |
| 1.6402 | 2.71 | 800 | 1.5979 |
| 1.6332 | 3.05 | 900 | 1.5924 |
| 1.5953 | 3.39 | 1000 | 1.5877 |
| 1.5922 | 3.73 | 1100 | 1.5854 |
| 1.5832 | 4.07 | 1200 | 1.5830 |
| 1.5726 | 4.41 | 1300 | 1.5799 |
| 1.5587 | 4.75 | 1400 | 1.5789 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vineetsharma/whisper-tiny-finetuned-minds14-en-v2 | vineetsharma | 2023-07-04T09:58:21Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-04T07:05:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds14-en-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33530106257378983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds14-en-v2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Wer Ortho: 0.3362
- Wer: 0.3353
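A minimal transcription sketch (the audio file name is a placeholder; the pipeline handles resampling):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vineetsharma/whisper-tiny-finetuned-minds14-en-v2")

print(asr("banking_call.wav")["text"])
```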
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0014 | 1.79 | 50 | 0.6437 | 0.3708 | 0.3648 |
| 0.0012 | 3.57 | 100 | 0.6664 | 0.3461 | 0.3353 |
| 0.0113 | 5.36 | 150 | 0.6338 | 0.3374 | 0.3353 |
| 0.0021 | 7.14 | 200 | 0.6466 | 0.3467 | 0.3453 |
| 0.0013 | 8.93 | 250 | 0.6690 | 0.3399 | 0.3383 |
| 0.0006 | 10.71 | 300 | 0.6804 | 0.3362 | 0.3353 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jieshenai/zh_en_translation | jieshenai | 2023-07-04T09:43:08Z | 103 | 3 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"dataset:kde4",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-26T13:28:07Z | ---
datasets:
- kde4
---
Translation from Chinese (zh) to English (en).
example: https://github.com/JieShenAI/torch/blob/main/huggingface/example/translation/%E8%8B%B1%E6%B1%89%E4%BA%92%E8%AF%91.ipynb
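A minimal usage sketch with the `transformers` pipeline (the input sentence is an example):
```python
from transformers import pipeline

translator = pipeline("translation", model="jieshenai/zh_en_translation")

# Input means "The weather is very nice today." (example sentence)
print(translator("今天天气很好。")[0]["translation_text"])
```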
You can post issues at https://github.com/JieShenAI/torch |
Bugsys0302/opchlr | Bugsys0302 | 2023-07-04T09:36:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-04T09:35:05Z | ---
license: creativeml-openrail-m
---
|
ymkgr/shikimiya_mana_from_Re_Stage | ymkgr | 2023-07-04T09:27:21Z | 0 | 1 | null | [
"anime",
"game",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-04T08:29:48Z | ---
license: creativeml-openrail-m
metrics:
- character
tags:
- anime
- game
---
模型类型/Model type: LoRA
---
v2.3版本模型详细信息/v2.3 Version Model Details(I used a translator in English):
- 来自 日本多媒体企划:Re:Stage! - 组合:KiRaRe - 角色名:式宫舞菜。/from Japanese multimedia project: Re:Stage! - Unit: KiRaRe - character name: shikimiya mana.
- LoRA权重/weight:0.6~1。
- 触发词/Trigger Words * 请自行在"("和")"的前面添加\符号,这个页面似乎不能将\符号与其它符号连在一起显示/Please add the \ symbol before "(" and ")" yourself. It seems that the Model card cannot display the \ symbol together with other symbols:
- 角色/character:
shikimiya mana\(re:stage!\), ahoge, short hair, orange hair, blue eyes, clover hairclip\(shikimiya mana\),
示例/Example:
- 舞台服/stage dress:
dress\(smsa\), star hair ornament\(smsa\), hat\(smsa\), one wrist cuffs\(smsa\), one wrist scrunchie\(smsa\), asymmetrical thighhighs\(smsa\), shoes\(smsa\), 
- 校服/school uniform:
sailor collar, blue pleated skirt, bowtie,
---
v2.3版本说明/v2.3 Version description:
- 它在不添加任何发饰类的提示词时,也可能会生成类似发饰的杂物,解决方法/It may generate hair-accessory-like artifacts even when no hair-accessory prompt words are added. Solutions:
· 在 Negative prompt 中添加 hairclip、hair ornament 等发饰类提示词/Add hairclip, hair ornament, and other hair-accessory prompt words to the Negative prompt
· 降低LoRA权重/Reduce LoRA weight
相比v1版本,服饰方面更像。/Compared to the v1 Version, the clothing aspect is more similar.
---
I don't know English and I'm not very good at using the Hugging Face website. I also use a translation for the description
Please comply with regulations. |
ak2704/q-FrozenLake-v1-4x4-noSlippery | ak2704 | 2023-07-04T09:24:35Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-04T09:24:29Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ak2704/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
a2ran/kor_chatGLM | a2ran | 2023-07-04T09:21:16Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-04T09:15:50Z | ---
library_name: peft
---
- **WIP**
Data used : https://raw.githubusercontent.com/Beomi/KoAlpaca/main/alpaca_data.json
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    "output",
    fp16=True,
    gradient_accumulation_steps=1,
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    max_steps=3000,
    logging_steps=100,
    remove_unused_columns=False,
    seed=0,
    data_seed=0,
    group_by_length=False,
)
``` |
Word2vec/nlpl_5 | Word2vec | 2023-07-04T09:20:25Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-01T15:35:34Z | ---
language: eng
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
license: cc-by-4.0
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 302866 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with the following properties: lemmatization and postag with the algorithm Gensim Continuous Skipgram with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_5", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is:
http://vectors.nlpl.eu/repository/20/5.zip |
DEplain/trimmed_longmbart_docs_apa | DEplain | 2023-07-04T09:18:27Z | 85 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"text simplification",
"plain language",
"easy-to-read language",
"document simplification",
"de",
"dataset:DEplain/DEplain-APA-doc",
"arxiv:2305.18939",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-03-02T16:39:31Z | ---
inference: false
license: apache-2.0
language:
- de
datasets:
- DEplain/DEplain-APA-doc
metrics:
- sari
- bleu
- bertscore
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text simplification
- plain language
- easy-to-read language
- document simplification
---
# DEplain German Text Simplification
This model belongs to the experiments done at the work of Stodden, Momen, Kallmeyer (2023). ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939) In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
Detailed documentation can be found on this GitHub repository [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain)
We reused the codes from [https://github.com/a-rios/ats-models](https://github.com/a-rios/ats-models) to do our experiments.
### Model Description
The model is a finetuned checkpoint of the pre-trained LongmBART model based on `mbart-large-cc25`, with the vocabulary trimmed to the 30k most frequent German words.
The model was finetuned towards the task of German text simplification of documents.
The finetuning dataset included manually aligned sentences from the datasets `DEplain-APA-doc` only.
### Model Usage
This model currently can't be used via the Hugging Face inference widget or the `.from_pretrained` method, because it is a finetuning of a custom model (LongMBart) that has not been registered in the Transformers library yet.
You can find this custom model codes at: [https://github.com/a-rios/ats-models](https://github.com/a-rios/ats-models)
To test this model checkpoint, you need to clone the checkpoint repository as follows:
```
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/DEplain/trimmed_longmbart_docs_apa
# To clone without the large files (just their pointers),
# prefix the clone command with the GIT_LFS_SKIP_SMUDGE variable:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/DEplain/trimmed_longmbart_docs_apa
```
Then set up the conda environment via:
```
conda env create -f environment.yaml
```
Then follow the procedure in the notebook `generation.ipynb`. |
Word2vec/nlpl_3 | Word2vec | 2023-07-04T09:08:44Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_February_2017",
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-01T15:13:39Z | ---
language: eng
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_February_2017
license: cc-by-4.0
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 296630 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
The model is trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_3", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is:
http://vectors.nlpl.eu/repository/20/3.zip |
Word2vec/nlpl_4 | Word2vec | 2023-07-04T09:08:15Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:Gigaword_5th_Edition",
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-01T15:14:52Z | ---
language: eng
tags:
- word2vec
datasets: Gigaword_5th_Edition
license: cc-by-4.0
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 314815 corresponding to 4815382730 tokens from the dataset `Gigaword_5th_Edition`.
The model is trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_4", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is:
http://vectors.nlpl.eu/repository/20/4.zip |
Roy029/mt5_empty_desc_25k_msp | Roy029 | 2023-07-04T09:07:41Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-04T08:50:19Z | 下から語彙を2500入れ替えたTokenizerと、mspで学習させたモデル |
Word2vec/nlpl_2 | Word2vec | 2023-07-04T09:06:54Z | 0 | 1 | null | [
"word2vec",
"nor",
"dataset:Norsk_Aviskorpus/NoWaC",
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-01T15:11:33Z | ---
language: nor
tags:
- word2vec
datasets: Norsk_Aviskorpus/NoWaC
license: cc-by-4.0
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 306943 corresponding to 1941761506 tokens from the dataset `Norsk_Aviskorpus/NoWaC`.
The model is trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_2", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is:
http://vectors.nlpl.eu/repository/20/2.zip |
natykov/swin-tiny-patch4-window7-224-finetuned-eurosat | natykov | 2023-07-04T09:01:46Z | 209 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-04T08:52:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5564
- Accuracy: 0.2861
## Model description
More information needed
## Intended uses & limitations
More information needed
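A minimal inference sketch (not part of the auto-generated card), assuming the checkpoint works with the standard `transformers` image-classification pipeline:
```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned checkpoint through the image-classification pipeline.
classifier = pipeline("image-classification", model="natykov/swin-tiny-patch4-window7-224-finetuned-eurosat")

# "example.jpg" is a placeholder path; any RGB image works.
image = Image.open("example.jpg")
print(classifier(image, top_k=3))
```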
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5752 | 0.99 | 115 | 1.5699 | 0.2685 |
| 1.5519 | 2.0 | 231 | 1.5570 | 0.2866 |
| 1.5324 | 2.98 | 345 | 1.5564 | 0.2861 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
trieudemo11/bloomz-1b7_19_brand_w_cate | trieudemo11 | 2023-07-04T08:55:51Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-04T08:55:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KPF/KPF-bert-cls2 | KPF | 2023-07-04T08:53:57Z | 169 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T07:48:09Z | # KPF-BERT-CLS2
- A fine-grained category prediction model used for the local-news feature in the Inside menu of [BIG KINDS Lab](https://lab.bigkinds.or.kr/); it outputs fine-grained category results excluding the regional categories.
- Usage instructions and code are available on the [KPF-bigkinds GitHub](https://github.com/KPF-bigkinds/BIGKINDS-LAB/tree/main/KPF-BERT-CLS).
## Model Overview
### KPF-BERT-CLS
Based on the kpf-BERT model developed by the Korea Press Foundation, the kpf-BERT-cls model was designed and developed to perform the classification (CLS) task.
- The kpf-BERT used in this example is publicly available at [kpfBERT](https://github.com/KPFBERT/kpfbert).
- In this example, the data is trained in three groups: coarse categories, fine-grained categories of the coarse categories excluding region, and regional fine-grained categories.
The training data was built by pairing article text with category labels. The category labels follow a fixed taxonomy, and training was carried out on three datasets: article text + coarse category (excluding region), article text + fine-grained category (excluding region), and article text + regional fine-grained category.

kpf-BERT-cls was developed by adding a classification layer on top of the kpf-BERT model developed by the Korea Press Foundation. Given an article, kpf-BERT-cls tokenizes it with the kpf-BERT tokenizer and predicts which class the article belongs to.
The base model follows the standard BERT architecture and uses the standard BERT tokenizer.


Because of its input-length limit, BERT can only take inputs of up to 512 subwords. Articles such as interviews are usually longer than 512 subwords. To handle this, the project processes document chunks independently using a stride.
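A minimal sketch of this stride-based chunking with a Hugging Face tokenizer follows; loading via the standard Auto classes, the stride value, and the averaging-based aggregation are all assumptions rather than details taken from the card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumes the checkpoint loads with the standard Auto classes.
tokenizer = AutoTokenizer.from_pretrained("KPF/KPF-bert-cls2")
model = AutoModelForSequenceClassification.from_pretrained("KPF/KPF-bert-cls2")

article = "..."  # full article text, possibly much longer than 512 subwords

# Split the article into overlapping 512-subword chunks; stride=128 is an assumed value.
enc = tokenizer(
    article,
    max_length=512,
    truncation=True,
    stride=128,
    return_overflowing_tokens=True,
    padding="max_length",
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]).logits

# Average chunk-level predictions to score the whole document (an assumed aggregation),
# then take the top-3 classes, mirroring the card's top-3 output.
probs = logits.softmax(dim=-1).mean(dim=0)
print(probs.topk(3))
```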

kpf-BERT-cls consists of a coarse-category prediction model, a fine-grained category prediction model, and a regional fine-grained prediction model. The coarse and fine-grained prediction models output top-3 results.

|