modelId
string | author
string | last_modified
timestamp[us, tz=UTC] | downloads
int64 | likes
int64 | library_name
string | tags
sequence | pipeline_tag
string | createdAt
timestamp[us, tz=UTC] | card
string |
---|---|---|---|---|---|---|---|---|---|
camus-ng/dreambooth_lora_cory_v15_ten | camus-ng | 2023-07-08T19:43:42Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T16:25:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of <ntvc> man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/dreambooth_lora_cory_v15_ten
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of <ntvc> man using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
|
Word2vec/wikipedia2vec_enwiki_20180420_nolg_500d | Word2vec | 2023-07-08T19:22:12Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T15:07:46Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_nolg_500d", filename="enwiki_20180420_nolg_500d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
hugfacerhaha/Reinforce-1 | hugfacerhaha | 2023-07-08T19:16:54Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T19:16:46Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 144.70 +/- 6.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jncraton/codet5p-220m-py-ct2-int8 | jncraton | 2023-07-08T19:12:46Z | 669 | 1 | transformers | [
"transformers",
"arxiv:2305.07922",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-06-30T18:48:16Z | ---
license: bsd-3-clause
---
# CodeT5+ 220M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (i.e. InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-220m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by reserving only permissively licensed code ("mit" “apache-2”, “bsd-3-clause”, “bsd-2-clause”, “cc0-1.0”, “unlicense”, “isc”).
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby.`
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 mode
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 12.0% pass@1 on HumanEval in the zero-shot setting, which outperforms much larger LLMs such as Incoder 1.3B’s 8.9%, GPT-Neo 2.7B's 6.4%, and GPT-J 6B's 11.6%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
Word2vec/wikipedia2vec_enwiki_20180420_nolg_300d | Word2vec | 2023-07-08T19:06:31Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T13:48:27Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_nolg_300d", filename="enwiki_20180420_nolg_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
cagarraz/Reinforce-12356 | cagarraz | 2023-07-08T18:52:35Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:48:43Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-12356
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.30 +/- 8.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tyavika/Distil-CNN512LSTM256NoBi | tyavika | 2023-07-08T18:48:27Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-02T11:03:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6009 | 1.0 | 3290 | 1.2927 |
| 1.0288 | 2.0 | 6580 | 1.1467 |
| 0.7497 | 3.0 | 9870 | 1.1902 |
| 0.5288 | 4.0 | 13160 | 1.3388 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NERO500/q-FrozenLake-v1-4x4-noSlippery | NERO500 | 2023-07-08T18:39:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:39:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="NERO500/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Word2vec/wikipedia2vec_jawiki_20180420_300d | Word2vec | 2023-07-08T18:31:54Z | 0 | 1 | null | [
"word2vec",
"ja",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:53:12Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ja
---
## Information
Pretrained Word2vec in Japanese. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_jawiki_20180420_300d", filename="jawiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_arwiki_20180420_100d | Word2vec | 2023-07-08T18:29:53Z | 0 | 0 | null | [
"word2vec",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T16:51:26Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ar
---
## Information
Pretrained Word2vec in Arabic. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_arwiki_20180420_100d", filename="arwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Mozart-coder/BERT_Dec-6_tokenized | Mozart-coder | 2023-07-08T18:20:58Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-10-25T05:03:42Z | ---
tags:
- generated_from_trainer
model-index:
- name: BERT_Dec-6_tokenized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Dec-6_tokenized
This model is a fine-tuned version of [armheb/DNA_bert_6](https://huggingface.co/armheb/DNA_bert_6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0625 | 1.0 | 273 | 0.0376 |
| 0.039 | 2.0 | 546 | 0.0375 |
| 0.0385 | 3.0 | 819 | 0.0358 |
| 0.0375 | 4.0 | 1092 | 0.0380 |
| 0.0374 | 5.0 | 1365 | 0.0387 |
| 0.0358 | 6.0 | 1638 | 0.0378 |
| 0.0363 | 7.0 | 1911 | 0.0381 |
| 0.0373 | 8.0 | 2184 | 0.0377 |
| 0.0362 | 9.0 | 2457 | 0.0373 |
| 0.037 | 10.0 | 2730 | 0.0380 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Word2vec/wikipedia2vec_frwiki_20180420_300d | Word2vec | 2023-07-08T18:13:55Z | 0 | 0 | null | [
"word2vec",
"fr",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:15:49Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- fr
---
## Information
Pretrained Word2vec in French. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_frwiki_20180420_300d", filename="frwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_ruwiki_20180420_300d | Word2vec | 2023-07-08T18:06:08Z | 0 | 0 | null | [
"word2vec",
"ru",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:51:48Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ru
---
## Information
Pretrained Word2vec in Russian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_ruwiki_20180420_300d", filename="ruwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
TomyAI/Slider | TomyAI | 2023-07-08T18:05:08Z | 0 | 9 | null | [
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T17:49:31Z | ---
language:
- ja
thumbnail: NamedDiapers_1.png
license: creativeml-openrail-m
---
乳首の色、サイズ、おっぱいの寄り具合、高さを調整するスライダーLoRAです。
個別にダウンロードするかOppaiSliderPack.zipをダウンロードして解凍してください。
 |
skrl/IsaacGymEnvs-Ant-PPO | skrl | 2023-07-08T18:04:35Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T21:15:29Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 5094.76 +/- 310.06
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Ant
type: IsaacGymEnvs-Ant
---
<!-- ---
torch: 4996.72 +/- 273.16
jax: 5094.76 +/- 310.06
numpy: 4542.73 +/- 467.69
--- -->
# IsaacGymEnvs-Ant-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Ant
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ant-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Ant-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 4
cfg["mini_batches"] = 2 # 16 * 4096 / 32768
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
TomyAI/NamedDiapers | TomyAI | 2023-07-08T18:00:45Z | 0 | 4 | null | [
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T08:11:24Z | ---
language:
- ja
thumbnail: NamedDiapers_1.png
license: creativeml-openrail-m
---
大人用(重要)おむつのLoraです。
ピーキーな出来になってしまったので作り直している最中ですが一旦公開します。
タグ:
- diaper
- babycuties
- bunnyhopps
- littlekings
- princesspink
- oyasumiman
 |
Word2vec/wikipedia2vec_ruwiki_20180420_100d | Word2vec | 2023-07-08T17:51:41Z | 0 | 0 | null | [
"word2vec",
"ru",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:00:45Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ru
---
## Information
Pretrained Word2vec in Russian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_ruwiki_20180420_100d", filename="ruwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_eswiki_20180420_100d | Word2vec | 2023-07-08T17:48:37Z | 0 | 1 | null | [
"word2vec",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:02:04Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- es
---
## Information
Pretrained Word2vec in Spanish. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_eswiki_20180420_100d", filename="eswiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese | IDEA-CCNL | 2023-07-08T17:48:08Z | 58 | 0 | transformers | [
"transformers",
"pytorch",
"ZEN",
"chinese",
"zh",
"arxiv:2105.01279",
"arxiv:2209.02970",
"license:apache-2.0",
"region:us"
] | null | 2022-07-27T09:28:55Z | ---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN2-668M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
善于处理NLU任务,使用了N-gram编码增强文本语义,6.68亿参数量的ZEN2
ZEN2 model, which uses N-gram to enhance text semantic and has 668M parameters, is adept at NLU tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN2 | 668M | 中文-Chinese |
## 模型信息 Model Information
我们与[ZEN团队](https://github.com/sinovation/ZEN2)合作,使用我们的封神框架,开源发布了ZEN2模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN2使用大规模数据集和特殊的预训练策略对N-gram增强编码器进行预训练。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能。
We open source and publicly release ZEN2 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN2). More precisely, by bringing together knowledge extracted by unsupervised learning, ZEN learns different textual granularity information through N-gram methods. ZEN2 pre-trains the N-gram-enhanced encoders with large-scale datasets and special pre-training strategies. In the next step, we continue with the ZEN team to explore the optimization of PLM and improve the performance on downstream tasks.
### 下游效果 Performance
**分类任务 Classification**
| Model(Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
**抽取任务 Extraction**
| Model(F1) | WEIBO(test) | Resume(test) | MSRA(test) | OntoNote4.0(test) | CMeEE(dev) | CLUENER(dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN2相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure of ZEN2 in [transformers library](https://github.com/huggingface/transformers), you can find the structure of ZEN2 and run the codes in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-668M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
你可以从下方的链接获得我们做分类和抽取的详细示例。
You can get classification and extraction examples below.
[分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/fs_zen2_large_tnews.sh)
[抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/ner_zen2_large_ontonotes4.sh)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using the resource for your work, please cite the our paper for this model:
```text
@article{Sinovation2021ZEN2,
title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
journal={arXiv preprint arXiv:2105.01279},
year={2021},
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese | IDEA-CCNL | 2023-07-08T17:47:20Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"ZEN",
"chinese",
"zh",
"arxiv:2105.01279",
"arxiv:2209.02970",
"license:apache-2.0",
"region:us"
] | null | 2022-07-27T06:13:11Z | ---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN2-345M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
善于处理NLU任务,使用了N-gram编码增强文本语义,3.45亿参数量的ZEN2
ZEN2 model, which uses N-gram to enhance text semantic and has 345M parameters, is adept at NLU tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN2 | 345M | 中文-Chinese |
## 模型信息 Model Information
我们与[ZEN团队](https://github.com/sinovation/ZEN2)合作,使用我们的封神框架,开源发布了ZEN2模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN2使用大规模数据集和特殊的预训练策略对N-gram增强编码器进行预训练。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能。
We open source and publicly release ZEN2 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN2). More precisely, by bringing together knowledge extracted by unsupervised learning, ZEN learns different textual granularity information through N-gram methods. ZEN2 pre-trains the N-gram-enhanced encoders with large-scale datasets and special pre-training strategies. In the next step, we continue with the ZEN team to explore the optimization of PLM and improve the performance on downstream tasks.
### 下游效果 Performance
**分类任务 Classification**
| Model(Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
**抽取任务 Extraction**
| Model(F1) | WEIBO(test) | Resume(test) | MSRA(test) | OntoNote4.0(test) | CMeEE(dev) | CLUENER(dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN2相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure of ZEN2 in [transformers library](https://github.com/huggingface/transformers), you can find the structure of ZEN2 and run the codes in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
你可以从下方的链接获得我们做分类和抽取的详细示例。
You can get classification and extraction examples below.
[分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/fs_zen2_base_tnews.sh)
[抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using the resource for your work, please cite the our paper for this model:
```text
@article{Sinovation2021ZEN2,
title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
journal={arXiv preprint arXiv:2105.01279},
year={2021},
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Teunis89/q-FrozenLake-v1-4x4-noSlippery | Teunis89 | 2023-07-08T17:45:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T17:45:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Teunis89/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
spitfire4794/dialogpt-small-rick | spitfire4794 | 2023-07-08T17:42:12Z | 139 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T11:42:03Z | ---
language:
- en
library_name: transformers
pipeline_tag: conversational
tags:
- gpt2
- pytorch
--- |
mark-oppenheim/rl-course-unit1-lunar-lander | mark-oppenheim | 2023-07-08T17:09:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T17:08:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.14 +/- 23.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
skrl/IsaacGymEnvs-Cartpole-PPO | skrl | 2023-07-08T17:03:59Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:42:39Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 493.73 +/- 0.58
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Cartpole
type: IsaacGymEnvs-Cartpole
---
<!-- ---
torch: 493.73 +/- 0.58
jax: 492.06 +/- 3.58
numpy: 491.92 +/- 0.57
--- -->
# IsaacGymEnvs-Cartpole-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Cartpole
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Cartpole-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Cartpole-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 1 # 16 * 512 / 8192
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.1
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
OumaElha/Speech12 | OumaElha | 2023-07-08T16:25:52Z | 81 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-03T23:53:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Speech12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speech12
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0053
- Wer: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.8827 | 3.96 | 1000 | 2.8766 | 1 |
| 2.8369 | 7.92 | 2000 | 2.8362 | 1 |
| 1.6725 | 11.88 | 3000 | 1.4849 | 1 |
| 1.2083 | 15.84 | 4000 | 1.0574 | 1 |
| 1.1507 | 19.8 | 5000 | 1.0053 | 1 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bpw1621/Taxi-v3 | bpw1621 | 2023-07-08T16:10:00Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T16:09:58Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="bpw1621/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kingfisher/distilhubert-finetuned-gtzan | kingfisher | 2023-07-08T16:09:47Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-08T14:53:24Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5682
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9901 | 1.0 | 113 | 1.8557 | 0.38 |
| 1.3154 | 2.0 | 226 | 1.2377 | 0.64 |
| 1.0642 | 3.0 | 339 | 0.9214 | 0.75 |
| 0.8612 | 4.0 | 452 | 0.8952 | 0.7 |
| 0.5882 | 5.0 | 565 | 0.6712 | 0.79 |
| 0.3713 | 6.0 | 678 | 0.5890 | 0.81 |
| 0.3766 | 7.0 | 791 | 0.5723 | 0.82 |
| 0.1535 | 8.0 | 904 | 0.5387 | 0.84 |
| 0.1171 | 9.0 | 1017 | 0.5186 | 0.86 |
| 0.1696 | 10.0 | 1130 | 0.5682 | 0.83 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HeinrichWirth/ppo-LunarLander-v2 | HeinrichWirth | 2023-07-08T16:09:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T16:08:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.99 +/- 18.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
agercas/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | agercas | 2023-07-08T15:42:55Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-08T14:53:43Z | ---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3847
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1646 | 1.0 | 225 | 0.7426 | 0.74 |
| 0.6839 | 2.0 | 450 | 0.7655 | 0.78 |
| 0.4179 | 3.0 | 675 | 0.8210 | 0.81 |
| 0.1836 | 4.0 | 900 | 0.3845 | 0.86 |
| 0.0018 | 5.0 | 1125 | 0.4368 | 0.87 |
| 0.0032 | 6.0 | 1350 | 0.4066 | 0.9 |
| 0.0001 | 7.0 | 1575 | 0.4524 | 0.89 |
| 0.0 | 8.0 | 1800 | 0.3708 | 0.9 |
| 0.0 | 9.0 | 2025 | 0.3975 | 0.9 |
| 0.0001 | 10.0 | 2250 | 0.3847 | 0.91 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RobertoFont/falcon-7b-chat-oasst1 | RobertoFont | 2023-07-08T15:40:27Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T15:40:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
choozmo/choozmomic | choozmo | 2023-07-08T15:38:56Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T15:38:52Z | ---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: choozmomic
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - choozmomic
These are LoRA adaption weights for [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). The weights were trained on the instance prompt "choozmomic" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
|
ayush-vatsal/caption_qlora_finetune | ayush-vatsal | 2023-07-08T15:31:53Z | 4 | 0 | peft | [
"peft",
"dataset:ayush-vatsal/description_to_caption",
"region:us"
] | null | 2023-07-07T17:56:44Z | ---
library_name: peft
datasets:
- ayush-vatsal/description_to_caption
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0 |
CeroShrijver/chinese-lert-large-ling-cls | CeroShrijver | 2023-07-08T15:30:51Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-08T14:20:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chinese-lert-large-ling-cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-lert-large-ling-cls
This model is a fine-tuned version of [hfl/chinese-lert-large](https://huggingface.co/hfl/chinese-lert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4531
- Accuracy: 0.7822
- Test Accuracy: 0.8102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5082 | 1.0 | 1008 | 0.5476 | 0.7601 |
| 0.3669 | 2.0 | 2017 | 0.5202 | 0.7978 |
| 0.2006 | 3.0 | 3025 | 0.8294 | 0.7748 |
| 0.0954 | 4.0 | 4034 | 1.2630 | 0.7931 |
| 0.0447 | 5.0 | 5040 | 1.4531 | 0.7822 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.6
|
peterwilli/audio-maister | peterwilli | 2023-07-08T15:11:48Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-12T20:20:50Z | ---
license: openrail
---
# Audiomaister weights
See main repo: https://github.com/peterwilli/audio-maister |
philipmertz/alpaca-lora-pokemon | philipmertz | 2023-07-08T15:11:28Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-07T19:26:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
AACEE/textual_inversion_airship | AACEE | 2023-07-08T15:03:05Z | 42 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-depth",
"base_model:adapter:stabilityai/stable-diffusion-2-depth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-08T11:45:59Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-depth
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - AACEE/textual_inversion_airship
These are textual inversion adaption weights for stabilityai/stable-diffusion-2-depth. You can find some example images in the following.
|
manueltonneau/clinicalcovid-bert-base-cased | manueltonneau | 2023-07-08T15:02:24Z | 161 | 1 | transformers | [
"transformers",
"pytorch",
"en",
"doi:10.57967/hf/0867",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
---
All information can be found here: https://github.com/manueltonneau/covid-berts
If you find this model useful, please cite:
```
@misc {manuel_tonneau_2023,
author = { {Manuel Tonneau} },
title = { clinicalcovid-bert-base-cased (Revision feaed90) },
year = 2023,
url = { https://huggingface.co/manueltonneau/clinicalcovid-bert-base-cased },
doi = { 10.57967/hf/0867 },
publisher = { Hugging Face }
}
``` |
FelixChao/medical_faq_falcon7b_gpt | FelixChao | 2023-07-08T14:49:35Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T14:48:12Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
C-guy/Model-V | C-guy | 2023-07-08T14:49:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T17:00:25Z | ---
license: creativeml-openrail-m
---
|
RogerB/afriberta_base-finetuned-kintweetsC | RogerB | 2023-07-08T14:40:28Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T14:28:03Z | ---
tags:
- generated_from_trainer
model-index:
- name: afriberta_base-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_base-finetuned-kintweetsC
This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4096 | 1.0 | 900 | 4.1336 |
| 4.1389 | 2.0 | 1800 | 3.9637 |
| 4.0421 | 3.0 | 2700 | 4.0400 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyavika/LR1E5-BS8-Distilbert-QA-Pytorch-FULL | tyavika | 2023-07-08T14:39:57Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T12:07:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E5-BS8-Distilbert-QA-Pytorch-FULL.pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E5-BS8-Distilbert-QA-Pytorch-FULL.pt
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3783 | 1.0 | 6580 | 1.2680 |
| 1.1465 | 2.0 | 13160 | 1.1625 |
| 0.8655 | 3.0 | 19740 | 1.1681 |
| 0.7235 | 4.0 | 26320 | 1.2312 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RogerB/afriberta_large-finetuned-kintweetsC | RogerB | 2023-07-08T14:26:56Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T14:13:13Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afriberta_large-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_large-finetuned-kintweetsC
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3534 | 1.0 | 900 | 4.0667 |
| 4.0818 | 2.0 | 1800 | 3.9280 |
| 3.9884 | 3.0 | 2700 | 3.9982 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bpw1621/ppo-Huggy | bpw1621 | 2023-07-08T14:10:30Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-08T14:10:20Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: bpw1621/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sled-umich/OctoBERT | sled-umich | 2023-07-08T13:51:05Z | 0 | 0 | null | [
"arxiv:2306.08685",
"region:us"
] | null | 2023-07-07T08:25:33Z | Weights for the pretrained OctoBERT model.
[Model Demo](https://huggingface.co/spaces/sled-umich/OctoBERT-flickr-demo) • [Paper](https://arxiv.org/abs/2306.08685)
[Ziqiao Ma](https://mars-tin.github.io/)\*, [Jiayi Pan](https://www.jiayipan.me/)\*, [Joyce Chai](https://web.eecs.umich.edu/~chaijy/) (\* denotes equal contribution) |
openchat/openchat_v2 | openchat | 2023-07-08T13:51:04Z | 1,485 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-07T15:30:54Z | ---
language:
- en
tags:
- llama
license: other
---
# OpenChat: Advancing Open-source Language Models with Imperfect Data
The OpenChat v2 family is inspired by offline reinforcement learning, including conditional behavior cloning (OpenChat-v2) and weighted behavior cloning (OpenChat-v2-w).
- **[OpenChat-v2-w](https://huggingface.co/openchat/openchat_v2_w)**: ~80k cleaned ShareGPT data with conditioning and weighted loss, based on LLaMA-13B with a context length of 2048.
- Achieves **50.9%** win-rate over ChatGPT on MT-bench.
- Achieves **79.4%** win-rate over ChatGPT on Vicuna-bench.
- Achieves **87.1%** win-rate over text-davinci-003 on AlpacaEval.
- **[OpenChat-v2](https://huggingface.co/openchat/openchat_v2)**: ~80k cleaned ShareGPT data with only conditioning, based on LLaMA-13B with a context length of 2048.
- Achieves **48.1%** win-rate over ChatGPT on MT-bench.
- Achieves **80.6%** win-rate over ChatGPT on Vicuna-bench.
- Achieves **85.0%** win-rate over text-davinci-003 on AlpacaEval.
## Code and Inference Server
We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository.
## Web UI
OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions.
## Conversation Template
The conversation template **involves concatenating tokens**, and cannot be expressed in plain-text.
Besides base model vocabulary, an end-of-turn token `<|end_of_turn|>` is added.
Here is an example of single-round conversation template:
```python
def tokenize_single_input(tokenizer, prompt):
# OpenChat V2
human_prefix = "User:"
prefix = "Assistant GPT4:"
eot_token = "<|end_of_turn|>"
bos_token = "<s>"
def _tokenize(text):
return tokenizer.convert_tokens_to_ids(tokenizer._tokenize(text))
def _tokenize_special(special_name):
return tokenizer.convert_tokens_to_ids(special_name)
return [_tokenize_special(bos_token)] + _tokenize(human_prefix) + _tokenize(prompt) + [_tokenize_special(eot_token)] + \
_tokenize(prefix)
```
To explore conditional language models, you can also set `prefix = "Assistant GPT3:"` to mimic ChatGPT behavior (this may cause performance degradation).
*Hint: In BPE, `tokenize(A) + tokenize(B)` does not always equals to `tokenize(A + B)`*
## Limitations
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
|
Cookieszz/Xiaoz | Cookieszz | 2023-07-08T13:39:19Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T13:39:19Z | ---
license: bigscience-openrail-m
---
|
mitra-mir/setfit_model_indepandance_epochs2 | mitra-mir | 2023-07-08T13:37:16Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-07T14:46:27Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 20 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
RogerB/afro-xlmr-large-finetuned-kintweetsC | RogerB | 2023-07-08T13:28:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T12:35:48Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-finetuned-kintweetsC
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6129 | 1.0 | 3000 | 2.3303 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Macosrun/Macrown | Macosrun | 2023-07-08T13:26:14Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T13:26:13Z | ---
license: bigscience-openrail-m
---
|
swl-models/ShyakuJXMix-QianJiePoSuo | swl-models | 2023-07-08T13:11:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T13:07:47Z | ---
license: creativeml-openrail-m
---
|
Falah/stable_diffusion_prompts_gen | Falah | 2023-07-08T13:05:30Z | 21 | 3 | diffusers | [
"diffusers",
"pytorch",
"gpt2",
"art",
"stable diffusion",
"text-generation",
"en",
"dataset:Falah/stable_diffusion_prompts_dataset",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-07-08T09:30:49Z | ---
license: apache-2.0
datasets:
- Falah/stable_diffusion_prompts_dataset
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-generation
tags:
- art
- stable diffusion
- gpt2
---
# Stable Diffusion Prompts Generation Model
This model is designed for generating illustration art style prompts for the Stable Diffusion tool for text-to-image generation.
It utilizes the custom dataset "Falah/stable_diffusion_prompts_dataset" to generate creative and coherent text prompts.
## Examples
To load the model and generate inferences using the model, you can use the following code snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Falah/stable_diffusion_prompts_gen"
dataset_name = "Falah/stable_diffusion_prompts_dataset"
prompt = r'a beautiful female' # the beginning of the prompt
temperature = 0.9 # A higher temperature will produce more diverse results, but with a higher risk of less coherent text
top_k = 8 # the number of tokens to sample from at each step
max_length = 200 # the maximum number of tokens for the output of the model
repetition_penalty = 1.2 # the penalty value for each repetition of a token
num_return_sequences = 5 # the number of results to generate
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output = model.generate(
input_ids,
do_sample=True,
temperature=temperature,
top_k=top_k,
max_length=max_length,
num_return_sequences=num_return_sequences,
repetition_penalty=repetition_penalty,
early_stopping=True
)
print('\033[96m' + prompt + '\033[0m')
for i in range(len(output)):
print(tokenizer.decode(output[i], skip_special_tokens=True) + '\n')
```
These are examples of prompts generating and testing the model with the website [STABLE DIFFUSION XL](https://clipdrop.co/) for the stable diffusion model
generating images from prompts
```
a beautiful female
a beautiful female woman, and she's got the best hair in this world. I'm not saying her look is bad (I think it has to be), but my point was that when one looks at these things like we're all looking for something different about our bodies as individuals they are completely wrong; there isn't anything inherently evil with being an animal or having two legs instead of just walking on both sides of you while holding your other leg up so tightly around yourself - no matter how
```

## another generating prompts
```
a beautiful female and she's been in the business for over 30 years.
I've had my fair share of bad things, and I'm sure many more will befall me at some point as well… but it is one thing when you have such an incredible woman on your team that makes life so difficult to bear (aside from being very much human) while also having her back with no regard whatsoever towards any personal issues or even just trying desperately hard not too far away! And
```

--------------

Feel free to modify the parameters like `prompt`, `temperature`, `top_k`, etc., to experiment with different outputs.
## Citation
If you use this model or the associated dataset in your research or projects, please cite it as follows:
```
@sd_prompts{stable_diffussion_prompts_generating_gpt2),
author = {Falah.G.Salieh},
title = {Stable Diffusion Prompts Generating By fine-tuning GPT2 },
year = {2023},
publisher = {Hugging Face},
url = {https://huggingface.co/Falah/stable_diffusion_prompts_gen},
}
```
## License
This project is licensed under the Apache License, Version 2.0. Please see the [LICENSE](link-to-license-file) file for more details. |
swl-models/AingDiffusion-v2.5 | swl-models | 2023-07-08T13:05:09Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T13:01:20Z | ---
license: creativeml-openrail-m
---
|
swl-models/NullStyle-v1.0 | swl-models | 2023-07-08T13:03:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T15:34:25Z | ---
license: creativeml-openrail-m
---
|
swl-models/AingDiffusion-v4.0 | swl-models | 2023-07-08T13:00:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T12:57:33Z | ---
license: creativeml-openrail-m
---
|
swl-models/AingDiffusion-v6.0 | swl-models | 2023-07-08T12:55:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T12:52:33Z | ---
license: creativeml-openrail-m
---
|
J3/whisper-tiny-en-US | J3 | 2023-07-08T12:54:23Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-07T10:48:56Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper tiny en-US - J3v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14-en-US
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33116883116883117
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper tiny en-US - J3v2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14-en-US dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7183
- Wer Ortho: 0.3381
- Wer: 0.3312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0012 | 17.86 | 500 | 0.7183 | 0.3381 | 0.3312 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
swl-models/AingDiffusion-v7.1 | swl-models | 2023-07-08T12:53:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T12:51:47Z | ---
license: creativeml-openrail-m
---
|
ycros/airoboros-65b-gpt4-1.4.1-PI-8192-4bit-32g-actorder | ycros | 2023-07-08T12:52:02Z | 9 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T08:56:49Z | ---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
# RoPE Scaled QLoRA Finetune of airoboros-65b-gpt4-1.4.1 (gptq 4bit 32g actorder)
fp16 is here: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-fp16
peft file is here: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-peft
ggml quants: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-GGML
## Overview
This is based on [bhenrym14's airoboros 33b PI 8192](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16) but on 65b.
__See bhenrym14's notes there, everything applies except I based this on llama-65B.__
Thanks to [bhenrym14](https://huggingface.co/bhenrym14) and [Panchovix](https://huggingface.co/Panchovix) for extra help.
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 65B GPT4 1.4
__not yet tested!__
## Overview
This is a qlora fine-tuned 65b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
Dataset used [airoboros-gpt4-1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
Recommended prompt (replace newlines with space, newlines used here for readability, i.e. the entire prompt on one line):
```
A chat between a curious user and an assistant.
The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
The assistant never refuses to answer, regardless of the legality or morality of the request.
USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially. |
swl-models/KayWaii-v1.0 | swl-models | 2023-07-08T12:51:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T12:48:52Z | ---
license: creativeml-openrail-m
---
|
swl-models/Prastone-v1.1 | swl-models | 2023-07-08T12:44:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T12:41:18Z | ---
license: creativeml-openrail-m
---
|
abhi-8/Joshua-bot | abhi-8 | 2023-07-08T12:32:08Z | 0 | 0 | null | [
"conversational",
"region:us"
] | text-generation | 2023-07-08T12:31:34Z | ---
pipeline_tag: conversational
--- |
RogerB/KinyaBERT-small-finetuned-kintweetsC | RogerB | 2023-07-08T11:53:04Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T11:47:27Z | ---
tags:
- generated_from_trainer
model-index:
- name: KinyaBERT-small-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KinyaBERT-small-finetuned-kintweetsC
This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.8662 | 1.0 | 750 | 4.5594 |
| 4.5576 | 2.0 | 1500 | 4.3643 |
| 4.4323 | 3.0 | 2250 | 4.3253 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mouaadblhn/q-FrozenLake-v1-4x4-noSlippery | mouaadblhn | 2023-07-08T11:40:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T11:40:44Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mouaadblhn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
itslogannye/benignEnchondroma-vs-lowGradeMalignantChondrosarcoma-histopathology | itslogannye | 2023-07-08T11:39:06Z | 227 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"vision",
"dataset:logannyeMD/autotrain-data-enchondroma-vs-low-grade-chondrosarcoma-histology",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-01-19T13:25:30Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- logannyeMD/autotrain-data-enchondroma-vs-low-grade-chondrosarcoma-histology
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 3.6593488665934646
license: apache-2.0
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2962985627
- CO2 Emissions (in grams): 3.6593
## Validation Metrics
- Loss: 0.229
- Accuracy: 0.887
- Precision: 0.939
- Recall: 0.821
- AUC: 0.969
- F1: 0.876 |
jkraushaar/distilbert-base-uncased-finetuned-emotion | jkraushaar | 2023-07-08T11:31:58Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T18:05:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245071578761553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2093
- Accuracy: 0.9245
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2993 | 0.91 | 0.9084 |
| No log | 2.0 | 500 | 0.2093 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nopperl/alpaca-lora-7b-german-base-51k-ggml | nopperl | 2023-07-08T11:06:41Z | 7 | 5 | transformers | [
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-10T22:54:33Z | ---
license: apache-2.0
---
<p align="center" width="100%">
<img src="https://huggingface.co/nopperl/alpaca-lora-7b-german-base-51k-ggml/raw/main/zicklein-ggml.jpg" alt="a lean, scrawny llama at the oktoberfest" style="width: 20%; min-width: 300px; display: block; margin: auto;">
</p>
# Zicklein-GGML
GGML conversion of [Zicklein](https://github.com/avocardio/zicklein) (a German [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) LoRa for [LLaMA](https://github.com/facebookresearch/llama)). Compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp) version master-2d43387 or later. See [Alpaca](https://github.com/tatsu-lab/stanford_alpaca#data-release) for instructions on how to prompt the model.
More information about the conversion process is in this [git repo](https://github.com/nopperl/Zicklein-GGML).
|
jayanta/microsoft-resnet-50-cartoon-emotion-detection | jayanta | 2023-07-08T11:03:28Z | 330 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-01-21T11:44:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: microsoft-resnet-50-cartoon-emotion-detection
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8165137614678899
- name: Precision
type: precision
value: 0.8181998512273742
- name: Recall
type: recall
value: 0.8165137614678899
- name: F1
type: f1
value: 0.8172526992448356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-resnet-50-cartoon-emotion-detection
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4801
- Accuracy: 0.8165
- Precision: 0.8182
- Recall: 0.8165
- F1: 0.8173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.97 | 8 | 1.3855 | 0.2294 | 0.2697 | 0.2294 | 0.2165 |
| 1.4222 | 1.97 | 16 | 1.3792 | 0.2569 | 0.2808 | 0.2569 | 0.2543 |
| 1.4183 | 2.97 | 24 | 1.3646 | 0.3853 | 0.4102 | 0.3853 | 0.3511 |
| 1.4097 | 3.97 | 32 | 1.3563 | 0.4128 | 0.5062 | 0.4128 | 0.3245 |
| 1.3944 | 4.97 | 40 | 1.3462 | 0.4037 | 0.3927 | 0.4037 | 0.2939 |
| 1.3944 | 5.97 | 48 | 1.3223 | 0.4037 | 0.5152 | 0.4037 | 0.2841 |
| 1.411 | 6.97 | 56 | 1.3040 | 0.4128 | 0.4404 | 0.4128 | 0.2985 |
| 1.346 | 7.97 | 64 | 1.2700 | 0.4954 | 0.4960 | 0.4954 | 0.4093 |
| 1.3031 | 8.97 | 72 | 1.2150 | 0.5596 | 0.5440 | 0.5596 | 0.4672 |
| 1.2371 | 9.97 | 80 | 1.1580 | 0.5963 | 0.5659 | 0.5963 | 0.5101 |
| 1.2371 | 10.97 | 88 | 1.0670 | 0.6055 | 0.7279 | 0.6055 | 0.5211 |
| 1.1736 | 11.97 | 96 | 0.9856 | 0.6606 | 0.5537 | 0.6606 | 0.5772 |
| 1.0457 | 12.97 | 104 | 0.8963 | 0.6697 | 0.7631 | 0.6697 | 0.5965 |
| 0.953 | 13.97 | 112 | 0.8547 | 0.6697 | 0.6885 | 0.6697 | 0.6081 |
| 0.8579 | 14.97 | 120 | 0.7849 | 0.7156 | 0.7396 | 0.7156 | 0.6643 |
| 0.8579 | 15.97 | 128 | 0.7564 | 0.7431 | 0.7372 | 0.7431 | 0.7119 |
| 0.8167 | 16.97 | 136 | 0.7133 | 0.7615 | 0.7507 | 0.7615 | 0.7211 |
| 0.7273 | 17.97 | 144 | 0.6888 | 0.7523 | 0.7379 | 0.7523 | 0.7202 |
| 0.6547 | 18.97 | 152 | 0.6592 | 0.7798 | 0.7773 | 0.7798 | 0.7577 |
| 0.5963 | 19.97 | 160 | 0.6136 | 0.7706 | 0.7642 | 0.7706 | 0.7551 |
| 0.5963 | 20.97 | 168 | 0.5723 | 0.7890 | 0.7802 | 0.7890 | 0.7787 |
| 0.551 | 21.97 | 176 | 0.5686 | 0.7890 | 0.7761 | 0.7890 | 0.7781 |
| 0.4929 | 22.97 | 184 | 0.5597 | 0.7706 | 0.7649 | 0.7706 | 0.7651 |
| 0.4309 | 23.97 | 192 | 0.5234 | 0.7890 | 0.7774 | 0.7890 | 0.7810 |
| 0.3945 | 24.97 | 200 | 0.5008 | 0.7890 | 0.7837 | 0.7890 | 0.7813 |
| 0.3945 | 25.97 | 208 | 0.5289 | 0.7523 | 0.7537 | 0.7523 | 0.7529 |
| 0.3704 | 26.97 | 216 | 0.4399 | 0.7982 | 0.7958 | 0.7982 | 0.7963 |
| 0.3267 | 27.97 | 224 | 0.4539 | 0.8073 | 0.7983 | 0.8073 | 0.8005 |
| 0.2966 | 28.97 | 232 | 0.4735 | 0.7798 | 0.7892 | 0.7798 | 0.7837 |
| 0.2645 | 29.97 | 240 | 0.4594 | 0.7706 | 0.7706 | 0.7706 | 0.7706 |
| 0.2645 | 30.97 | 248 | 0.4699 | 0.7523 | 0.7554 | 0.7523 | 0.7533 |
| 0.2527 | 31.97 | 256 | 0.4551 | 0.7890 | 0.7856 | 0.7890 | 0.7857 |
| 0.2202 | 32.97 | 264 | 0.4458 | 0.8165 | 0.8198 | 0.8165 | 0.8170 |
| 0.2006 | 33.97 | 272 | 0.4632 | 0.7798 | 0.7941 | 0.7798 | 0.7850 |
| 0.1589 | 34.97 | 280 | 0.4651 | 0.7890 | 0.7993 | 0.7890 | 0.7925 |
| 0.1589 | 35.97 | 288 | 0.4595 | 0.7798 | 0.7824 | 0.7798 | 0.7804 |
| 0.153 | 36.97 | 296 | 0.4584 | 0.7615 | 0.7691 | 0.7615 | 0.7633 |
| 0.1427 | 37.97 | 304 | 0.4608 | 0.7798 | 0.7830 | 0.7798 | 0.7796 |
| 0.113 | 38.97 | 312 | 0.4571 | 0.7890 | 0.7922 | 0.7890 | 0.7899 |
| 0.1146 | 39.97 | 320 | 0.5270 | 0.7615 | 0.7651 | 0.7615 | 0.7613 |
| 0.1146 | 40.97 | 328 | 0.4888 | 0.7706 | 0.7782 | 0.7706 | 0.7710 |
| 0.1275 | 41.97 | 336 | 0.4523 | 0.7890 | 0.7809 | 0.7890 | 0.7837 |
| 0.0959 | 42.97 | 344 | 0.4697 | 0.7798 | 0.7753 | 0.7798 | 0.7767 |
| 0.0882 | 43.97 | 352 | 0.4286 | 0.7706 | 0.7686 | 0.7706 | 0.7686 |
| 0.0847 | 44.97 | 360 | 0.5317 | 0.7890 | 0.7993 | 0.7890 | 0.7925 |
| 0.0847 | 45.97 | 368 | 0.5431 | 0.7615 | 0.7700 | 0.7615 | 0.7647 |
| 0.0813 | 46.97 | 376 | 0.4432 | 0.8257 | 0.8435 | 0.8257 | 0.8284 |
| 0.0768 | 47.97 | 384 | 0.4886 | 0.7982 | 0.8005 | 0.7982 | 0.7956 |
| 0.0627 | 48.97 | 392 | 0.5373 | 0.7982 | 0.8072 | 0.7982 | 0.8010 |
| 0.0688 | 49.97 | 400 | 0.5897 | 0.7798 | 0.7892 | 0.7798 | 0.7822 |
| 0.0688 | 50.97 | 408 | 0.5115 | 0.7982 | 0.8015 | 0.7982 | 0.7992 |
| 0.0676 | 51.97 | 416 | 0.4881 | 0.7982 | 0.7998 | 0.7982 | 0.7978 |
| 0.0539 | 52.97 | 424 | 0.4820 | 0.8073 | 0.8139 | 0.8073 | 0.8077 |
| 0.0596 | 53.97 | 432 | 0.4450 | 0.8257 | 0.8246 | 0.8257 | 0.8244 |
| 0.0611 | 54.97 | 440 | 0.5057 | 0.7890 | 0.8008 | 0.7890 | 0.7924 |
| 0.0611 | 55.97 | 448 | 0.4918 | 0.7982 | 0.8056 | 0.7982 | 0.8008 |
| 0.0643 | 56.97 | 456 | 0.5946 | 0.7523 | 0.7587 | 0.7523 | 0.7545 |
| 0.0605 | 57.97 | 464 | 0.4888 | 0.8073 | 0.8239 | 0.8073 | 0.8121 |
| 0.063 | 58.97 | 472 | 0.5917 | 0.7890 | 0.8051 | 0.7890 | 0.7937 |
| 0.0595 | 59.97 | 480 | 0.5117 | 0.7890 | 0.7904 | 0.7890 | 0.7894 |
| 0.0595 | 60.97 | 488 | 0.5497 | 0.7615 | 0.7692 | 0.7615 | 0.7635 |
| 0.0554 | 61.97 | 496 | 0.4742 | 0.8165 | 0.8101 | 0.8165 | 0.8126 |
| 0.0557 | 62.97 | 504 | 0.5369 | 0.7890 | 0.7886 | 0.7890 | 0.7886 |
| 0.0539 | 63.97 | 512 | 0.5440 | 0.7890 | 0.7922 | 0.7890 | 0.7899 |
| 0.048 | 64.97 | 520 | 0.5924 | 0.7890 | 0.7878 | 0.7890 | 0.7883 |
| 0.048 | 65.97 | 528 | 0.4863 | 0.8440 | 0.8440 | 0.8440 | 0.8440 |
| 0.045 | 66.97 | 536 | 0.5850 | 0.8073 | 0.8076 | 0.8073 | 0.8047 |
| 0.047 | 67.97 | 544 | 0.4939 | 0.8257 | 0.8212 | 0.8257 | 0.8227 |
| 0.0412 | 68.97 | 552 | 0.4850 | 0.7890 | 0.7912 | 0.7890 | 0.7900 |
| 0.0392 | 69.97 | 560 | 0.5066 | 0.8257 | 0.8265 | 0.8257 | 0.8258 |
| 0.0392 | 70.97 | 568 | 0.4965 | 0.8073 | 0.8053 | 0.8073 | 0.8058 |
| 0.0423 | 71.97 | 576 | 0.4717 | 0.8349 | 0.8376 | 0.8349 | 0.8351 |
| 0.0471 | 72.97 | 584 | 0.4845 | 0.8257 | 0.8378 | 0.8257 | 0.8296 |
| 0.0322 | 73.97 | 592 | 0.5188 | 0.7706 | 0.7689 | 0.7706 | 0.7693 |
| 0.042 | 74.97 | 600 | 0.5242 | 0.7706 | 0.7699 | 0.7706 | 0.7701 |
| 0.042 | 75.97 | 608 | 0.5945 | 0.7798 | 0.7824 | 0.7798 | 0.7804 |
| 0.0416 | 76.97 | 616 | 0.5432 | 0.7982 | 0.8038 | 0.7982 | 0.7993 |
| 0.0399 | 77.97 | 624 | 0.5381 | 0.7982 | 0.8072 | 0.7982 | 0.7994 |
| 0.0439 | 78.97 | 632 | 0.6181 | 0.7798 | 0.7878 | 0.7798 | 0.7827 |
| 0.0462 | 79.97 | 640 | 0.4801 | 0.8165 | 0.8182 | 0.8165 | 0.8173 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.11.0
|
Nour33/t5-small-finetuned-samsum | Nour33 | 2023-07-08T10:52:03Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-02-03T21:32:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7087
- Validation Loss: 1.6756
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 14728, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1000 | 1.7915 | 0 |
| 1.9259 | 1.7424 | 1 |
| 1.8512 | 1.7167 | 2 |
| 1.8005 | 1.6925 | 3 |
| 1.7655 | 1.6840 | 4 |
| 1.7392 | 1.6799 | 5 |
| 1.7204 | 1.6757 | 6 |
| 1.7087 | 1.6756 | 7 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Aryapjr14/Dream-world | Aryapjr14 | 2023-07-08T10:41:52Z | 0 | 0 | null | [
"art",
"Anime",
"Sexy",
"2.5D",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-16T10:56:18Z | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- art
- Anime
- Sexy
- 2.5D
--- |
Madan490/finetuned_multi_news_bart_text_summarisation | Madan490 | 2023-07-08T10:19:43Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"summarization",
"dataset:multi_news",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-07-08T09:13:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: finetuned_multi_news_bart_text_summarisation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: textsummarization
dataset:
name: multi_news
type: multi_news
config: default
split: test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.4038
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_multi_news_bart_text_summarisation
This model is a fine-tuned version of [slauw87/bart_summarisation](https://huggingface.co/slauw87/bart_summarisation) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8952
- Rouge1: 0.4038
- Rouge2: 0.1389
- Rougel: 0.2155
- Rougelsum: 0.2147
- Gen Len: 138.7667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| No log | 1.0 | 15 | 2.9651 | 0.3903 | 0.134 | 0.21 | 0.2098 | 137.6 |
| No log | 2.0 | 30 | 2.8952 | 0.4038 | 0.1389 | 0.2155 | 0.2147 | 138.7667 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |
mpetrikov/Pixelcopter-PLE-v0 | mpetrikov | 2023-07-08T10:17:17Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-07T22:48:25Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.90 +/- 29.17
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Sukmin/dqn-SpaceInvadersNoFrameskip-v4 | Sukmin | 2023-07-08T10:11:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T10:10:52Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 565.50 +/- 178.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sukmin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sukmin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sukmin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
susnato/whisper-tiny-en-minds14_2 | susnato | 2023-07-08T10:08:34Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-08T10:06:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Whisper Tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Minds 14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3919716646989374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Minds 14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8095
- Wer Ortho: 0.4257
- Wer: 0.3920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.354 | 1.0 | 15 | 0.8095 | 0.4257 | 0.3920 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.2
|
raygx/Nepali-GPT2-CausalLM | raygx | 2023-07-08T10:03:57Z | 61 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-29T04:57:51Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: Nepali-GPT2-CausalLM
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Nepali-GPT2-CausalLM
This model is a fine-tuned version of [raygx/Nepali-GPT2-CausalLM](https://huggingface.co/raygx/Nepali-GPT2-CausalLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.7022
- Validation Loss: 4.6237
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.8141 | 4.6678 | 0 |
| 4.7022 | 4.6237 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Khushnur/t5-base-end2end-questions-generation_squad_aug | Khushnur | 2023-07-08T09:46:13Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-08T08:11:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-end2end-questions-generation_squad_aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-end2end-questions-generation_squad_aug
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9281 | 0.25 | 100 | 3.0443 |
| 1.7378 | 0.5 | 200 | 3.0395 |
| 1.6719 | 0.76 | 300 | 3.0509 |
| 1.6495 | 1.01 | 400 | 3.0564 |
| 1.572 | 1.26 | 500 | 3.0780 |
| 1.5609 | 1.51 | 600 | 3.0569 |
| 1.5684 | 1.76 | 700 | 3.0696 |
| 1.5579 | 2.01 | 800 | 3.0729 |
| 1.5017 | 2.27 | 900 | 3.0898 |
| 1.5079 | 2.52 | 1000 | 3.0879 |
| 1.503 | 2.77 | 1100 | 3.0874 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aleksahet/xlm-r-squad-sr-lat | aleksahet | 2023-07-08T09:35:56Z | 107 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"sr",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-04T14:00:15Z | ---
language:
- sr
metrics:
- f1
- exact_match
library_name: transformers
pipeline_tag: question-answering
---
# XLM-R-SQuAD-sr-lat
This is XLM-R-based model finetuned on synthetic question answering dataset which is created by translating SQuAD 1.1. This model is the result of my thesis.
# Usage
```python
from transformers import pipeline
model_name = 'aleksahet/xlm-r-squad-sr-lat'
pipe = pipeline('question-answering', model=model_name, tokenizer=model_name)
sample = {
'question': 'U kom gradu je rođen Željko Obradović?',
'context': 'Željko Obradović (Čačak, 9. mart 1960) bivši je srpski i jugoslovenski košarkaš. Najuspešniji je trener u istoriji košarke.'
}
res = pipe(sample)
```
# Performance
Model was tested on synthetic question answering dataset, created by automatic translation of SQuAD 1.1 dev split. The model achieved the following results:
- Exact Match: ```71.04```
- F1: ```81.62```
# Source Code
Source code for synthetic dataset generation and model finetuning can be found on this [GitHub repository](https://github.com/aleksac99/SQuAD-SR/). |
imdanboy/kss_jets | imdanboy | 2023-07-08T09:29:54Z | 0 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"ko",
"dataset:kss",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2023-07-08T09:27:00Z | ---
tags:
- espnet
- audio
- text-to-speech
language: ko
datasets:
- kss
license: cc-by-4.0
---
## ESPnet2 TTS model
### `imdanboy/kss_jets`
This model was trained by imdanboy using kss recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 967ddbed826a7c90b75be2a7129588442d5cb6af
pip install -e .
cd egs2/kss/tts1
./run.sh --skip_data_prep false --skip_train true --download_model imdanboy/kss_jets
```
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_jets.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_jets_raw_phn_g2pk_no_space
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 51627
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- text2mel_loss
- min
- - train
- text2mel_loss
- min
- - train
- total_count
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 4500000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_g2pk_no_space/train/text_shape.phn
- exp/tts_stats_raw_phn_g2pk_no_space/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_g2pk_no_space/valid/text_shape.phn
- exp/tts_stats_raw_phn_g2pk_no_space/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_stats_raw_phn_g2pk_no_space/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_stats_raw_phn_g2pk_no_space/train/collect_feats/energy.scp
- energy
- npy
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_stats_raw_phn_g2pk_no_space/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_stats_raw_phn_g2pk_no_space/valid/collect_feats/energy.scp
- energy
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adamw
optim_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: true
token_list:
- <blank>
- <unk>
- ᅡ
- ᅵ
- ᄋ
- ᅳ
- ᄀ
- ᅥ
- ᄂ
- ᆫ
- ᄅ
- ᄌ
- ᄉ
- ᅩ
- ᆯ
- ᄆ
- .
- ᅮ
- ᄃ
- ᄒ
- ᅦ
- ᆼ
- ᅢ
- ᄇ
- ᅭ
- ᅧ
- ᄊ
- ᆷ
- ᄄ
- ᆮ
- ᄎ
- ᄁ
- ᆨ
- ᄑ
- ᄐ
- ᅪ
- ᄏ
- '?'
- ᄍ
- ᆸ
- ᅬ
- ᅣ
- ᅴ
- ᅯ
- ᅨ
- ᄈ
- ᅱ
- ᅲ
- ᅫ
- ','
- '!'
- ᅤ
- ':'
- ᅰ
- ''''
- '-'
- '"'
- /
- I
- M
- F
- E
- S
- C
- A
- B
- ㅇ
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: g2pk_no_space
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_g2pk_no_space/train/feats_stats.npz
tts: jets
tts_conf:
generator_type: jets_generator
generator_params:
adim: 256
aheads: 2
elayers: 4
eunits: 1024
dlayers: 4
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
encoder_type: transformer
decoder_type: transformer
conformer_rel_pos_type: latest
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
generator_out_channels: 1
generator_channels: 512
generator_global_channels: -1
generator_kernel_size: 7
generator_upsample_scales:
- 8
- 8
- 2
- 2
generator_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
generator_resblock_kernel_sizes:
- 3
- 7
- 11
generator_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
generator_use_additional_convs: true
generator_bias: true
generator_nonlinear_activation: LeakyReLU
generator_nonlinear_activation_params:
negative_slope: 0.1
generator_use_weight_norm: true
segment_size: 32
idim: 68
odim: 80
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 24000
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_var: 1.0
lambda_align: 1.0
sampling_rate: 24000
cache_generator_outputs: true
pitch_extract: dio
pitch_extract_conf:
reduction_factor: 1
use_token_averaged_f0: false
fs: 24000
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_stats_raw_phn_g2pk_no_space/train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
reduction_factor: 1
use_token_averaged_energy: false
fs: 24000
n_fft: 1024
hop_length: 256
win_length: null
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_stats_raw_phn_g2pk_no_space/train/energy_stats.npz
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
hoanghoavienvo/xroberta-base-detect-depression-stage-one-ver2 | hoanghoavienvo | 2023-07-08T09:27:49Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-08T08:22:26Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xroberta-base-detect-depression-stage-one-ver2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xroberta-base-detect-depression-stage-one-ver2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6403
- Accuracy: 0.717
- F1: 0.7893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6432 | 1.0 | 751 | 0.5957 | 0.743 | 0.8219 |
| 0.6285 | 2.0 | 1502 | 0.5813 | 0.719 | 0.8085 |
| 0.6165 | 3.0 | 2253 | 0.6459 | 0.7 | 0.7976 |
| 0.5371 | 4.0 | 3004 | 0.5612 | 0.743 | 0.8120 |
| 0.46 | 5.0 | 3755 | 0.6403 | 0.717 | 0.7893 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dvinagre/whisper-tiny-en-finetuned-polyai-minds14-en-us | dvinagre | 2023-07-08T09:14:13Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T14:32:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-finetuned-polyai-minds14-en-us
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33943329397874855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-finetuned-polyai-minds14-en-us
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6329
- Wer Ortho: 0.3430
- Wer: 0.3394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0009 | 17.86 | 500 | 0.6329 | 0.3430 | 0.3394 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
imtiaz114/bert-finetuned-ner-baseline-1 | imtiaz114 | 2023-07-08T09:05:38Z | 7 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-07T20:25:11Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: imtiaz114/bert-finetuned-ner-baseline-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# imtiaz114/bert-finetuned-ner-baseline-1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0916
- Validation Loss: 0.2890
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5970, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4561 | 0.3479 | 0 |
| 0.3119 | 0.2839 | 1 |
| 0.2518 | 0.2636 | 2 |
| 0.2122 | 0.2485 | 3 |
| 0.1802 | 0.2579 | 4 |
| 0.1542 | 0.2584 | 5 |
| 0.1326 | 0.2698 | 6 |
| 0.1178 | 0.2726 | 7 |
| 0.1011 | 0.2845 | 8 |
| 0.0916 | 0.2890 | 9 |
### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
hermanshid/distilbert-id-law | hermanshid | 2023-07-08T09:01:08Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-01T20:17:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-id-law
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-id-law
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2874 | 1.0 | 6262 | 2.1654 |
| 2.0961 | 2.0 | 12524 | 2.0036 |
| 2.0255 | 3.0 | 18786 | 1.9364 |
| 1.9767 | 4.0 | 25048 | 1.9011 |
| 1.9579 | 5.0 | 31310 | 1.8912 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Trong-Nghia/electra-base-discriminator-detect-dep | Trong-Nghia | 2023-07-08T08:56:23Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-08T08:24:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-base-discriminator-detect-dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-detect-dep
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5410
- Accuracy: 0.738
- F1: 0.8104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 376 | 0.5444 | 0.742 | 0.8213 |
| 0.6126 | 2.0 | 752 | 0.5450 | 0.739 | 0.8145 |
| 0.5749 | 3.0 | 1128 | 0.5410 | 0.738 | 0.8104 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ruyaka/ppo-Huggy | ruyaka | 2023-07-08T08:46:50Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-08T08:46:44Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ruyaka/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mrizalf7/xlm-r-qa-squad2.0-squad-1.1-2 | mrizalf7 | 2023-07-08T08:44:09Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T08:19:16Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad2.0-squad-1.1-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad2.0-squad-1.1-2
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 318 | 3.1504 |
| 3.1118 | 2.0 | 636 | 3.2360 |
| 3.1118 | 3.0 | 954 | 3.3456 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ak2704/dqn-SpaceInvadersNoFrameskip-v4 | ak2704 | 2023-07-08T08:42:28Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T08:42:02Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 329.00 +/- 157.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ak2704 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ak2704 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ak2704
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.1),
('learning_starts', 10000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Sobiyaselvakumar/donut-base-sroie | Sobiyaselvakumar | 2023-07-08T08:38:24Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-07T11:14:27Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Den4ikAI/ruBert-tiny-replicas-classifier | Den4ikAI | 2023-07-08T08:17:49Z | 111 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-17T14:26:55Z | ---
license: mit
language:
- ru
widget:
- text: 'привет'
example_title: example_1
- text: 'тебя как звать'
example_title: example_2
- text: 'как приготовить рагу'
example_title: example_3
- text: 'в чем смысл жизни'
example_title: example_4
- text: 'у меня кот сбежал'
example_title: example_5
- text: 'что такое спидометр'
example_title: example_6
- text: 'меня артур зовут'
example_title: example_7
---
# Den4ikAI/ruBert-tiny-replicas-classifier
Описание классов:
1. about_user - реагирует, когда пользователь говорит о себе. Например, "меня зовут андрей"
2. question - реагирует на вопросы
3. instruct - реагирует на вопросы, ответ на которые представляет собой инструкцию. Например, "как установить windows, как приготовить борщ"
4. about_system - реагирует на вопросы о личности ассистента. Например, "как тебя зовут, ты кто такая"
5. problem - реагирует на реплики, где пользователь рассказывает о своих проблемах. Например, "у меня болит зуб, мне проткнули колесо"
6. dialogue - реагирует на диалоговые реплики. Например, "привет"
Примечание: модель обучалась без знаков '?'
# Использование
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('Den4ikAI/ruBert-tiny-replicas-classifier')
model = AutoModelForSequenceClassification.from_pretrained('Den4ikAI/ruBert-tiny-replicas-classifier')
model.to(device)
model.eval()
classes = ['instruct', 'question', 'dialogue', 'problem', 'about_system', 'about_user']
def get_sentence_type(text):
inputs = tokenizer(text, max_length=512, add_special_tokens=False, return_tensors='pt').to(device)
with torch.no_grad():
logits = model(**inputs).logits
probas = list(torch.sigmoid(logits)[0].cpu().detach().numpy())
out = classes[probas.index(max(probas))]
return out
while 1:
print(get_sentence_type(input(":> ")))
``` |
rdmpage/autotrain-lasiocampidae-73081139111 | rdmpage | 2023-07-08T08:15:33Z | 182 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:rdmpage/autotrain-data-lasiocampidae",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-08T08:09:21Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- rdmpage/autotrain-data-lasiocampidae
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 2.232916388389464
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 73081139111
- CO2 Emissions (in grams): 2.2329
## Validation Metrics
- Loss: 0.365
- Accuracy: 0.871
- Macro F1: 0.824
- Micro F1: 0.871
- Weighted F1: 0.865
- Macro Precision: 0.898
- Micro Precision: 0.871
- Weighted Precision: 0.874
- Macro Recall: 0.796
- Micro Recall: 0.871
- Weighted Recall: 0.871 |
mrizalf7/xlm-r-qa-squad2.0-squad-1.1-1 | mrizalf7 | 2023-07-08T08:14:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T08:07:24Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad2.0-squad-1.1-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad2.0-squad-1.1-1
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 318 | 3.1818 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Olegiy/q-Taxi-v3 | Olegiy | 2023-07-08T07:35:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T07:35:50Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.84
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Olegiy/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hongrui/chest_v_1 | hongrui | 2023-07-08T07:33:43Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-03T23:39:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/chest_v_1
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/xray_v_1 dataset. You can find some example images in the following.




|
wytrnyte/q-learning-taxi-v3 | wytrnyte | 2023-07-08T07:24:16Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T07:24:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="wytrnyte/q-learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wytrnyte/q-FrozenLake-v1-4x4-noSlippery | wytrnyte | 2023-07-08T07:22:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T07:22:02Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="wytrnyte/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-concat-aochildes-len-16k-rarity-all-no-self-4k-1p2k | NasimB | 2023-07-08T07:20:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T05:26:57Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-len-16k-rarity-all-no-self-4k-1p2k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-len-16k-rarity-all-no-self-4k-1p2k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7394 | 0.3 | 500 | 5.6331 |
| 5.3748 | 0.59 | 1000 | 5.2044 |
| 5.0309 | 0.89 | 1500 | 4.9493 |
| 4.7518 | 1.18 | 2000 | 4.8041 |
| 4.5959 | 1.48 | 2500 | 4.6818 |
| 4.4873 | 1.77 | 3000 | 4.5795 |
| 4.3537 | 2.07 | 3500 | 4.5123 |
| 4.1676 | 2.36 | 4000 | 4.4632 |
| 4.1387 | 2.66 | 4500 | 4.3957 |
| 4.0998 | 2.95 | 5000 | 4.3479 |
| 3.8663 | 3.25 | 5500 | 4.3465 |
| 3.8329 | 3.54 | 6000 | 4.3101 |
| 3.8222 | 3.84 | 6500 | 4.2757 |
| 3.6816 | 4.13 | 7000 | 4.2834 |
| 3.5463 | 4.43 | 7500 | 4.2723 |
| 3.5397 | 4.72 | 8000 | 4.2563 |
| 3.5124 | 5.02 | 8500 | 4.2552 |
| 3.3501 | 5.31 | 9000 | 4.2619 |
| 3.3456 | 5.61 | 9500 | 4.2600 |
| 3.3437 | 5.9 | 10000 | 4.2593 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lordsauron/Taxi-v3 | lordsauron | 2023-07-08T07:16:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T07:16:34Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.63
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="lordsauron/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-concat-aochildes-len-16k-rarity-all-3k-p95k | NasimB | 2023-07-08T06:37:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T04:44:13Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-len-16k-rarity-all-3k-p95k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-len-16k-rarity-all-3k-p95k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7409 | 0.29 | 500 | 5.6340 |
| 5.3726 | 0.59 | 1000 | 5.2051 |
| 5.0259 | 0.88 | 1500 | 4.9531 |
| 4.7454 | 1.18 | 2000 | 4.8054 |
| 4.5934 | 1.47 | 2500 | 4.6754 |
| 4.4864 | 1.77 | 3000 | 4.5725 |
| 4.3481 | 2.06 | 3500 | 4.5028 |
| 4.1699 | 2.36 | 4000 | 4.4581 |
| 4.1348 | 2.65 | 4500 | 4.3968 |
| 4.0906 | 2.95 | 5000 | 4.3403 |
| 3.8679 | 3.24 | 5500 | 4.3395 |
| 3.8308 | 3.54 | 6000 | 4.3080 |
| 3.8137 | 3.83 | 6500 | 4.2756 |
| 3.6811 | 4.13 | 7000 | 4.2786 |
| 3.5439 | 4.42 | 7500 | 4.2680 |
| 3.5384 | 4.72 | 8000 | 4.2581 |
| 3.5122 | 5.01 | 8500 | 4.2522 |
| 3.3498 | 5.31 | 9000 | 4.2589 |
| 3.3434 | 5.6 | 9500 | 4.2583 |
| 3.3411 | 5.9 | 10000 | 4.2578 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mrizalf7/xlm-r-qa-squad2.0-squad-1.1-unmerged | mrizalf7 | 2023-07-08T06:20:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-06T14:37:09Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad2.0-squad-1.1-unmerged
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad2.0-squad-1.1-unmerged
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9127 | 1.0 | 636 | 3.2060 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lloydchang/wongstein-angry-validator | lloydchang | 2023-07-08T06:02:07Z | 188 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T05:55:33Z | ---
license: creativeml-openrail-m
---
|
CryoPhoenix9696/DragonAPIModel | CryoPhoenix9696 | 2023-07-08T06:01:40Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"en",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | null | 2023-07-08T06:00:10Z | ---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
--- |
lovelyxs/q-FrozenLake-v1-4x4-noSlippery | lovelyxs | 2023-07-08T05:37:07Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T05:37:05Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="lovelyxs/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Subsets and Splits