modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Pennywise881/dense-retriever-bert-base-uncased-mnr-squadv2 | Pennywise881 | 2023-02-19T13:00:03Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-19T12:55:37Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 5429 with parameters:
```
{'batch_size': 24}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 542,
"weight_decay": 0.01
}
```
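For reference, these parameters map onto a sentence-transformers training run roughly as sketched below; the starting checkpoint and the placeholder data are assumptions, not taken from this card.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Placeholder (question, passage) pairs -- real training needs a full dataset
pairs = [("What is the capital of France?", "Paris is the capital of France.")]
train_examples = [InputExample(texts=[q, p]) for q, p in pairs]

model = SentenceTransformer("bert-base-uncased")  # assumed starting checkpoint
loader = NoDuplicatesDataLoader(train_examples, batch_size=min(24, len(train_examples)))
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(loader, loss)],
    epochs=1,
    warmup_steps=542,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```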
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
caioiglesias/taxi-v3 | caioiglesias | 2023-02-19T12:11:13Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T12:11:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.85
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="caioiglesias/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
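Once loaded, greedy action selection from the Q-table looks like this (a sketch; the `qtable` key follows the Deep RL Course notebooks, and `reset`'s return shape depends on your gym version):
```python
import numpy as np

state = env.reset()  # gym >= 0.26 returns (state, info) instead
action = np.argmax(model["qtable"][state])  # greedy action from the learned Q-table
```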
|
caioiglesias/q-FrozenLake-v1-4x4-noSlippery | caioiglesias | 2023-02-19T12:09:22Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T12:09:19Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="caioiglesias/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akmalmasud96/wav2vec2-xls-r-1b-ur | akmalmasud96 | 2023-02-19T11:48:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-19T00:52:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-1b-ur
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ur
split: test
args: ur
metrics:
- name: Wer
type: wer
value: 0.48854134406937133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ur
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.4885
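A minimal transcription sketch using the standard transformers ASR pipeline (the audio path is a placeholder; input should be 16 kHz speech):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="akmalmasud96/wav2vec2-xls-r-1b-ur")
print(asr("urdu_sample.wav")["text"])  # hypothetical Urdu audio file
```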
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 12
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7368 | 0.48 | 300 | inf | 0.8191 |
| 1.8995 | 0.97 | 600 | inf | 0.7919 |
| 0.9144 | 1.45 | 900 | inf | 0.7805 |
| 1.166 | 1.94 | 1200 | inf | 0.7087 |
| 0.7972 | 2.42 | 1500 | inf | 0.6901 |
| 0.8604 | 2.9 | 1800 | inf | 0.6446 |
| 0.6569 | 3.39 | 2100 | inf | 0.6560 |
| 0.7267 | 3.87 | 2400 | inf | 0.6363 |
| 0.687 | 4.35 | 2700 | inf | 0.6343 |
| 0.7143 | 4.84 | 3000 | inf | 0.6176 |
| 0.5283 | 5.32 | 3300 | inf | 0.6084 |
| 0.6917 | 5.81 | 3600 | inf | 0.5942 |
| 0.5396 | 6.29 | 3900 | inf | 0.5988 |
| 0.5523 | 6.77 | 4200 | inf | 0.5600 |
| 0.3167 | 7.26 | 4500 | inf | 0.5648 |
| 0.3176 | 7.74 | 4800 | inf | 0.5424 |
| 0.3987 | 8.23 | 5100 | inf | 0.5440 |
| 0.3327 | 8.71 | 5400 | inf | 0.5316 |
| 0.1936 | 9.19 | 5700 | inf | 0.5285 |
| 0.4701 | 9.68 | 6000 | inf | 0.5207 |
| 0.3581 | 10.16 | 6300 | inf | 0.5176 |
| 0.4038 | 10.65 | 6600 | inf | 0.5259 |
| 0.2699 | 11.13 | 6900 | inf | 0.5226 |
| 0.2302 | 11.61 | 7200 | inf | 0.5181 |
| 0.3275 | 12.1 | 7500 | inf | 0.5202 |
| 0.3024 | 12.58 | 7800 | inf | 0.5307 |
| 0.2568 | 13.06 | 8100 | inf | 0.5243 |
| 0.1641 | 13.55 | 8400 | inf | 0.5073 |
| 0.2637 | 14.03 | 8700 | inf | 0.5015 |
| 0.1778 | 14.52 | 9000 | inf | 0.4892 |
| 0.0874 | 15.0 | 9300 | inf | 0.4885 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
albertqueralto/ppo-CartPole-v1 | albertqueralto | 2023-02-19T11:26:41Z | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T11:13:43Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 153.70 +/- 61.27
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'albertqueralto/ppo-CartPole-v1'
'token': '<redacted>'
'batch_size': 512
'minibatch_size': 128}
```
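The two derived values at the end follow from the settings above:
```python
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 512
minibatch_size = batch_size // num_minibatches  # 128
```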
|
ashrek/dqn-SpaceInvadersNoFrameskip-v4 | ashrek | 2023-02-19T11:02:13Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-28T15:06:30Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 591.00 +/- 99.19
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ashrek -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ashrek -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ashrek
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
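Outside the Zoo scripts, the checkpoint can also be loaded directly with SB3 (a sketch; the filename is assumed from the Zoo's naming convention, and evaluation additionally needs the Atari wrappers listed above):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename -- check the repo's file list
checkpoint = load_from_hub("ashrek/dqn-SpaceInvadersNoFrameskip-v4", "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint)
```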
|
lauer/distilbert-base-uncased-finetuned-clinc | lauer | 2023-02-19T10:53:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-19T09:56:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
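A minimal way to query it, assuming the standard transformers pipeline (the example utterance is hypothetical):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="lauer/distilbert-base-uncased-finetuned-clinc")
print(clf("Please transfer money to my savings account."))  # clinc_oos covers intent classification across many domains
```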
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
MatthewCanada/instruct-pix2pix-00-22000.safetensors | MatthewCanada | 2023-02-19T10:09:52Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-02-19T10:09:02Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
[More Information Needed]
### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
albertqueralto/dqn-SpaceInvadersNoFrameskip-v4 | albertqueralto | 2023-02-19T10:01:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T10:00:52Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 497.50 +/- 83.76
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga albertqueralto -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga albertqueralto -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga albertqueralto
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jaimin/image_caption | jaimin | 2023-02-19T09:37:11Z | 9 | 2 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"image-captioning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2023-02-19T09:25:59Z | ---
tags:
- image-to-text
- image-captioning
license: apache-2.0
---
# Sample running code
```python
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer
import torch
from PIL import Image
model = VisionEncoderDecoderModel.from_pretrained("jaimin/image_caption")
feature_extractor = ViTFeatureExtractor.from_pretrained("jaimin/image_caption")
tokenizer = AutoTokenizer.from_pretrained("jaimin/image_caption")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
max_length = 16
num_beams = 4
gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
def predict_step(image_paths):
    images = []
    for image_path in image_paths:
        i_image = Image.open(image_path)
        if i_image.mode != "RGB":
            i_image = i_image.convert(mode="RGB")
        images.append(i_image)

    pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)

    output_ids = model.generate(pixel_values, **gen_kwargs)

    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]
    return preds
```
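For example (the image path is a placeholder):
```python
preds = predict_step(["example.jpg"])  # hypothetical local image file
print(preds)
```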
# Sample running code using transformers pipeline
```python
from transformers import pipeline
image_to_text = pipeline("image-to-text", model="jaimin/image_caption")
``` |
jamesthong/dqn-SpaceInvadersNoFrameskip-v4 | jamesthong | 2023-02-19T09:36:06Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T09:35:32Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 383.50 +/- 149.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mikato -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mikato -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mikato
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
pranked03/ddpm-pokemon | pranked03 | 2023-02-19T09:33:04Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"Image Generation",
"Diffusers",
"unconditional-image-generation",
"dataset:lambdalabs/pokemon-blip-captions",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-02-18T16:03:05Z | ---
datasets:
- lambdalabs/pokemon-blip-captions
pipeline_tag: unconditional-image-generation
tags:
- Image Generation
- Diffusers
--- |
mallycrip/CartPole-v1 | mallycrip | 2023-02-19T09:19:04Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T08:35:26Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 186.50 +/- 15.91
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Seungjun/t5-small-finetuned-epoch15-finetuned-epoch30 | Seungjun | 2023-02-19T09:15:41Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-19T08:16:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-epoch15-finetuned-epoch30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-epoch15-finetuned-epoch30
This model is a fine-tuned version of [Seungjun/t5-small-finetuned-epoch15](https://huggingface.co/Seungjun/t5-small-finetuned-epoch15) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4083
- Rouge1: 31.0064
- Rouge2: 19.0446
- Rougel: 27.7086
- Rougelsum: 29.5158
- Gen Len: 18.9941
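The ROUGE metrics suggest a summarization fine-tune; a usage sketch under that assumption:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Seungjun/t5-small-finetuned-epoch15-finetuned-epoch30")
text = "Long input document ..."  # placeholder
print(summarizer(text, max_length=60)[0]["summary_text"])
```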
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6224 | 1.0 | 765 | 1.4499 | 30.3772 | 18.075 | 26.941 | 28.8424 | 18.9915 |
| 1.586 | 2.0 | 1530 | 1.4403 | 30.4972 | 18.3407 | 27.1242 | 29.0417 | 18.9908 |
| 1.5684 | 3.0 | 2295 | 1.4323 | 30.6617 | 18.4827 | 27.2642 | 29.2175 | 18.9921 |
| 1.5622 | 4.0 | 3060 | 1.4300 | 30.7155 | 18.5604 | 27.3201 | 29.2191 | 18.9941 |
| 1.5447 | 5.0 | 3825 | 1.4229 | 30.7883 | 18.7051 | 27.379 | 29.2824 | 18.9941 |
| 1.5382 | 6.0 | 4590 | 1.4199 | 30.7555 | 18.7235 | 27.4249 | 29.2612 | 18.9941 |
| 1.5303 | 7.0 | 5355 | 1.4187 | 30.7818 | 18.773 | 27.4232 | 29.2896 | 18.9941 |
| 1.5225 | 8.0 | 6120 | 1.4149 | 30.8854 | 18.8302 | 27.5499 | 29.3993 | 18.9941 |
| 1.5197 | 9.0 | 6885 | 1.4143 | 30.9201 | 18.863 | 27.5918 | 29.4395 | 18.9941 |
| 1.5123 | 10.0 | 7650 | 1.4119 | 30.9469 | 18.9403 | 27.6186 | 29.4314 | 18.9941 |
| 1.5209 | 11.0 | 8415 | 1.4107 | 30.9685 | 18.9431 | 27.6189 | 29.4673 | 18.9941 |
| 1.5091 | 12.0 | 9180 | 1.4095 | 30.9249 | 18.9679 | 27.6257 | 29.4341 | 18.9941 |
| 1.4998 | 13.0 | 9945 | 1.4091 | 30.9911 | 19.0416 | 27.695 | 29.4991 | 18.9941 |
| 1.505 | 14.0 | 10710 | 1.4085 | 30.9942 | 19.0321 | 27.6999 | 29.5025 | 18.9941 |
| 1.4965 | 15.0 | 11475 | 1.4083 | 31.0064 | 19.0446 | 27.7086 | 29.5158 | 18.9941 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
axolotron/ice-cream-animals | axolotron | 2023-02-19T09:04:06Z | 5 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"pytorch",
"dreambooth-hackathon",
"food",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-21T15:36:13Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- pytorch
- diffusers
- dreambooth-hackathon
- food
widget:
- text: a butterfly ice cream, icenimal
---
# Ice_cream_animals DreamBooth Model for Food, trained on a custom dataset.
This is a Stable Diffusion **2.1 768px** model fine-tuned on the food concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a butterfly ice cream, icenimal**
This model was created as part of the DreamBooth Hackathon 🔥.
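A loading sketch with diffusers (the dtype and device choices are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "axolotron/ice-cream-animals", torch_dtype=torch.float16
).to("cuda")
image = pipe("a butterfly ice cream, icenimal").images[0]  # prompt from the widget above
image.save("icenimal.png")
```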
Samples:
A red dragon
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpnijfm61w.png">
A disney princess
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpjitwzmys.png">
A demogorgon
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpbbqipc46.png">
An elephant
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmp5u6oo1j1.png">
A bee
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpdgxfsle_.png">
An axolotl
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpowhy01r_.png">
a cat
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmp07iw9qf1.png">
Pokemon
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmp3q0ru2k_.png">
Donald Trump as ice cream
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpon6crc5e.png">
A butterfly
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpxt87y5n7.png">
|
DaydreamerF/bert-finetuned-TENBOOK-accelerate-evatest | DaydreamerF | 2023-02-19T08:35:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_accelerate",
"zh",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-13T09:33:46Z | ---
language: zh
widget:
- text: "这句话是谁说的?"
context: "“老大,你太牛逼了,把敌人军火库都给炸了,我真的佩服的五体投地,我现在忍不住想看看你藏的东西在哪里,我们快点出发吧。”代号零听完郭旭刚刚的讲述笑的拍手一直叫好。"
- text: "这句话是谁说的?"
context: "“妈,你别哭了,我这不是好着呢吗?”郭旭扶着母亲的肩膀笑着说。"
- text: "这句话是谁说的?"
context: "“总统先生,看来我们这一次在劫难逃了,大乘期的恐怖,远远超出了我们的想象,我还有一些后手能尽量拖延他一点时间,你们先走,我让我的鬼奴随你们去,去这个地方或许能保你们一线生机!”郭旭说完便偷偷地将黑暗空间的阴阳珠交给了陈天。"
- text: "这句话是谁说的?"
context: "“也罢,能活一个是一个吧!他还那么年轻?”却是剑傲天摇了摇头无奈的说道。"
tags:
- generated_from_accelerate
model-index:
- name: bert-finetuned-TENBOOK-accelerate-evatest
results: []
---
# bert-finetuned-TENBOOK-accelerate-evatest
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on a custom dataset.
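A minimal way to query it (a sketch using the standard transformers QA pipeline, with the question and context taken from the widget examples above):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="DaydreamerF/bert-finetuned-TENBOOK-accelerate-evatest")
result = qa(
    question="这句话是谁说的?",
    context="“妈,你别哭了,我这不是好着呢吗?”郭旭扶着母亲的肩膀笑着说。",
)
print(result["answer"], result["score"])
``` |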
cahya/indochat-tiny | cahya | 2023-02-19T06:47:18Z | 11 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"text generation",
"causal-lm",
"id",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-12T19:21:45Z | ---
language:
- id
- en
tags:
- text generation
- pytorch
- causal-lm
license: creativeml-openrail-m
metrics:
- perplexity
pipeline_tag: text-generation
---
# IndoChat-Tiny
This model is a bilingual GPT-2 model fine-tuned on an instruction dataset (~100K English instructions and their ~100K Indonesian translations).
The base model was a GPT2-Medium (345M params) pretrained on 75GB of Indonesian (99%) and English (1%) text.
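A quick way to try it (a sketch; the Indonesian prompt is a made-up example):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="cahya/indochat-tiny")
print(generator("Bagaimana cara membuat kopi?", max_new_tokens=64)[0]["generated_text"])
``` |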
Seungjun/t5-small-finetuned-t5-epoch5 | Seungjun | 2023-02-19T06:34:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-19T06:12:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-t5-epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-t5-epoch5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5359
- Rouge1: 29.8849
- Rouge2: 17.4399
- Rougel: 26.3643
- Rougelsum: 28.3764
- Gen Len: 18.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9706 | 1.0 | 765 | 1.6150 | 28.6734 | 16.3753 | 25.3 | 27.2566 | 18.983 |
| 1.759 | 2.0 | 1530 | 1.5689 | 29.4543 | 16.9803 | 25.9821 | 27.9832 | 18.9895 |
| 1.731 | 3.0 | 2295 | 1.5487 | 29.694 | 17.2806 | 26.2035 | 28.2027 | 18.9836 |
| 1.7108 | 4.0 | 3060 | 1.5389 | 29.9064 | 17.4929 | 26.4006 | 28.3983 | 18.9876 |
| 1.7045 | 5.0 | 3825 | 1.5359 | 29.8849 | 17.4399 | 26.3643 | 28.3764 | 18.9869 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
jxiao/poca-SoccerTwos | jxiao | 2023-02-19T06:32:09Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-19T06:32:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub; see the documentation link above.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: jxiao/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Knight85/PPO-PPO-LunarLander-v2 | Knight85 | 2023-02-19T05:45:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T05:44:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.90 +/- 19.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename -- adjust to the actual .zip stored in the repo
checkpoint = load_from_hub("Knight85/PPO-PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Karmvir-Phogat/PPO-LunarLander-v2 | Karmvir-Phogat | 2023-02-19T05:24:41Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T05:20:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.58 +/- 18.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename -- adjust to the actual .zip stored in the repo
checkpoint = load_from_hub("Karmvir-Phogat/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pnparam/loso_F02 | pnparam | 2023-02-19T05:08:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-19T04:14:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: loso_F02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loso_F02
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0784
- Wer: 1.3311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.0203 | 0.96 | 500 | 3.2523 | 1.0 |
| 2.6567 | 1.92 | 1000 | 1.7631 | 2.44 |
| 1.1186 | 2.88 | 1500 | 0.4121 | 2.4 |
| 0.3969 | 3.84 | 2000 | 0.1705 | 1.4533 |
| 0.1635 | 4.8 | 2500 | 0.0970 | 1.68 |
| 0.0915 | 5.76 | 3000 | 0.0874 | 1.4267 |
| 0.0609 | 6.72 | 3500 | 0.0784 | 1.3311 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
leo9960/Pano-Diffusion | leo9960 | 2023-02-19T04:11:01Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-19T04:11:01Z | ---
license: creativeml-openrail-m
---
|
akmalmasud96/wav2vec2-large-xlsr-53-ur | akmalmasud96 | 2023-02-19T03:49:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-19T00:40:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-ur
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ur
split: test
args: ur
metrics:
- name: Wer
type: wer
value: 0.4816893775162589
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-ur
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.4817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 12
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0981 | 0.48 | 300 | inf | 0.9981 |
| 2.0031 | 0.97 | 600 | inf | 0.8283 |
| 0.7476 | 1.45 | 900 | inf | 0.6584 |
| 0.8585 | 1.94 | 1200 | inf | 0.5823 |
| 0.4978 | 2.42 | 1500 | inf | 0.5564 |
| 0.5423 | 2.9 | 1800 | inf | 0.5209 |
| 0.3504 | 3.39 | 2100 | inf | 0.5396 |
| 0.3185 | 3.87 | 2400 | inf | 0.4865 |
| 0.3337 | 4.35 | 2700 | inf | 0.4733 |
| 0.4935 | 4.84 | 3000 | inf | 0.4721 |
| 0.4022 | 5.32 | 3300 | inf | 0.4692 |
| 0.3517 | 5.81 | 3600 | inf | 0.4585 |
| 0.1838 | 6.29 | 3900 | inf | 0.4567 |
| 0.2635 | 6.77 | 4200 | inf | 0.4459 |
| 0.1163 | 7.26 | 4500 | inf | 0.4495 |
| 0.1776 | 7.74 | 4800 | inf | 0.4657 |
| 0.262 | 8.23 | 5100 | inf | 0.4562 |
| 0.1853 | 8.71 | 5400 | inf | 0.4724 |
| 0.3173 | 9.19 | 5700 | inf | 0.4752 |
| 0.4985 | 9.68 | 6000 | inf | 0.4604 |
| 0.3707 | 10.16 | 6300 | inf | 0.4769 |
| 0.4214 | 10.65 | 6600 | inf | 0.5246 |
| 0.3443 | 11.13 | 6900 | inf | 0.5391 |
| 0.3302 | 11.61 | 7200 | inf | 0.5051 |
| 0.327 | 12.1 | 7500 | inf | 0.5389 |
| 0.2489 | 12.58 | 7800 | inf | 0.5355 |
| 0.2328 | 13.06 | 8100 | inf | 0.5111 |
| 0.2488 | 13.55 | 8400 | inf | 0.4794 |
| 0.3255 | 14.03 | 8700 | inf | 0.4959 |
| 0.3056 | 14.52 | 9000 | inf | 0.4895 |
| 0.1758 | 15.0 | 9300 | inf | 0.4817 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hmehta92/LaBSE-ict-content-ep15 | hmehta92 | 2023-02-19T03:33:58Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-19T03:27:49Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2220 with parameters:
```
{'batch_size': 1024, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3330,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ben-yu/LunarLander-v2-ppo | ben-yu | 2023-02-19T03:13:53Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T02:16:03Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 11.52 +/- 57.92
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.00025
'num_envs': 16
'num_steps': 1024
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ben-yu/LunarLander-v2-ppo'
'batch_size': 16384
'minibatch_size': 4096}
```
|
weitf/muscleAmine | weitf | 2023-02-19T02:58:19Z | 0 | 3 | null | [
"art",
"image-to-image",
"region:us"
]
| image-to-image | 2022-10-11T15:36:51Z | ---
pipeline_tag: image-to-image
tags:
- art
---
A hypernetwork trained on よし男's artwork
(reference: https://www.pixiv.net/users/3584828)
For study and personal use only; please do not publish it or use it commercially.
Author: Tongfan Wei ([email protected])
An example generated with the Anything v4.5 base model and the CUGAN upscale model:
 |
sinny/2x | sinny | 2023-02-19T02:57:11Z | 106 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-18T08:09:34Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub; see the documentation link above.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: sinny/2x
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nergaldarski/AnyHentai | nergaldarski | 2023-02-19T02:51:49Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-02-19T02:24:41Z | https://civitai.com/models/5706/anyhentai |
jxiao/ppo-LunarLander-v2 | jxiao | 2023-02-19T02:50:58Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2022-12-17T22:57:23Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -112.58 +/- 59.71
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'jxiao/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
LarryAIDraw/cherryBlossomMix_v10 | LarryAIDraw | 2023-02-19T02:25:45Z | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-18T20:45:26Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/10283/cherry-blossom-mix |
slopezay/q-FrozenLake-v1-4x4-noSlippery | slopezay | 2023-02-19T02:25:26Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T02:25:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="slopezay/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
imar0/dqn-SpaceInvadersNoFrameskip-v4 | imar0 | 2023-02-19T02:22:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-19T02:21:33Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 648.00 +/- 159.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga imar0 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga imar0 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga imar0
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jhonparra18/petro-twitter-assistant-30ep | jhonparra18 | 2023-02-19T02:08:55Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"es",
"dataset:jhonparra18/petro-tweets",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-18T23:43:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: petro-twitter-assistant-30ep
results: []
widget:
- text: Opino que mi gobierno es
datasets:
- jhonparra18/petro-tweets
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# petro-twitter-assistant-30ep
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on the [jhonparra18/petro-tweets](https://huggingface.co/datasets/jhonparra18/petro-tweets) dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
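A minimal way to try the model, using the widget prompt above (a sketch; the generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jhonparra18/petro-twitter-assistant-30ep")
print(generator("Opino que mi gobierno es", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```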
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.123 | 2.3 | 1000 | 3.0761 |
| 2.8048 | 4.6 | 2000 | 3.0394 |
| 2.5904 | 6.9 | 3000 | 3.0743 |
| 2.3804 | 9.2 | 4000 | 3.2378 |
| 2.1736 | 11.49 | 5000 | 3.4025 |
| 1.9736 | 13.79 | 6000 | 3.6284 |
| 1.779 | 16.09 | 7000 | 3.9806 |
| 1.5993 | 18.39 | 8000 | 4.2559 |
| 1.4584 | 20.69 | 9000 | 4.4938 |
| 1.3492 | 22.99 | 10000 | 4.6608 |
| 1.2701 | 25.29 | 11000 | 4.8302 |
| 1.2309 | 27.59 | 12000 | 4.8696 |
| 1.2161 | 29.89 | 13000 | 4.8837 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Zekunli/flan-t5-large-da-multiwoz2.1_fs0.2 | Zekunli | 2023-02-19T01:51:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-18T22:55:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-large-da-multiwoz2.1_fs0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz2.1_fs0.2
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3159
- Accuracy: 45.1554
- Num: 3689
- Gen Len: 15.5213
## Model description
More information needed
## Intended uses & limitations
More information needed
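A minimal smoke test (a sketch only — the input format the model was fine-tuned on is not documented here, so this prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Zekunli/flan-t5-large-da-multiwoz2.1_fs0.2")
print(generator("I am looking for a cheap hotel in the north of town."))
```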
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 0.9653 | 0.28 | 400 | 0.4635 | 31.3166 | 3689 | 15.196 |
| 0.5071 | 0.57 | 800 | 0.4031 | 35.8289 | 3689 | 15.6546 |
| 0.4603 | 0.85 | 1200 | 0.3718 | 37.6313 | 3689 | 15.6511 |
| 0.4219 | 1.13 | 1600 | 0.3577 | 37.9333 | 3689 | 16.5319 |
| 0.3991 | 1.42 | 2000 | 0.3491 | 40.5462 | 3689 | 15.453 |
| 0.394 | 1.7 | 2400 | 0.3409 | 40.9333 | 3689 | 15.5137 |
| 0.3822 | 1.98 | 2800 | 0.3370 | 41.2932 | 3689 | 15.225 |
| 0.3625 | 2.26 | 3200 | 0.3327 | 42.1132 | 3689 | 16.0718 |
| 0.3577 | 2.55 | 3600 | 0.3329 | 42.1372 | 3689 | 15.9973 |
| 0.3644 | 2.83 | 4000 | 0.3303 | 42.2529 | 3689 | 15.6525 |
| 0.349 | 3.11 | 4400 | 0.3256 | 43.2025 | 3689 | 15.6601 |
| 0.3355 | 3.4 | 4800 | 0.3243 | 43.791 | 3689 | 15.5451 |
| 0.338 | 3.68 | 5200 | 0.3231 | 43.5073 | 3689 | 15.7411 |
| 0.3424 | 3.96 | 5600 | 0.3196 | 44.5281 | 3689 | 15.1307 |
| 0.3299 | 4.25 | 6000 | 0.3159 | 45.1554 | 3689 | 15.5213 |
| 0.328 | 4.53 | 6400 | 0.3188 | 43.4699 | 3689 | 15.3849 |
| 0.3204 | 4.81 | 6800 | 0.3159 | 44.7764 | 3689 | 15.8219 |
| 0.3166 | 5.1 | 7200 | 0.3165 | 45.0608 | 3689 | 15.8791 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Ransaka/poca-SoccerTwos | Ransaka | 2023-02-19T01:38:31Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-18T02:54:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Ransaka/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Piro17/hq_fer2013notest | Piro17 | 2023-02-19T01:32:01Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-18T17:36:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hq_fer2013notest
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7052268506235075
- name: Precision
type: precision
value: 0.7048074435355876
- name: Recall
type: recall
value: 0.7052268506235075
- name: F1
type: f1
value: 0.7036260157126459
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hq_fer2013notest
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8294
- Accuracy: 0.7052
- Precision: 0.7048
- Recall: 0.7052
- F1: 0.7036
## Model description
More information needed
## Intended uses & limitations
More information needed
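A minimal usage sketch (the image path is a hypothetical placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Piro17/hq_fer2013notest")
print(classifier("face.jpg"))  # path to a face image
```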
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 17
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.2982 | 1.0 | 353 | 1.2708 | 0.5635 | 0.5107 | 0.5635 | 0.5168 |
| 1.0218 | 2.0 | 706 | 1.0159 | 0.6411 | 0.6397 | 0.6411 | 0.6301 |
| 0.9437 | 3.0 | 1059 | 0.9452 | 0.6631 | 0.6698 | 0.6631 | 0.6556 |
| 0.8282 | 4.0 | 1412 | 0.8873 | 0.6829 | 0.6798 | 0.6829 | 0.6743 |
| 0.7717 | 5.0 | 1765 | 0.8612 | 0.6884 | 0.6888 | 0.6884 | 0.6835 |
| 0.7678 | 6.0 | 2118 | 0.8473 | 0.6985 | 0.6989 | 0.6985 | 0.6966 |
| 0.7096 | 7.0 | 2471 | 0.8363 | 0.7018 | 0.7001 | 0.7018 | 0.6989 |
| 0.6803 | 8.0 | 2824 | 0.8333 | 0.7036 | 0.7036 | 0.7036 | 0.7019 |
| 0.6521 | 9.0 | 3177 | 0.8309 | 0.7050 | 0.7039 | 0.7050 | 0.7028 |
| 0.6671 | 10.0 | 3530 | 0.8294 | 0.7052 | 0.7048 | 0.7052 | 0.7036 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mafwalter/question_v_statement_finetuned_roberta-basev2 | mafwalter | 2023-02-19T00:48:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-18T22:55:40Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: question_v_statement_finetuned_roberta-basev2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_v_statement_finetuned_roberta-basev2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0052
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
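A minimal usage sketch (label names depend on the fine-tuning config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="mafwalter/question_v_statement_finetuned_roberta-basev2")
print(clf("Is this sentence a question"))
print(clf("This sentence is a statement."))
```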
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0077 | 1.0 | 3966 | 0.0055 | 0.9991 |
| 0.0008 | 2.0 | 7932 | 0.0052 | 0.9993 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fghfghgh/pdfdownload | fghfghgh | 2023-02-19T00:36:27Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-02-19T00:32:48Z | https://lookerstudio.google.com/reporting/fec26e0a-f6ed-4a12-a5be-25c48ee41500
https://lookerstudio.google.com/reporting/a5ba5aa5-779c-40da-898f-bee8c704c277
https://lookerstudio.google.com/reporting/c680b70c-5bb1-493b-b12d-1f5c329ca2f8
https://lookerstudio.google.com/reporting/926f6e2f-a201-4dae-9893-c0ad42c4c2ee
https://lookerstudio.google.com/reporting/11e387c8-382c-4330-ae9f-3264bd8cd688
https://lookerstudio.google.com/reporting/404ff4cf-8194-4e26-9e57-715607552d17
https://lookerstudio.google.com/reporting/a00352b4-10b5-4e9c-a021-8d935b558cd3
https://lookerstudio.google.com/reporting/2b7ba0ba-40cd-43d4-9111-7e686c1eba1b
https://lookerstudio.google.com/reporting/7f9874a0-99b4-41e8-a7e4-bba38385c420
https://lookerstudio.google.com/reporting/417763e0-69bb-44d6-a33a-34e07bcbbb30
https://lookerstudio.google.com/reporting/24fcff05-7e02-44ad-89c3-fd078308375a
https://lookerstudio.google.com/reporting/f4c383f5-d60a-440a-b710-8ea763c6b0ab
https://lookerstudio.google.com/reporting/98f59ae6-c82b-4b86-a9c2-72eb5a37d3d4
https://lookerstudio.google.com/reporting/29bc4cf6-b3ff-461f-afb2-4138696e9ff5
https://lookerstudio.google.com/u/0/reporting/fec26e0a-f6ed-4a12-a5be-25c48ee41500/page/DjD
https://lookerstudio.google.com/u/0/reporting/c680b70c-5bb1-493b-b12d-1f5c329ca2f8/page/DjD
https://lookerstudio.google.com/u/0/reporting/11e387c8-382c-4330-ae9f-3264bd8cd688/page/DjD
https://lookerstudio.google.com/u/0/reporting/404ff4cf-8194-4e26-9e57-715607552d17/page/DjD
https://lookerstudio.google.com/u/0/reporting/a00352b4-10b5-4e9c-a021-8d935b558cd3/page/DjD
https://lookerstudio.google.com/u/0/reporting/2b7ba0ba-40cd-43d4-9111-7e686c1eba1b/page/DjD
https://lookerstudio.google.com/u/0/reporting/24fcff05-7e02-44ad-89c3-fd078308375a/page/DjD
https://lookerstudio.google.com/u/0/reporting/a5ba5aa5-779c-40da-898f-bee8c704c277/page/GzfED
https://lookerstudio.google.com/u/0/reporting/7f9874a0-99b4-41e8-a7e4-bba38385c420/page/U0oDD
https://lookerstudio.google.com/u/0/reporting/417763e0-69bb-44d6-a33a-34e07bcbbb30/page/kwoDD
https://lookerstudio.google.com/u/0/reporting/f4c383f5-d60a-440a-b710-8ea763c6b0ab/page/hdjFD
https://lookerstudio.google.com/u/0/reporting/98f59ae6-c82b-4b86-a9c2-72eb5a37d3d4/page/hdjFD
https://lookerstudio.google.com/u/0/reporting/29bc4cf6-b3ff-461f-afb2-4138696e9ff5/page/hdjFD
https://lookerstudio.google.com/s/nIgoCmcvNCk
https://lookerstudio.google.com/s/myd1X3BC7rk
https://lookerstudio.google.com/s/uwQ2PuCTtow
https://lookerstudio.google.com/s/mi4WJ7hjGcs
https://lookerstudio.google.com/s/j682-1zO0tc
https://lookerstudio.google.com/s/mwhtRDxLHaQ
https://lookerstudio.google.com/s/iqXc_JKkS1o
https://lookerstudio.google.com/s/kZK7qXaV1CM
https://lookerstudio.google.com/s/tkzMGnf4qSY
https://lookerstudio.google.com/s/sG0NUbAXIX0
https://lookerstudio.google.com/s/kqRv4vT-Uzw
https://lookerstudio.google.com/s/t5B6XoztkiU
https://lookerstudio.google.com/s/lofaEUKnLc8
https://lookerstudio.google.com/s/mh8wWLy6F-8 |
akmalmasud96/xlsr-53-ur | akmalmasud96 | 2023-02-19T00:25:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-18T23:14:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: xlsr-53-ur
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: ur_pk
split: test
args: ur_pk
metrics:
- name: Wer
type: wer
value: 0.3450557529714496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-53-ur
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6860
- Wer: 0.3451
## Model description
More information needed
## Intended uses & limitations
More information needed
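A minimal usage sketch (the audio filename is a hypothetical placeholder; input should be 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="akmalmasud96/xlsr-53-ur")
print(asr("urdu_sample.wav")["text"])
```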
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 12
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0396 | 1.59 | 300 | 3.0179 | 1.0 |
| 0.4976 | 3.17 | 600 | 0.7037 | 0.5447 |
| 0.3062 | 4.76 | 900 | 0.5557 | 0.4036 |
| 0.2287 | 6.35 | 1200 | 0.5620 | 0.3935 |
| 0.2504 | 7.94 | 1500 | 0.5907 | 0.3677 |
| 0.0633 | 9.52 | 1800 | 0.6239 | 0.3773 |
| 0.0456 | 11.11 | 2100 | 0.6748 | 0.3604 |
| 0.0774 | 12.7 | 2400 | 0.6747 | 0.3552 |
| 0.058 | 14.29 | 2700 | 0.6860 | 0.3451 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
DiegoD616/LunarLander-v2 | DiegoD616 | 2023-02-19T00:24:15Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T23:58:32Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -118.98 +/- 36.13
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
oscarb92/cleanrl-ppo-LunarLander-v2 | oscarb92 | 2023-02-19T00:15:21Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T23:47:42Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -64.99 +/- 26.01
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'oscarb92/cleanrl-ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
sampathlonka/ppo-LunarLander-v2 | sampathlonka | 2023-02-18T23:42:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T22:13:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.35 +/- 13.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="sampathlonka/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jhonparra18/petro-twitter-assistant | jhonparra18 | 2023-02-18T22:55:41Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"es",
"dataset:jhonparra18/petro-tweets",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-18T22:15:51Z | ---
tags:
- generated_from_trainer
model-index:
- name: petro-twitter-assistant
results: []
widget:
- text: Mi gobierno de la Colombia humana es
datasets:
- jhonparra18/petro-tweets
language:
- es
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# petro-twitter-assistant
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on the [jhonparra18/petro-tweets](https://huggingface.co/datasets/jhonparra18/petro-tweets) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1263 | 2.3 | 1000 | 3.0679 |
| 2.8236 | 4.6 | 2000 | 3.0305 |
| 2.6661 | 6.9 | 3000 | 3.0411 |
| 2.5905 | 9.2 | 4000 | 3.0562 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1 |
iubeda/a2c-PandaReachDense-v2 | iubeda | 2023-02-18T22:43:34Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T20:37:37Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.70 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="iubeda/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
cohogain/whisper-large-v2-ga-IE | cohogain | 2023-02-18T22:35:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-19T23:07:43Z | ---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ga-IE
split: test
args: ga-IE
metrics:
- name: Wer
type: wer
value: 32.955865272938446
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0432
- Wer: 32.9559
## Model description
More information needed
## Intended uses & limitations
More information needed
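A minimal usage sketch (the audio filename is a hypothetical placeholder; input should be 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="cohogain/whisper-large-v2-ga-IE")
print(asr("irish_sample.wav")["text"])
```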
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2448 | 1.02 | 1000 | 0.8498 | 41.7538 |
| 0.0367 | 2.04 | 2000 | 0.8609 | 35.7724 |
| 0.0095 | 3.06 | 3000 | 0.9109 | 34.9303 |
| 0.0048 | 4.09 | 4000 | 0.9602 | 34.3496 |
| 0.0009 | 5.11 | 5000 | 1.0041 | 33.2172 |
| 0.0003 | 7.01 | 6000 | 1.0362 | 33.1010 |
| 0.0006 | 8.03 | 7000 | 1.0432 | 32.9559 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
paascorb/practica1_DL | paascorb | 2023-02-18T22:30:15Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2023-02-18T22:30:13Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
zaib32/autotrain-pegasus_jobs_description-3576596204 | zaib32 | 2023-02-18T21:52:39Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:zaib32/autotrain-data-pegasus_jobs_description",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-02-18T21:38:56Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zaib32/autotrain-data-pegasus_jobs_description
co2_eq_emissions:
emissions: 0.11237342972879057
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 3576596204
- CO2 Emissions (in grams): 0.1124
## Validation Metrics
- Loss: 1.169
- Rouge1: 50.657
- Rouge2: 28.360
- RougeL: 39.248
- RougeLsum: 46.279
- Gen Len: 148.200
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zaib32/autotrain-pegasus_jobs_description-3576596204
``` |
antonellaavad/daniels | antonellaavad | 2023-02-18T21:50:33Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-18T21:50:30Z | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: ohxs
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - daniels
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "ohxs" using [DreamBooth](https://dreambooth.github.io/). Some example images follow.
Test prompt: a photo of ohxs person




|
treksis/WebtoonResearch | treksis | 2023-02-18T21:33:34Z | 0 | 1 | null | [
"anime",
"manga",
"manhwa",
"webtoon",
"en",
"ko",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-01T02:27:12Z | ---
license: creativeml-openrail-m
language:
- en
- ko
tags:
- anime
- manga
- manhwa
- webtoon
---
<h1>The goal of this repo is to</h1>
<ul>
<li>Capturing a webtoon character's unique characteristics</li>
<li>Getting a variety of poses, gestures, and actions without losing too many of those characteristics</li>
</ul>
<h3>For the LoRA inference</h3>
<ul>
<li>Current LoRA checkpoints are under development. Instructions will be added soon</li>
<li>For those who want to try it out, I recommend <b>Midnight Mixers</b> as the base model.</li>
<li><b>512 (width) x 640 (height)</b></li>
<li><b>50 steps / 7 CFG</b>. Fewer than 40 steps yields poor quality</li>
<li><b>LoRA weight in the 0.5~0.6 range</b></li>
</ul>
|
averyb123/distilbert-base-uncased-finetuned-squad | averyb123 | 2023-02-18T20:56:14Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-16T06:07:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1534
## Model description
More information needed
## Intended uses & limitations
More information needed
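A minimal usage sketch:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="averyb123/distilbert-base-uncased-finetuned-squad")
result = qa(question="What dataset was the model fine-tuned on?",
            context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.")
print(result["answer"], result["score"])
```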
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2151 | 1.0 | 5533 | 1.1653 |
| 0.954 | 2.0 | 11066 | 1.1236 |
| 0.7472 | 3.0 | 16599 | 1.1534 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Genrry/ppo-LunarLander-v2 | Genrry | 2023-02-18T20:54:15Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T20:53:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.35 +/- 21.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub(repo_id="Genrry/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
danasone/bloom-petals | danasone | 2023-02-18T20:48:26Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-18T20:39:57Z | # BLOOM, a version for Petals
This model is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom)
post-processed to be run at home using the [Petals](https://github.com/bigscience-workshop/petals#readme) swarm.
Please check out:
- The [original model card](https://huggingface.co/bigscience/bloom)
to learn about the model's capabilities, specifications, and terms of use.
- The [Petals repository](https://github.com/bigscience-workshop/petals#readme)
to learn how to install Petals and run this model over the Petals swarm.
We provide minimal code examples below.
## Using the model
```python
from transformers import AutoTokenizer
from petals import DistributedBloomForCausalLM

# The tokenizer was missing in the original snippet; load it alongside the model
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals")
# Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...
```
## Serving the model blocks
```bash
python -m petals.cli.run_server bigscience/bloom-petals
```
|
erud1t3/dqn-SpaceInvadersNoFrameskip-v4 | erud1t3 | 2023-02-18T20:32:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T20:32:10Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 16.50 +/- 13.61
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga erud1t3 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga erud1t3 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga erud1t3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Zangnan/q-Taxi-v3 | Zangnan | 2023-02-18T19:47:32Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T19:47:28Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Zangnan/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LucaReggiani/t5-small-nlpfinalproject4-xsum | LucaReggiani | 2023-02-18T19:42:33Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-18T19:30:45Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LucaReggiani/t5-small-nlpfinalproject4-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LucaReggiani/t5-small-nlpfinalproject4-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0688
- Validation Loss: 2.9609
- Train Rouge1: 22.9985
- Train Rouge2: 5.0413
- Train Rougel: 18.1856
- Train Rougelsum: 18.0816
- Train Gen Len: 18.67
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.98, 'epsilon': 1e-06, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 3.8921 | 3.2708 | 18.8870 | 3.0920 | 14.9668 | 14.9517 | 18.67 | 0 |
| 3.5034 | 3.1209 | 21.5417 | 3.8130 | 16.5211 | 16.5045 | 18.37 | 1 |
| 3.3763 | 3.0605 | 21.0710 | 3.6133 | 15.7808 | 15.7437 | 18.33 | 2 |
| 3.2971 | 3.0305 | 21.6173 | 4.0001 | 16.2502 | 16.2302 | 18.5 | 3 |
| 3.2452 | 3.0086 | 22.8085 | 4.9522 | 17.8831 | 17.7797 | 18.6 | 4 |
| 3.1899 | 2.9920 | 22.7903 | 5.3026 | 17.8844 | 17.8651 | 18.58 | 5 |
| 3.1514 | 2.9775 | 23.0533 | 5.3456 | 18.4312 | 18.3636 | 18.52 | 6 |
| 3.1050 | 2.9686 | 23.0767 | 5.1264 | 18.4552 | 18.3503 | 18.54 | 7 |
| 3.0688 | 2.9609 | 22.9985 | 5.0413 | 18.1856 | 18.0816 | 18.67 | 8 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
caffsean/t5-small-finetune-dzongkha-to-romanized | caffsean | 2023-02-18T19:35:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-18T18:52:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetune-dzongkha-to-romanized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetune-dzongkha-to-romanized
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5253
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 2.1667
## Model description
More information needed
## Intended uses & limitations
More information needed
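For completeness, a usage sketch (given the all-zero ROUGE scores reported below, outputs may not yet be meaningful):
```python
from transformers import pipeline

romanizer = pipeline("text2text-generation", model="caffsean/t5-small-finetune-dzongkha-to-romanized")
print(romanizer("བཀྲ་ཤིས་བདེ་ལེགས།"))
```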
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 90 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| No log | 2.0 | 180 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| No log | 3.0 | 270 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| No log | 4.0 | 360 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| No log | 5.0 | 450 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| 4.7286 | 6.0 | 540 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| 4.7286 | 7.0 | 630 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| 4.7286 | 8.0 | 720 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| 4.7286 | 9.0 | 810 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
| 4.7286 | 10.0 | 900 | 4.5253 | 0.0 | 0.0 | 0.0 | 0.0 | 2.1667 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pcalhoun/gpt-j-6b-limericks-finetuned | pcalhoun | 2023-02-18T19:26:12Z | 16 | 2 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-01-29T20:44:32Z | ---
license: apache-2.0
widget:
- text: "<baked beans =T2R="
example_title: "Generate Rhyme Words"
- text: "<playing baseball: say \\ play \\ home \\ roam \\ day =R2L="
example_title: "Generate First Line"
---
This model is currently being fine-tuned with deepspeed+bf16 weights using the dataset from Robert A. Gonsalves' article "I Once Trained an AI to Rhyme, and It Took GPT-J a Long Time. Since the Colab was slow, I upgraded to Pro. Each limerick cost me a dime."<br>
https://towardsdatascience.com/i-once-trained-an-ai-to-rhyme-and-it-took-gpt-j-a-long-time-de1f98925e17
---
Some examples generated by the 8-bit version of this model (separately fine-tuned on 1 epoch with 1 rtx3090):<br><br>
I've a limerick model file,<br>
Which, when opened, presents a pile<br>
Of bad-to-good verse.<br>
I don't think it's much worse<br>
Than the limerick I wrote in my style.<br>
<br>
On your index cards, write down your need,<br>
And arrange them in order of speed.<br>
When you're done, you'll recall<br>
Which one's quicker than all,<br>
And you'll know which is best, if indeed.<br>
---
Unfortunately, support for 8-bit fine-tuning doesn't seem widely available yet on HF:
*"8-bit state dicts cannot currently be loaded directly into the 8-bit model after being pushed on the Hub. This is due to the fact that the statistics (remember weight.CB and weight.SCB) computed by the model are not currently stored or taken into account inside the state dict, and the Linear8bitLt module does not support this feature yet. We think that having the ability to save that and push it to the Hub might contribute to greater accessibility."*
https://huggingface.co/blog/hf-bitsandbytes-integration#saving-8-bit-state-dicts-on-the-hub
---
Here's what the bf16 model (not the unavailable 8bit model) could do after 160 steps with batch size 24:
Prompt: Baseball<br>
0: fun \ gun \ games \ names \ on<br>
1: games \ rages \ play \ day \ cheers<br>
2: all \ call \ old \ gold \ hell<br>
3: games \ rants \ all \ call \ beers<br>
4: all \ shall \ games \ guys \ ball<br>
5: game \ name \ best \ chest \ fame<br>
6: games \ dreams \ drehs \ prehs \ kwames<br>
7: games \ fears \ yanks \ cheers \ beers<br>
Going through the multi-step process outlined by Robert A. Gonsalves in his article, it is possible to make a very crude limerick-like poem using the new bf16 trained weights, despite the fact that it hasn't seen much phonetic data yet from the training set:
You've got to be careful when you game:<br>
Don't forget that they've got a name<br>
For some of the best<br>
Baseball games on the chest<br>
If you forget, then they'll have your fame.<br>
I have no idea what that means, but it's basically a limerick.
---
Possible improvements to implement:
* Use IPA (or, as R. Gonsalves suggests, use eSpeak) instead of Festival phonetic tokens to incorporate syllable stress.
* Better align the task formatting with the model's tokenization system.
|
jordiclive/instruction-tuned-gpt-neox-20b | jordiclive | 2023-02-18T19:10:39Z | 19 | 11 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-18T18:53:53Z | Experimental 20B instruction-tuned model based on gpt-neox-20b.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
name = "jordiclive/instruction-tuned-gpt-neox-20b"
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto", torch_dtype=torch.float16)  # "auto" replaces the undefined `chip_map`; optionally pass load_in_8bit=True
tokenizer = AutoTokenizer.from_pretrained(name)


def generate_from_model(model, tokenizer):
    # Note: reads the module-level `text` defined below
    encoded_input = tokenizer(text, return_tensors='pt')
    output_sequences = model.generate(
        input_ids=encoded_input['input_ids'].cuda(0),
        do_sample=True,
        max_new_tokens=35,
        num_return_sequences=1,
        top_p=0.95,
        temperature=0.5,
        penalty_alpha=0.6,
        top_k=4,
        output_scores=True,
        return_dict_in_generate=True,
        repetition_penalty=1.03,
        eos_token_id=0,
        use_cache=True
    )
    gen_sequences = output_sequences.sequences[:, encoded_input['input_ids'].shape[-1]:]
    for sequence in gen_sequences:
        new_line = tokenizer.decode(sequence, skip_special_tokens=True)
        print(new_line)


text = "User: Will big tech A.I be adulterated with advertisement?\n\nOA:"
generate_from_model(model, tokenizer)
```
|
vishalghor/t5-small-finetuned-wikisql-sql-nl-nl-sql | vishalghor | 2023-02-18T19:03:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-12T04:25:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-wikisql-sql-nl-nl-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql-sql-nl-nl-sql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2194
- Bleu: 40.1315
- Gen Len: 16.7069
## Model description
More information needed
## Intended uses & limitations
More information needed
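A minimal usage sketch (the task prefix is an assumption borrowed from common WikiSQL fine-tuning scripts):
```python
from transformers import pipeline

nl2sql = pipeline("text2text-generation", model="vishalghor/t5-small-finetuned-wikisql-sql-nl-nl-sql")
print(nl2sql("translate English to SQL: How many singers are older than 56?"))
```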
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2713 | 1.0 | 8097 | 0.2303 | 39.3173 | 16.7176 |
| 0.2549 | 2.0 | 16194 | 0.2194 | 40.1315 | 16.7069 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pnparam/loso_F04 | pnparam | 2023-02-18T19:01:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-18T18:05:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: loso_F04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loso_F04
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0791
- Wer: 1.4780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.9264 | 0.96 | 500 | 3.6742 | 1.0 |
| 2.6962 | 1.91 | 1000 | 1.7830 | 2.6233 |
| 1.1118 | 2.87 | 1500 | 0.5233 | 1.8458 |
| 0.3692 | 3.82 | 2000 | 0.1670 | 1.2423 |
| 0.1671 | 4.78 | 2500 | 0.1289 | 1.3700 |
| 0.0897 | 5.74 | 3000 | 0.1031 | 1.5110 |
| 0.0656 | 6.69 | 3500 | 0.0791 | 1.4780 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
dlicari/lsg16k-Italian-Legal-BERT-SC | dlicari | 2023-02-18T18:53:47Z | 44 | 1 | transformers | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"custom_code",
"it",
"arxiv:2210.15497",
"license:afl-3.0",
"autotrain_compatible",
"region:us"
]
| fill-mask | 2022-12-17T16:01:26Z | ---
license: afl-3.0
language:
- it
---
<img src="https://huggingface.co/dlicari/lsg16k-Italian-Legal-BERT/resolve/main/ITALIAN_LEGAL_BERT-LSG.jpg" width="600"/>
# LSG16K-Italian-LEGAL-BERT
[Local-Sparse-Global](https://arxiv.org/abs/2210.15497) version of [ITALIAN-LEGAL-BERT-SC](https://huggingface.co/dlicari/Italian-Legal-BERT-SC), obtained by replacing the full attention in the encoder with LSG attention via the LSG converter script (https://github.com/ccdv-ai/convert_checkpoint_to_lsg). We used LSG attention with a 16,384-token maximum sequence length, 7 global tokens, a local block size of 128, a sparse block size of 128, a sparsity factor of 2, and the 'norm' sparse selection pattern (select the highest-norm tokens).
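Because the converted checkpoint ships its LSG attention as custom code, loading it requires `trust_remote_code` (a minimal sketch):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("dlicari/lsg16k-Italian-Legal-BERT-SC")
model = AutoModelForMaskedLM.from_pretrained(
    "dlicari/lsg16k-Italian-Legal-BERT-SC", trust_remote_code=True
)
```
|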
Lakoc/PPO8.1-LunarLander-v2 | Lakoc | 2023-02-18T18:35:53Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T18:35:47Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -104.13 +/- 65.72
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 16
'num_steps': 1024
'anneal_lr': True
'gae': True
'gamma': 0.98
'gae_lambda': 0.98
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Lakoc/PPO8.1-LunarLander-v2'
'batch_size': 16384
'minibatch_size': 4096}
```
|
Lakoc/PPO8-LunarLander-v2 | Lakoc | 2023-02-18T18:28:07Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T18:24:03Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -122.19 +/- 67.69
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 16
'num_steps': 1024
'anneal_lr': True
'gae': True
'gamma': 0.98
'gae_lambda': 0.98
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Lakoc/PPO8.1-LunarLander-v2'
'batch_size': 16384
'minibatch_size': 4096}
```
|
seungwoos/q-FrozenLake-v1-4x4-noSlippery | seungwoos | 2023-02-18T18:18:53Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T18:18:48Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="seungwoos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jackmedda/q-Taxi-v3 | jackmedda | 2023-02-18T18:15:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T17:25:49Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jackmedda/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dlicari/distil-ita-legal-bert | dlicari | 2023-02-18T18:14:49Z | 59 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:afl-3.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-12-10T10:25:42Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: afl-3.0
---
<img src="https://huggingface.co/dlicari/distil-ita-legal-bert/resolve/main/ITALIAN_LEGAL_BERT-DI.jpg" width="600"/>
# DISTIL-ITA-LEGAL-BERT
We used knowledge distillation to create a fast, lightweight student model with only 4 Transformer layers,
capable of producing sentence embeddings similar to those produced by the more complex
[ITALIAN-LEGAL-BERT](https://huggingface.co/dlicari/Italian-Legal-BERT) teacher model.
It was optimized on the ITALIAN-LEGAL-BERT training set (3.7 GB) using the Sentence-BERT library, minimizing the mean squared error (MSE) between its embeddings
and those produced by the teacher model.
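As a rough illustration (not the author's actual training script; the student checkpoint name and the corpus are placeholders, while the batch size and epochs match the training details below), MSE-based distillation with sentence-transformers looks roughly like this:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# The teacher produces target embeddings; the student is trained to match them.
teacher = SentenceTransformer('dlicari/Italian-Legal-BERT')
student = SentenceTransformer('my-4-layer-student')  # hypothetical student checkpoint

train_sentences = ["..."]  # placeholder for the 3.7 GB training corpus
labels = teacher.encode(train_sentences)
examples = [InputExample(texts=[s], label=l) for s, l in zip(train_sentences, labels)]
loader = DataLoader(examples, shuffle=True, batch_size=24)

student.fit(train_objectives=[(loader, losses.MSELoss(model=student))], epochs=4)
```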
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('dlicari/distil-ita-legal-bert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('dlicari/distil-ita-legal-bert')
model = AutoModel.from_pretrained('dlicari/distil-ita-legal-bert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=dlicari/distil-ita-legal-bert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 409633 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Petro28/Cyber_028 | Petro28 | 2023-02-18T18:01:46Z | 0 | 0 | allennlp | [
"allennlp",
"finance",
"text-classification",
"aa",
"dataset:gsdf/EasyNegative",
"license:openrail",
"region:us"
]
| text-classification | 2023-02-18T18:01:03Z | ---
license: openrail
datasets:
- gsdf/EasyNegative
language:
- aa
metrics:
- bertscore
library_name: allennlp
pipeline_tag: text-classification
tags:
- finance
--- |
mahmoud-mohey/a2c-PandaReachDense-v2 | mahmoud-mohey | 2023-02-18T17:53:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T17:51:18Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.91 +/- 0.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env.zip` naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the common "{algo}-{env}.zip" convention.
checkpoint = load_from_hub(repo_id="mahmoud-mohey/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
huggingtweets/elonmusk-svembu | huggingtweets | 2023-02-18T17:50:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-18T17:49:20Z | ---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-svembu/1676742622036/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1568853371146338308/w87i8uhE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Sridhar Vembu</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-svembu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Sridhar Vembu.
| Data | Elon Musk | Sridhar Vembu |
| --- | --- | --- |
| Tweets downloaded | 3193 | 3248 |
| Retweets | 174 | 264 |
| Short tweets | 1048 | 45 |
| Tweets kept | 1971 | 2939 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4x30aqaf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-svembu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ryim7xj2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ryim7xj2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-svembu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phd411r1/SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3 | phd411r1 | 2023-02-18T17:47:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-18T17:15:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3
This model is a fine-tuned version of [SajjadAyoubi/xlm-roberta-large-fa-qa](https://huggingface.co/SajjadAyoubi/xlm-roberta-large-fa-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4424 | 1.0 | 1500 | 2.0999 |
| 1.8186 | 2.0 | 3000 | 1.2042 |
| 1.2822 | 3.0 | 4500 | 0.8894 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
neatbullshit/dqn-SpaceInvadersNoFrameskip-v4 | neatbullshit | 2023-02-18T17:33:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T17:27:40Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 194.00 +/- 131.47
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga neatbullshit -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga neatbullshit -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga neatbullshit
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ArneL2206/ppo-LunarLander-v2 | ArneL2206 | 2023-02-18T17:21:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-10T21:25:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 295.53 +/- 17.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env.zip` naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the common "{algo}-{env}.zip" convention.
checkpoint = load_from_hub(repo_id="ArneL2206/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JUNGU/Taxi-v3 | JUNGU | 2023-02-18T17:03:33Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T17:03:24Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="JUNGU/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jackmedda/q-FrozenLake-v1-4x4-noSlippery | jackmedda | 2023-02-18T17:02:31Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T17:02:27Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="jackmedda/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JUNGU/q-FrozenLake-v1-4x4-noSlippery | JUNGU | 2023-02-18T17:01:27Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T17:01:19Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="JUNGU/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MoonKBR/ppo-LunarLander-v2 | MoonKBR | 2023-02-18T17:00:16Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T16:15:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.21 +/- 23.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env.zip` naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the common "{algo}-{env}.zip" convention.
checkpoint = load_from_hub(repo_id="MoonKBR/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rdesarz/dqn-atari | rdesarz | 2023-02-18T16:50:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T16:50:07Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 607.00 +/- 169.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rdesarz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rdesarz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rdesarz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
inkoziev/paraphraser | inkoziev | 2023-02-18T16:49:04Z | 25 | 4 | transformers | [
"transformers",
"pytorch",
"gpt2",
"paraphrasing",
"seq2seq",
"ru",
"dataset:inkoziev/paraphrases",
"license:cc-by-nc-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2023-01-05T09:17:17Z | ---
language: ru
license: cc-by-nc-4.0
tags:
- paraphrasing
- seq2seq
datasets:
- inkoziev/paraphrases
---
## Poetic paraphraser
This is a generative model based on ```sberbank-ai/rugpt3large_based_on_gpt2```, fine-tuned
on the paraphrase dataset [inkoziev/paraphrases](https://huggingface.co/datasets/inkoziev/paraphrases).
It was developed for use in a [generative poetry project](https://github.com/Koziev/verslibre).
The code for training and using the paraphraser is available in the repository [https://github.com/Koziev/paraphraser](https://github.com/Koziev/paraphraser).
### Characteristics of the paraphrases
Note that the model is **not intended** for use cases that require
especially careful handling of named entities. Since it causes no particular problems in poetry (moreover,
in some usage scenarios it is even desirable) when a paraphrase loses or adds some semantics relative to the source text, the training dataset,
and the model built on it, may confuse days of the week and names, add something of its own, or be metaphorical or allegorical.
### Fine-tuning methodology
The training dataset contains negative paraphrase examples, and I use them together with the correct examples during fine-tuning,
feeding them to the classification head of [GPT2DoubleHeadsModel](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2DoubleHeadsModel).
The fine-tuning code is available [here](https://github.com/Koziev/paraphraser/blob/main/train_paraphraser_with_gpt2doublehead.py).
This approach to fine-tuning turned out to be better than two alternatives:
1) the default fine-tuning approach, in which the GPT model is simply trained on texts consisting of the source text and the paraphrase
separated by a special token. In this approach the model is also trained on the prompt tokens, which may be undesirable.
2) a variation of the first approach, in which the prompt tokens (the source text) are excluded from backpropagation by
setting labels=-100 ([code](https://github.com/Koziev/paraphraser/blob/main/finetune_paraphraser_with_prompt_masking.py)).
As the metric for comparing the approaches and for choosing the number of incorrect paraphrase variants in GPT2DoubleHeadsModel,
I used a combination of:
1) the similarity of the embedding vectors of the source text and the generated paraphrase. The vectors are obtained with the
```sberbank-ai/sbert_large_mt_nlu_ru``` model. I did not use the [critic model](https://huggingface.co/inkoziev/sbert_synonymy),
since it was trained on the same dataset.
2) discounting the result of (1) by character-level similarity (3-grams) using the Jaccard coefficient. This penalizes word-reordering
paraphrases, verbatim reproduction of the source text, and minor rewrites.
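The exact discounting formula is not given; a multiplicative discount is one plausible reading. A rough sketch (the function names and the combination rule are assumptions):
```python
import numpy as np

def char_ngrams(text, n=3):
    return {text[i:i + n] for i in range(max(0, len(text) - n + 1))}

def jaccard(a, b, n=3):
    sa, sb = char_ngrams(a, n), char_ngrams(b, n)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

def paraphrase_score(src_vec, par_vec, src_text, par_text):
    # Semantic similarity of the sbert embeddings...
    cos = float(np.dot(src_vec, par_vec) /
                (np.linalg.norm(src_vec) * np.linalg.norm(par_vec)))
    # ...discounted by character 3-gram overlap, penalizing word
    # reorderings, verbatim copies of the source and minor rewrites.
    return cos * (1.0 - jaccard(src_text, par_text))
```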
### Input format
The model takes as input the source text with a ```<s>``` token added at the beginning and a ```<sep>``` token at the end, for example:
```
input_text = '<s>Мороз и солнце, день чудесный<sep>'
```
The generation result will contain the text with a ```</s>``` token, which marks the end of the sequence.
### Usage example
The following code lets you enter a short sentence in the console
and see the model's paraphrase of it:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/paraphraser"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
while True:
seed = input(':> ').strip()
encoded_prompt = tokenizer.encode("<s>" + seed + "<sep>", add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt,
max_length=100,
typical_p=0.85,
top_k=0,
top_p=1.0,
do_sample=True,
num_return_sequences=10,
pad_token_id=tokenizer.pad_token_id)
for o in output_sequences:
text = tokenizer.decode(o.tolist(), clean_up_tokenization_spaces=True)
text = text[text.index('<sep>') + 5:]
text = text[: text.find('</s>')]
print(text)
```
|
JUNGU/ppo-LunarLander-v2 | JUNGU | 2023-02-18T16:47:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T22:19:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.62 +/- 26.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env.zip` naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the common "{algo}-{env}.zip" convention.
checkpoint = load_from_hub(repo_id="JUNGU/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
phd411r1/SajjadAyoubi_xlm-roberta-large-fa-qa-finetune_on_hoshfa | phd411r1 | 2023-02-18T16:25:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-18T16:08:03Z | ---
tags:
- generated_from_trainer
model-index:
- name: SajjadAyoubi_xlm-roberta-large-fa-qa-finetune_on_hoshfa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SajjadAyoubi_xlm-roberta-large-fa-qa-finetune_on_hoshfa
This model is a fine-tuned version of [SajjadAyoubi/xlm-roberta-large-fa-qa](https://huggingface.co/SajjadAyoubi/xlm-roberta-large-fa-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2718 | 1.0 | 2249 | 1.4810 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jackmedda/ppo-Huggy | jackmedda | 2023-02-18T16:22:58Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-18T15:01:42Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: jackmedda/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CoreyMorris/lander-delete-me | CoreyMorris | 2023-02-18T16:12:47Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T16:07:28Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -105.96 +/- 18.63
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 16
'num_steps': 1024
'anneal_lr': True
'gamma': 0.999
'gae_lambda': 0.98
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'CoreyMorris/lander-delete-me'
'batch_size': 16384
'minibatch_size': 4096}
```
|
wang12s/Nullmix | wang12s | 2023-02-18T16:02:28Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-02-18T09:03:59Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
[More Information Needed]
### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
AntiSquid/Reinforce-Pixelcopter-PLE-v0 | AntiSquid | 2023-02-18T15:51:59Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T11:58:04Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.20 +/- 30.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EllenWST/Cindy | EllenWST | 2023-02-18T15:15:12Z | 0 | 0 | null | [
"ab",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"license:openrail",
"region:us"
]
| null | 2023-02-18T15:13:01Z | ---
license: openrail
datasets:
- Gustavosta/Stable-Diffusion-Prompts
language:
- ab
--- |
LucaReggiani/t5-small-nlpfinalproject3-xsum | LucaReggiani | 2023-02-18T14:49:19Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-18T14:37:47Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LucaReggiani/t5-small-nlpfinalproject3-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LucaReggiani/t5-small-nlpfinalproject3-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9916
- Validation Loss: 2.9840
- Train Rouge1: 22.3282
- Train Rouge2: 4.7253
- Train Rougel: 17.9286
- Train Rougelsum: 17.9126
- Train Gen Len: 18.54
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.98, 'epsilon': 1e-06, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 3.7453 | 3.2008 | 19.6619 | 3.4413 | 15.6174 | 15.6444 | 18.23 | 0 |
| 3.3973 | 3.1041 | 21.1867 | 4.1818 | 16.5584 | 16.4568 | 18.59 | 1 |
| 3.2886 | 3.0600 | 21.8364 | 4.3416 | 16.6696 | 16.6382 | 18.5 | 2 |
| 3.2216 | 3.0323 | 23.5970 | 5.3080 | 18.4737 | 18.3755 | 18.49 | 3 |
| 3.1462 | 3.0174 | 23.0720 | 5.3486 | 18.5011 | 18.4635 | 18.62 | 4 |
| 3.0860 | 3.0017 | 22.3949 | 4.7088 | 17.7759 | 17.7328 | 18.51 | 5 |
| 3.0436 | 2.9890 | 22.8096 | 4.9911 | 18.1200 | 18.0347 | 18.47 | 6 |
| 2.9916 | 2.9840 | 22.3282 | 4.7253 | 17.9286 | 17.9126 | 18.54 | 7 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hpoddar/ppo-Huggy | hpoddar | 2023-02-18T14:46:43Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-18T14:46:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: hpoddar/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rdesarz/q-FrozenLake-v1-4x4-noSlippery | rdesarz | 2023-02-18T14:44:13Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T14:44:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rdesarz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ibadrehman/ppo-Pyramids | ibadrehman | 2023-02-18T14:33:27Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-18T14:33:21Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: ibadrehman/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NielsV/distilbart-cnn-6-6-reddit | NielsV | 2023-02-18T14:27:47Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-18T22:28:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- reddit
metrics:
- rouge
model-index:
- name: distilbart-cnn-6-6-reddit
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: reddit
type: reddit
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1849
---
# distilbart-cnn-6-6-reddit
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on the reddit dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9883
- Rouge1: 0.1849
- Rouge2: 0.0437
- Rougel: 0.1273
- Rougelsum: 0.1601
## More information and training script
You can find more information about how this model was trained, including the actual training script in [this github repository](https://github.com/VerleysenNiels/arxiv-summarizer).
## Training and evaluation data
I split the data into a training and a test set. The test size is 1% of the total dataset, which comes to about 38k samples.
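A split like this can be made with the `datasets` library (a sketch; the linked repository has the actual code):
```python
from datasets import load_dataset

# Hold out 1% for evaluation, as described above (the seed is an assumption).
dataset = load_dataset("reddit", split="train")
splits = dataset.train_test_split(test_size=0.01, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```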
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.13 | 1.0 | 238116 | 3.2736 | 0.1773 | 0.0392 | 0.1223 | 0.1539 |
| 2.8586 | 2.0 | 476232 | 3.0449 | 0.1846 | 0.0431 | 0.127 | 0.1601 |
| 2.7844 | 3.0 | 714348 | 2.9883 | 0.1849 | 0.0437 | 0.1273 | 0.1601 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Zhengrui/bert2bert_redditJoke | Zhengrui | 2023-02-18T14:24:20Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:SocialGrep/one-million-reddit-jokes",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-18T12:25:14Z | ---
license: apache-2.0
datasets:
- SocialGrep/one-million-reddit-jokes
language:
- en
pipeline_tag: text2text-generation
--- |
saikiranp/ppo-LunarLandr-v2-CleanRL | saikiranp | 2023-02-18T14:02:48Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T13:28:07Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -26.69 +/- 86.05
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.00025
'num_envs': 16
'num_steps': 1024
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'saikiranp/ppo-LunarLandr-v2-CleanRL'
'batch_size': 16384
'minibatch_size': 4096}
```
|
ZhihongDeng/a2c-PandaReachDense-v2 | ZhihongDeng | 2023-02-18T13:55:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T13:53:34Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.68 +/- 0.73
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env.zip` naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the common "{algo}-{env}.zip" convention.
checkpoint = load_from_hub(repo_id="ZhihongDeng/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Awaaaaa/ba | Awaaaaa | 2023-02-18T13:48:15Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2023-02-18T13:44:18Z | ---
license: bigscience-openrail-m
---
|
ibadrehman/ppo-SnowballTarget | ibadrehman | 2023-02-18T13:35:09Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-18T13:35:04Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: ibadrehman/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Slitwrist/sd-1-5-brenda | Slitwrist | 2023-02-18T13:29:35Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-18T13:26:43Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD-1-5-Brenda Dreambooth model trained by Slitwrist with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
Sample pictures of this concept:
|
r1ck/v1-ppo-LunarLander-v2 | r1ck | 2023-02-18T13:26:04Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T13:23:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.13 +/- 18.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `algo-env.zip` naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the common "{algo}-{env}.zip" convention.
checkpoint = load_from_hub(repo_id="r1ck/v1-ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Taratata/Reinforce-Pixelcopter-PLE-v0 | Taratata | 2023-02-18T13:24:38Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-18T13:11:56Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.70 +/- 27.76
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Rafe350/rafeadtest2023-model1 | Rafe350 | 2023-02-18T13:21:04Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-18T13:19:38Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: rfehx
---
### RafeAdTest2023-Model1 Dreambooth model trained by Rafe350 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
rfehx (use that in your prompt)

|