modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-23 12:29:03) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 492 classes) | tags (sequence, length 1 – 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-23 12:24:08) | card (string, length 11 – 1.01M) |
---|---|---|---|---|---|---|---|---|---|
Weili/vit-base-patch16-224-finetuned-cifar10 | Weili | 2022-12-07T02:42:03Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-07T00:52:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9876
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
- Accuracy: 0.9876
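A minimal inference sketch, assuming the standard `transformers` image-classification pipeline (the image path below is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline(
    "image-classification",
    model="Weili/vit-base-patch16-224-finetuned-cifar10",
)

# Classify a local image; returns a list of {"label", "score"} dicts
predictions = classifier("path/to/image.png")
print(predictions)
```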
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2518 | 1.0 | 390 | 0.0609 | 0.9821 |
| 0.1985 | 2.0 | 780 | 0.0532 | 0.983 |
| 0.197 | 3.0 | 1170 | 0.0427 | 0.9876 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
YoanG/ppo-LunarLander-v2 | YoanG | 2022-12-07T02:39:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T02:38:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.42 +/- 21.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
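In the meantime, a minimal loading sketch (the checkpoint filename below is an assumption; check the repo files for the actual archive name):
```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; the actual archive name may differ
checkpoint = load_from_hub(
    repo_id="YoanG/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy (classic Gym step/reset API)
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```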
|
cyycyy/xlm-roberta-base-finetuned-panx-en | cyycyy | 2022-12-07T02:37:28Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T02:33:13Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6850704225352112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4130
- F1: 0.6851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1435 | 1.0 | 50 | 0.5604 | 0.5493 |
| 0.513 | 2.0 | 100 | 0.4557 | 0.6504 |
| 0.3744 | 3.0 | 150 | 0.4130 | 0.6851 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sd-concepts-library/dulls | sd-concepts-library | 2022-12-07T02:35:30Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-12-07T02:35:22Z | ---
license: mit
---
### dulls on Stable Diffusion
This is the `<dulls-avatar>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
bguan/ppo-Huggy | bguan | 2022-12-07T02:35:20Z | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-07T02:35:14Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: bguan/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sd-dreambooth-library/uploadtest | sd-dreambooth-library | 2022-12-07T02:31:04Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-07T02:31:04Z | ---
license: bigscience-openrail-m
---
|
enzokro/sd-class-butterflies-huber-32 | enzokro | 2022-12-07T02:10:02Z | 6 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-07T02:09:29Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
It was trained with a SmoothL1 loss with Beta = 1 (aka same as Huber Loss).
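As a quick sanity check of that equivalence (a plain-PyTorch sketch, not code from the training run), `SmoothL1Loss(beta=1)` and `HuberLoss(delta=1)` give identical values:
```python
import torch
from torch import nn

pred, target = torch.randn(8), torch.randn(8)

smooth_l1 = nn.SmoothL1Loss(beta=1.0)  # loss used for this model
huber = nn.HuberLoss(delta=1.0)        # standard Huber loss

# With beta == delta == 1 the two criteria coincide exactly
assert torch.allclose(smooth_l1(pred, target), huber(pred, target))
```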
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('enzokro/sd-class-butterflies-huber-32')
image = pipeline().images[0]
image
```
|
cyycyy/xlm-roberta-base-finetuned-panx-de-fr | cyycyy | 2022-12-07T01:22:01Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-07T01:12:57Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1624
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.289 | 1.0 | 715 | 0.1831 | 0.8193 |
| 0.1471 | 2.0 | 1430 | 0.1527 | 0.8507 |
| 0.0938 | 3.0 | 2145 | 0.1624 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
damiah/sd-class-butterflies-32 | damiah | 2022-12-07T01:12:07Z | 1 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-07T01:11:05Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('damiah/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
dzegan/unit1_ppo | dzegan | 2022-12-07T00:56:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T00:56:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.64 +/- 40.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
fezhou/ddpm-butterflies-128 | fezhou | 2022-12-07T00:23:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-12-06T22:30:12Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
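One possible snippet, assuming the repo loads as a standard `DDPMPipeline` (as its tags suggest):
```python
from diffusers import DDPMPipeline

# Load the pipeline from this repo and sample one butterfly image
pipeline = DDPMPipeline.from_pretrained("fezhou/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```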
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/fezhou/ddpm-butterflies-128/tensorboard?#scalars)
|
Nhat1904/Final-32shots-Twitter-Skhead-Train-5epoch-bad | Nhat1904 | 2022-12-07T00:19:17Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-07T00:19:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 400,
"warmup_steps": 40,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Zesty/ppo-LunarLander-v2 | Zesty | 2022-12-07T00:07:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-07T00:07:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.83 +/- 17.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
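A hedged sketch of how a mean-reward figure like the one above can be reproduced (the checkpoint filename is an assumption; check the repo files for the actual name):
```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Assumed filename; the actual archive name may differ
checkpoint = load_from_hub("Zesty/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the policy over a handful of episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```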
|
nelsonsilva/ppo-LunarLander-v2 | nelsonsilva | 2022-12-06T23:48:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T22:44:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.26 +/- 23.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
davidaponte/whisper-small-hindi | davidaponte | 2022-12-06T23:45:11Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T17:22:32Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hindi - David Aponte
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 32.29492931516126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hindi - David Aponte
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4495
- Wer: 32.2949
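A minimal transcription sketch, assuming the standard `transformers` ASR pipeline (the audio path is illustrative):
```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="davidaponte/whisper-small-hindi",
)

# Transcribe a local audio file (16 kHz mono is what Whisper expects)
result = transcriber("path/to/hindi_sample.wav")
print(result["text"])
```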
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0955 | 2.44 | 1000 | 0.3022 | 34.1192 |
| 0.0236 | 4.89 | 2000 | 0.3543 | 32.6759 |
| 0.0018 | 7.33 | 3000 | 0.4257 | 32.8113 |
| 0.0005 | 9.78 | 4000 | 0.4495 | 32.2949 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pratultandon/recipe-nlg-gpt2-ingredient-to-recipe-model | pratultandon | 2022-12-06T23:31:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-06T11:54:35Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-nlg-gpt2-ingredient-to-recipe-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-nlg-gpt2-ingredient-to-recipe-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
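A minimal generation sketch, assuming the usual `transformers` text-generation pipeline (the ingredient-list prompt format is a guess; the card does not document the expected input):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="pratultandon/recipe-nlg-gpt2-ingredient-to-recipe-model",
)

# Prompt format is a guess, not documented in this card
prompt = "chicken, rice, garlic"
output = generator(prompt, max_new_tokens=128, do_sample=True)
print(output[0]["generated_text"])
```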
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
daripaez/ppo_2-LunarLander-v2 | daripaez | 2022-12-06T23:15:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T23:15:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.45 +/- 19.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tomekkorbak/boring_mcclintock | tomekkorbak | 2022-12-06T23:05:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-05T18:00:42Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: boring_mcclintock
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# boring_mcclintock
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.0},
'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'boring_mcclintock',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/c17x87uu |
JoelMendez/hf-diffusion-models-class-1-butterflies-JoelMendez | JoelMendez | 2022-12-06T22:59:52Z | 2 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-06T22:58:52Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('JoelMendez/hf-diffusion-models-class-1-butterflies-JoelMendez')
image = pipeline().images[0]
image
```
|
muhtasham/finetuned-self_mlm_mini | muhtasham | 2022-12-06T22:56:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T22:38:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuned-self_mlm_mini
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8224
- name: F1
type: f1
value: 0.9025460930640913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_mini
This model is a fine-tuned version of [muhtasham/bert-tiny-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-tiny-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6150
- Accuracy: 0.8224
- F1: 0.9025
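A minimal inference sketch, assuming the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="muhtasham/finetuned-self_mlm_mini",
)
print(classifier("A surprisingly tender and well-acted little film."))
```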
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4426 | 2.55 | 500 | 0.4673 | 0.7928 | 0.8844 |
| 0.2845 | 5.1 | 1000 | 0.3099 | 0.8697 | 0.9303 |
| 0.2282 | 7.65 | 1500 | 0.3432 | 0.8589 | 0.9241 |
| 0.1819 | 10.2 | 2000 | 0.2702 | 0.8998 | 0.9472 |
| 0.1461 | 12.76 | 2500 | 0.4852 | 0.8344 | 0.9097 |
| 0.111 | 15.31 | 3000 | 0.6807 | 0.7950 | 0.8858 |
| 0.0883 | 17.86 | 3500 | 0.6150 | 0.8224 | 0.9025 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
zyoscovits/ppo-Huggy | zyoscovits | 2022-12-06T22:13:52Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T22:13:46Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: zyoscovits/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Tomas1234/common_voice | Tomas1234 | 2022-12-06T22:09:19Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"lt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T10:26:56Z | ---
language:
- lt
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small lt - Lithuanian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: None
split: None
args: 'config: lt, split: test'
metrics:
- name: Wer
type: wer
value: 32.49711764004294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small lt - Lithuanian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3840
- Wer: 32.4971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3788 | 0.9 | 500 | 0.4432 | 45.1716 |
| 0.2087 | 1.8 | 1000 | 0.3671 | 37.6456 |
| 0.0961 | 2.7 | 1500 | 0.3548 | 35.5703 |
| 0.0479 | 3.6 | 2000 | 0.3609 | 34.1709 |
| 0.0157 | 4.5 | 2500 | 0.3665 | 33.3400 |
| 0.0089 | 5.4 | 3000 | 0.3775 | 32.7754 |
| 0.0038 | 6.29 | 3500 | 0.3826 | 32.5607 |
| 0.0033 | 7.19 | 4000 | 0.3840 | 32.4971 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tomekkorbak/confident_knuth | tomekkorbak | 2022-12-06T21:38:23Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-05T18:00:30Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: confident_knuth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# confident_knuth
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 0.5, 'beta': 0.1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'confident_knuth',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/q3c975dt |
claudfuen/photorealistic-fuen-v1 | claudfuen | 2022-12-06T21:38:04Z | 524 | 89 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"endpoints-template",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-03T14:18:05Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- endpoints-template
inference: true
--- |
utkarshbelkhede/finbert-sec-10K | utkarshbelkhede | 2022-12-06T21:34:10Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T20:45:12Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finbert
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2184
- Accuracy: 0.8947
- F1: 0.7370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 20 | 0.3729 | 0.8647 | 0.4637 |
| No log | 2.0 | 40 | 0.2622 | 0.8647 | 0.5134 |
| No log | 3.0 | 60 | 0.2184 | 0.8947 | 0.7370 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Nhat1904/16-shot-twitter-2classes-new-1 | Nhat1904 | 2022-12-06T21:24:50Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-06T21:24:36Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 480,
"warmup_steps": 48,
"weight_decay": 0.01
}
```
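For reference, a hypothetical reconstruction of this setup with the sentence-transformers `fit()` API is sketched below; the base checkpoint and the example pairs are placeholders not taken from this card, while the batch size, loss, epochs, warmup steps, learning rate and weight decay follow the parameters above.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder base checkpoint (an MPNet model with 768-dim embeddings is assumed)
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder training pairs scored for cosine-similarity regression
train_examples = [
    InputExample(texts=["first tweet", "second tweet"], label=1.0),
    InputExample(texts=["first tweet", "unrelated tweet"], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=48,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```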
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nidek/ppo-Huggy | nidek | 2022-12-06T21:22:24Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T21:22:17Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: nidek/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dirkvg/ppo-LunarLander-v2 | dirkvg | 2022-12-06T21:22:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T21:21:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo-mlp
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 120.60 +/- 99.05
name: mean_reward
verified: false
---
# **ppo-mlp** Agent playing **LunarLander-v2**
This is a trained model of a **ppo-mlp** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy (filename assumed)
checkpoint = load_from_hub("dirkvg/ppo-LunarLander-v2", "ppo-mlp.zip")
model = PPO.load(checkpoint)
```
|
sayby/PPO-LunarLanderv2 | sayby | 2022-12-06T21:14:00Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T20:46:44Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -57.21 +/- 110.10
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn how to code your own PPO agent and train it, see Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'sayby/PPO-LunarLanderv2'
'batch_size': 512
'minibatch_size': 128}
```
|
EmberrJoel/ppo-LunarLander-v2 | EmberrJoel | 2022-12-06T21:03:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T19:31:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.88 +/- 18.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy (filename assumed)
checkpoint = load_from_hub("EmberrJoel/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tomekkorbak/nervous_wozniak | tomekkorbak | 2022-12-06T21:02:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-05T18:00:30Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: nervous_wozniak
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nervous_wozniak
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'nervous_wozniak',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
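For illustration, a minimal generation sketch mirroring the 'unconditional' scenario settings above (sampling with temperature 0.7 and top-p 0.9, up to 128 tokens); the prompt is arbitrary and loading this checkpoint by its Hub id is assumed to work.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tomekkorbak/nervous_wozniak")
model = AutoModelForCausalLM.from_pretrained("tomekkorbak/nervous_wozniak")

# Sampling parameters taken from the generation scenario config above
inputs = tokenizer("The", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=128,
    min_length=10,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```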
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/qjc0jrdx |
GIanlucaRub/whisper-tiny-it-4 | GIanlucaRub | 2022-12-06T20:47:21Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T15:24:40Z | ---
language:
- it
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny it 4
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: it
split: test[:10%]
args: 'config: it, split: test'
metrics:
- name: Wer
type: wer
value: 41.3546866333888
---
# Whisper Tiny it 4
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7126
- Wer: 41.3547
## Model description
This model is the OpenAI Whisper tiny transformer adapted for Italian audio-to-text transcription. Weight decay is set to 0.1 to cope with overfitting, and the learning rate was set to 5e-5 during hyperparameter tuning, which improved performance on the evaluation set.
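As a rough sketch, those settings together with the hyperparameters listed under Training procedure below would correspond to something like the following `Seq2SeqTrainingArguments`; the output directory is a placeholder and the mapping is an assumption, not the author's actual training script.
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the configuration described in this card
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-it-4",  # placeholder output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    weight_decay=0.1,
    warmup_steps=500,
    max_steps=4000,
    fp16=True,
)
```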
## Intended uses & limitations
The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it)
## Training and evaluation data
The training data is the first 10% of the train and validation splits of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from the Mozilla Foundation.
The evaluation data is the first 10% of the test split of Italian Common Voice.
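One possible way to load these splits with the `datasets` library; the exact split expressions are an assumption based on the description above.
```python
from datasets import load_dataset

# First 10% of train+validation for training, first 10% of test for evaluation (assumed slicing)
train_data = load_dataset(
    "mozilla-foundation/common_voice_11_0", "it", split="train[:10%]+validation[:10%]"
)
test_data = load_dataset(
    "mozilla-foundation/common_voice_11_0", "it", split="test[:10%]"
)
```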
## Training procedure
After loading the pre-trained model, it was fine-tuned on the dataset described above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5919 | 0.95 | 1000 | 0.8049 | 56.4823 |
| 0.3181 | 1.91 | 2000 | 0.7393 | 44.8142 |
| 0.1417 | 2.86 | 3000 | 0.7067 | 42.7482 |
| 0.0627 | 3.82 | 4000 | 0.7126 | 41.3547 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Nhat1904/16-shot-twitter-2classes | Nhat1904 | 2022-12-06T20:46:58Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-06T20:46:46Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 128 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 384,
"warmup_steps": 39,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
muhtasham/whisper-small-tr | muhtasham | 2022-12-06T20:46:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard, whisper-small, mozilla-foundation/common_voice_11_0, turkish, whisper-event",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_11_0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T06:15:26Z | ---
language:
- tr
tags:
- hf-asr-leaderboard
- whisper-small
- mozilla-foundation/common_voice_11_0
- turkish
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Tr - Muhtesem
results: []
--- |
zipp425/synthwavePunk | zipp425 | 2022-12-06T20:45:01Z | 9 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-02T22:06:57Z | ---
license: creativeml-openrail-m
---

Example Prompts available on [Civitai](https://civitai.com/models/1102/synthwavepunk) |
GIanlucaRub/whisper-tiny-it-2 | GIanlucaRub | 2022-12-06T20:37:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T08:24:42Z | ---
language:
- it
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny It 2 - Gianluca Ruberto
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: it
split: test[:10%]
      args: 'config: it, split: test'
metrics:
- name: Wer
type: wer
value: 43.392956184137546
---
# Whisper Tiny It 2 - Gianluca Ruberto
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.711485
- Wer: 43.392956
## Model description
This model is the OpenAI Whisper tiny transformer adapted for Italian audio-to-text transcription. Weight decay is set to 0.3 to cope with overfitting.
## Intended uses & limitations
The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it)
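Outside the web app, a minimal transcription sketch with the `transformers` pipeline could look like this (`sample.wav` is a placeholder for a local audio file):
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="GIanlucaRub/whisper-tiny-it-2")
print(transcriber("sample.wav")["text"])
```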
## Training and evaluation data
The training data is the first 10% of the train and validation splits of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from the Mozilla Foundation.
The evaluation data is the first 10% of the test split of Italian Common Voice.
Unfortunately, weight decay turned out to give slightly worse results on the evaluation set as well.
## Training procedure
After loading the pre-trained model, it was fine-tuned on the dataset described above.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
- weight_decay: 0.3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5837 | 0.95 | 1000 | 0.790046 | 50.6032 |
| 0.4186 | 1.91 | 2000 | 0.730115 | 46.0067 |
| 0.3154 | 2.86 | 3000 | 0.712776 | 44.114 |
| 0.2676 | 3.82 | 4000 | 0.711485 | 43.393 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
utkarshbelkhede/distilbert-sec-10K | utkarshbelkhede | 2022-12-06T20:33:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T18:30:41Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Accuracy: 0.9323
- F1: 0.8258
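A minimal inference sketch with the `transformers` pipeline; the input sentence is a placeholder and the returned label names depend on how the classifier head was configured.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="utkarshbelkhede/distilbert-sec-10K")
print(classifier("Revenue increased 12% year over year, driven by growth in cloud services."))
```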
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 20 | 0.2163 | 0.9398 | 0.8579 |
| No log | 2.0 | 40 | 0.2091 | 0.9323 | 0.8258 |
| No log | 3.0 | 60 | 0.2092 | 0.9323 | 0.8258 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Pi3141/DialoGPT-small-elon | Pi3141 | 2022-12-06T20:21:16Z | 20 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-05T23:22:12Z | ---
tags:
- conversational
---
# DialoGPT model that talks like Elon Musk
Trained on Twitter tweets by Elon Musk. This model will spew meaningless shit about 90% of the time.
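A minimal chat sketch following the standard DialoGPT usage pattern (the prompt is a placeholder and the generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Pi3141/DialoGPT-small-elon")
model = AutoModelForCausalLM.from_pretrained("Pi3141/DialoGPT-small-elon")

# Encode a user message terminated by the end-of-sequence token
input_ids = tokenizer.encode("What do you think about Mars?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```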
Also a very stupid AI |
adachandesu/distilbert-base-uncased-finetuned-PN | adachandesu | 2022-12-06T20:15:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-02T03:16:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-PN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-PN
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.9479
- F1: 0.9057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 42 | 0.2918 | 0.9167 | 0.8571 |
| No log | 2.0 | 84 | 0.2653 | 0.9062 | 0.8163 |
| No log | 3.0 | 126 | 0.2139 | 0.9479 | 0.9057 |
| No log | 4.0 | 168 | 0.2317 | 0.9167 | 0.8462 |
| No log | 5.0 | 210 | 0.2492 | 0.9062 | 0.8235 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
amal94/ppo-Huggy | amal94 | 2022-12-06T20:09:05Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T20:08:59Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: amal94/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sergio-ortiz/ppo-Huggy | sergio-ortiz | 2022-12-06T19:55:59Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T19:55:51Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: sergio-ortiz/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yzhou286/mBio-finetuned-setfit-model | yzhou286 | 2022-12-06T19:53:00Z | 3 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T20:58:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 4.326394417589792e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 30,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
amy-why/vit-base-beans-demo-v5 | amy-why | 2022-12-06T19:51:51Z | 29 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-06T06:56:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0466
- Accuracy: 0.9850
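A minimal inference sketch with the `transformers` image-classification pipeline (`leaf.jpg` is a placeholder for a local bean-leaf photo):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="amy-why/vit-base-beans-demo-v5")
print(classifier("leaf.jpg"))
```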
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1519 | 1.54 | 100 | 0.1535 | 0.9474 |
| 0.0447 | 3.08 | 200 | 0.0466 | 0.9850 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
gamallo/gpt2-galician-alpha | gamallo | 2022-12-06T19:20:49Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-06T16:00:40Z | ---
widget:
- text: "Mariano Rajoy é"
example_title: "Rajoy"
- text: "Francisco Franco foi un dictador de"
example_title: "Franco"
- text: "Rosalía de Castro foi"
example_title: "Rosalia"
- text: "Xosé Manuel Beiras dixo que"
example_title: "Beiras"
---
# GPT-2-Galician (alpha version)
<!-- Provide a quick summary of what the model is/does. [Optional] -->
Model trained on 1.5 GB of Galician text.
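A minimal usage sketch with the `transformers` text-generation pipeline; the sampling settings are illustrative and the prompt is taken from the widget examples above.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gamallo/gpt2-galician-alpha")
print(generator("Rosalía de Castro foi", max_length=50, do_sample=True)[0]["generated_text"])
```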
# Model Details
* Model type: Language model
* Language: gl
* License: cc0-1.0
* Libraries: SimpleTransformers, Pytorch |
burakyldrm/wav2vec2-burak-new-300-v2-8 | burakyldrm | 2022-12-06T19:16:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T06:28:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-burak-new-300-v2-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2841
- Wer: 0.2120
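A minimal inference sketch, assuming the repository ships a matching processor and that the input audio is resampled to 16 kHz (`sample.wav` is a placeholder):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("burakyldrm/wav2vec2-burak-new-300-v2-8")
model = Wav2Vec2ForCTC.from_pretrained("burakyldrm/wav2vec2-burak-new-300-v2-8")

# Load audio at 16 kHz and run greedy CTC decoding
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```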
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 151
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.0739 | 9.43 | 500 | 3.1506 | 1.0 |
| 1.6652 | 18.87 | 1000 | 0.3396 | 0.4136 |
| 0.4505 | 28.3 | 1500 | 0.2632 | 0.3138 |
| 0.3115 | 37.74 | 2000 | 0.2536 | 0.2849 |
| 0.2421 | 47.17 | 2500 | 0.2674 | 0.2588 |
| 0.203 | 56.6 | 3000 | 0.2552 | 0.2471 |
| 0.181 | 66.04 | 3500 | 0.2636 | 0.2595 |
| 0.1581 | 75.47 | 4000 | 0.2527 | 0.2416 |
| 0.1453 | 84.91 | 4500 | 0.2773 | 0.2257 |
| 0.1305 | 94.34 | 5000 | 0.2825 | 0.2257 |
| 0.1244 | 103.77 | 5500 | 0.2754 | 0.2312 |
| 0.1127 | 113.21 | 6000 | 0.2772 | 0.2223 |
| 0.1094 | 122.64 | 6500 | 0.2720 | 0.2223 |
| 0.1033 | 132.08 | 7000 | 0.2863 | 0.2202 |
| 0.099 | 141.51 | 7500 | 0.2853 | 0.2140 |
| 0.0972 | 150.94 | 8000 | 0.2841 | 0.2120 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sergio-ortiz/ppo-LunarLander-v2 | sergio-ortiz | 2022-12-06T18:57:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T18:57:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.38 +/- 21.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy (filename assumed)
checkpoint = load_from_hub("sergio-ortiz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Atenrev/ppo-Huggy | Atenrev | 2022-12-06T18:29:05Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T18:28:59Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Atenrev/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sppm/results | sppm | 2022-12-06T18:28:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T18:10:40Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 223
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
marvmk/whisper-small-gl | marvmk | 2022-12-06T18:00:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"gl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-01T18:16:20Z | ---
language:
- gl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small gl - Galician
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small gl - Galician
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
yzhou286/mBio-setfit-model | yzhou286 | 2022-12-06T17:53:03Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-29T04:31:46Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
zlicastro/zl-ppo-lunar-lander-v2 | zlicastro | 2022-12-06T17:48:10Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T17:11:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.88 +/- 12.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy (filename assumed)
checkpoint = load_from_hub("zlicastro/zl-ppo-lunar-lander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
utkarshbelkhede/distilbert-base-cased | utkarshbelkhede | 2022-12-06T17:44:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T17:36:18Z | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 20 | 0.3885 | 0.8647 | 0.4637 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dung1308/RM_system_not_mixed__NLP_model_90_10_CPU_2_epochs | dung1308 | 2022-12-06T17:21:14Z | 3 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-06T09:49:52Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/RM_system_not_mixed__NLP_model_90_10_CPU_2_epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dung1308/RM_system_not_mixed__NLP_model_90_10_CPU_2_epochs
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2989
- Validation Loss: 4.2424
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -275, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.1315 | 4.5299 | 0 |
| 4.2989 | 4.2424 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
|
nidek/ppo-LunarLander-v2 | nidek | 2022-12-06T16:52:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T16:52:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.07 +/- 14.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained policy (filename assumed)
checkpoint = load_from_hub("nidek/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
4mosot/ppo-Huggy | 4mosot | 2022-12-06T16:39:23Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T16:39:15Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: 4mosot/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
kaiosinimbu/dqn-SpaceInvadersNoFrameskip-v4 | kaiosinimbu | 2022-12-06T16:37:09Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T15:50:06Z | ---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 47.20 +/- 13.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga kaiosinimbu -f logs/
python enjoy.py --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga kaiosinimbu -f logs/
rl_zoo3 enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga kaiosinimbu
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Ari/whisper-small-es | Ari | 2022-12-06T16:05:41Z | 12 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"es",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T09:17:33Z | ---
language:
- es
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: whisper-small-es - Ari
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-es - Ari
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2218
- eval_wer: 8.6904
- eval_runtime: 4999.6051
- eval_samples_per_second: 3.104
- eval_steps_per_second: 0.388
- epoch: 0.13
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
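For quick testing, transcription with the 🤗 Transformers `pipeline` could look like the sketch below; `audio.mp3` is a placeholder path and the settings are illustrative.
```python
from transformers import pipeline

# Spanish speech recognition with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Ari/whisper-small-es",
    chunk_length_s=30,  # long-form audio is processed in 30 s chunks
)

# "audio.mp3" is a placeholder; pass any local audio file.
print(asr("audio.mp3")["text"])
```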
|
4mosot/ppo-LunarLander-v2 | 4mosot | 2022-12-06T15:45:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T15:45:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.30 +/- 20.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
arb9p4/ppo-Huggy | arb9p4 | 2022-12-06T15:30:17Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-06T15:30:09Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: arb9p4/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ethancoco/ppo-LunarLander-v2 | ethancoco | 2022-12-06T15:21:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T15:21:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.52 +/- 24.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Gorenzelg/bert-finetuned-squad11 | Gorenzelg | 2022-12-06T15:00:03Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-06T13:30:14Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Gorenzelg/bert-finetuned-squad11
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Gorenzelg/bert-finetuned-squad11
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0664
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 55450, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.0664 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.1
- Datasets 2.6.1
- Tokenizers 0.11.0
|
Nadav/camembert-base-squad-fr | Nadav | 2022-12-06T14:34:10Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-06T13:00:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: camembert-base-squad-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-squad-fr
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7504 | 1.0 | 3581 | 1.6470 |
| 1.4776 | 2.0 | 7162 | 1.5182 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
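A minimal French question-answering sketch with the 🤗 Transformers `pipeline`; the context/question pair is illustrative and not taken from the (unspecified) training data.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Nadav/camembert-base-squad-fr")

# Illustrative French example.
result = qa(
    question="Où se trouve la tour Eiffel ?",
    context="La tour Eiffel est un monument situé à Paris, en France.",
)
print(result["answer"], result["score"])
```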
|
Nadav/bert-base-french-europeana-cased-squad-fr | Nadav | 2022-12-06T14:33:37Z | 41 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-06T13:03:24Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-french-europeana-cased-squad-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-french-europeana-cased-squad-fr
This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9069 | 1.0 | 3539 | 1.7853 |
| 1.6263 | 2.0 | 7078 | 1.7031 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Jellevdl/Bert-test-model | Jellevdl | 2022-12-06T14:26:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-06T13:49:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Bert-test-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-test-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.7369 |
| 2.2639 | 2.0 | 500 | 1.3940 |
| 2.2639 | 3.0 | 750 | 1.3708 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
m-aliabbas/idrak_wav2vec_timit_subsample | m-aliabbas | 2022-12-06T13:57:17Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-06T13:51:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: idrak_wav2vec_timit_subsample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idrak_wav2vec_timit_subsample
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
tomthefreak/Macro-Terror | tomthefreak | 2022-12-06T13:34:26Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-06T12:38:55Z | ---
license: creativeml-openrail-m
---
Science Fiction/Horror monster textual embedding for Stable Diffusion 2.0.
This embedding was trained initially on 49 images from Tod Ryan's ArtStation (https://www.artstation.com/todryan), then further tuned on an expanded dataset that adds 119 images generated with the initial embedding, using prompts tailored to improve quality. These generated training images were color graded collectively to mimic the visual aesthetic of modern horror media.
I have also included the initial version of the embedding that circulated on the Stable Diffusion Discord. It is excellent for disgusting (but repetitive) monsters and general grossness.
Example generations:

_Prompt: Macro Terror, Steps: 15, Sampler: DPM++ SDE Karras, CFG scale: 3.5, Seed: 2889499141, Size: 768x768, Model hash: 2c02b20a_

_Prompt: Macro Terror, Steps: 15, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2324809867, Size: 768x768, Model hash: 2c02b20a_

_Prompt: Macro Terror, Steps: 15, Sampler: DPM++ 2S a, CFG scale: 5, Seed: 2276531391, Size: 768x768, Model hash: 2c02b20a_ |
Desak/distilbert-base-uncased-finetuned-squad | Desak | 2022-12-06T13:19:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-06T06:24:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2221 | 1.0 | 5533 | 1.1611 |
| 0.9677 | 2.0 | 11066 | 1.1226 |
| 0.7567 | 3.0 | 16599 | 1.1539 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
teddy322/wav2vec2-large-xls-r-300m-kor-11385 | teddy322 | 2022-12-06T12:58:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-03T08:27:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
model-index:
- name: wav2vec2-large-xls-r-300m-kor-11385
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kor-11385
This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-kor-11385](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-kor-11385) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4033
- Wer: 0.2805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0502 | 1.97 | 400 | 0.4049 | 0.3283 |
| 0.0631 | 3.94 | 800 | 0.4618 | 0.3260 |
| 0.0508 | 5.91 | 1200 | 0.4391 | 0.3170 |
| 0.0325 | 7.88 | 1600 | 0.4138 | 0.2935 |
| 0.0244 | 9.85 | 2000 | 0.4033 | 0.2805 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
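A minimal transcription sketch with the 🤗 Transformers `pipeline`; the audio path is a placeholder and, as with most wav2vec2 checkpoints, the input is assumed to be 16 kHz mono speech.
```python
from transformers import pipeline

# CTC-based Korean speech recognition with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="teddy322/wav2vec2-large-xls-r-300m-kor-11385",
)

# Placeholder path; the file should contain 16 kHz mono Korean speech.
print(asr("korean_sample.wav")["text"])
```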
|
rheyaas/distilbert-base-uncased-finetuned-squad | rheyaas | 2022-12-06T12:51:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-06T07:21:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2167 | 1.0 | 5533 | 1.1654 |
| 0.9559 | 2.0 | 11066 | 1.1209 |
| 0.7532 | 3.0 | 16599 | 1.1576 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AigizK/whisper-small-bak | AigizK | 2022-12-06T12:22:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"whisper-event",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T17:26:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- whisper-event
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: ba
split: test
args: ba
metrics:
- name: Wer
type: wer
value: 20.90095725311917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2116
- Wer: 20.9010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2362 | 0.2 | 1000 | 0.3219 | 35.4541 |
| 0.1566 | 1.04 | 2000 | 0.2583 | 27.1784 |
| 0.1325 | 1.24 | 3000 | 0.2447 | 24.9120 |
| 0.129 | 2.07 | 4000 | 0.2217 | 22.3117 |
| 0.1375 | 2.27 | 5000 | 0.2116 | 20.9010 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Zeldron/Silverhand | Zeldron | 2022-12-06T12:08:09Z | 11 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-10-30T15:55:28Z | This is the first Silverhand model. |
RASMUS/whisper-small-fi-15k_samples | RASMUS | 2022-12-06T12:06:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-small",
"mozilla-foundation/common_voice_11_0",
"finnish",
"whisper-event",
"fi",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T12:43:07Z | ---
language: fi
tags:
- whisper-small
- mozilla-foundation/common_voice_11_0
- finnish
- whisper-event
metrics: wer
license: creativeml-openrail-m
---
|
Neprox/STT-swedish-lr-decay-attentiondropout-model | Neprox | 2022-12-06T12:05:23Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-05T16:36:55Z | ---
language:
- sv
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small - Swedish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4326
- Wer: 27.5604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2166 | 1.3 | 1000 | 0.4678 | 32.2067 |
| 0.0991 | 2.59 | 2000 | 0.4459 | 29.0581 |
| 0.0462 | 3.89 | 3000 | 0.4326 | 27.5604 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
wajahat/distilbert-base-uncased-finetuned-emotion | wajahat | 2022-12-06T11:53:45Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:55:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2203
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8038 | 1.0 | 250 | 0.3028 | 0.913 | 0.9115 |
| 0.246 | 2.0 | 500 | 0.2203 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.5.1
- Tokenizers 0.11.6
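A minimal inference sketch with the 🤗 Transformers `pipeline`; since the fine-tuning dataset is not documented above, the label names in the output are whatever the checkpoint was trained with.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wajahat/distilbert-base-uncased-finetuned-emotion",
)

# Example sentence; the returned label set depends on the fine-tuning data.
print(classifier("I am so happy you are here!"))
```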
|
fluorine/sd-class-butterflies-32 | fluorine | 2022-12-06T11:52:14Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-06T11:51:53Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('fluorine/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Victorlopo21/whisper-small-gl | Victorlopo21 | 2022-12-06T11:26:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"gl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-04T10:22:34Z | ---
language:
- gl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small gl - Galician
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small gl - Galician
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dorothyisnotarobot/papercute | dorothyisnotarobot | 2022-12-06T11:19:21Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-06T08:43:21Z | ---
license: bigscience-openrail-m
---
# Papercute #
Papercute100k is an intentionally overfitted fine-tuning checkpoint for Stable Diffusion 1.x that lets you create adorable papercut images using either txt2img or img2img prompts. The Papercute100k checkpoint was trained on 141 curated and captioned images over 100,000 steps. Training took approximately 11 hours on an Nvidia 3090 Ti.
Tips for best results
- No keyword is necessary to engage the model. Just load the checkpoint and enter your prompt.
- Works best at around 50 steps
Known Limitations:
- Does not work well with PLMS sampling
- Does not do well with complicated scenes
- Big cats tend to look like doilies. This is due to a particular training image being overemphasized. This bug/feature has been popular, so I left it in.
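A minimal txt2img sketch with 🧨 diffusers that follows the tips above (about 50 steps, no PLMS). It assumes the checkpoint is available in diffusers format under this repo id; if only a `.ckpt` is shipped, convert it first.
```python
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

# Assumes diffusers-format weights at this repo id (an assumption, not a given).
pipe = StableDiffusionPipeline.from_pretrained(
    "dorothyisnotarobot/papercute", torch_dtype=torch.float16
).to("cuda")

# Avoid PLMS (see the known limitations); a DPM-Solver scheduler works instead.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# No keyword is needed; just describe the scene.
image = pipe("a fox in a forest", num_inference_steps=50).images[0]
image.save("papercute_fox.png")
```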
### Sample images from Papercute100k: ###
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/space-cowboy.png" alt="See ya, space cowboy">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/yellow-submarine.png" alt="We all live in a yellow submarine">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/forest.png" alt="A forest">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/a-woman.png" alt="A woman with long hair">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/cat.png" alt="A cat that looks like a doilie">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/fox.png" alt="Fox">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/bunny.png" alt="Bunny">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/uniform.png" alt="A man wearing a red uniform">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/wolf.png" alt="Wolf">
<img src="https://huggingface.co/dorothyisnotarobot/Papercute/resolve/main/examples/reindeer.png" alt="Reindeer"> |
pratultandon/recipe-nlg-gpt2-ingredient-fixer | pratultandon | 2022-12-06T10:50:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-06T04:32:44Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-nlg-gpt2-ingredient-fixer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-nlg-gpt2-ingredient-fixer
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
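A minimal generation sketch with the 🤗 Transformers `pipeline`; the prompt format this fine-tune expects is not documented above, so the prompt below is purely illustrative.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="pratultandon/recipe-nlg-gpt2-ingredient-fixer",
)

# Illustrative prompt; adjust to whatever format the model was trained on.
output = generator("Ingredients: flour, sugar, eggs, butter", max_new_tokens=64)
print(output[0]["generated_text"])
```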
|
sun1638650145/ML-Agents-Pyramids | sun1638650145 | 2022-12-06T10:29:35Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2022-12-06T10:29:27Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: sun1638650145/ML-Agents-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ibm-research/ColD-Fusion-itr0-seed4 | ibm-research | 2022-12-06T10:28:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:28:15Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture.
### How to use
Best way to use is to finetune on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
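Since the recommended use is finetuning on your own task, a minimal sequence-classification finetuning sketch with the 🤗 `Trainer` is shown below; the dataset (GLUE SST-2), hyperparameters, and output directory are illustrative choices, not taken from the paper.
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "ibm/ColD-Fusion"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Illustrative task: binary sentiment classification on GLUE SST-2.
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cold-fusion-sst2", num_train_epochs=1),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```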
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr0-seed3 | ibm-research | 2022-12-06T10:28:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:28:04Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture.
### How to use
Best way to use is to finetune on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr0-seed2 | ibm-research | 2022-12-06T10:28:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:27:53Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture.
### How to use
Best way to use is to finetune on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr0-seed1 | ibm-research | 2022-12-06T10:27:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:27:43Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture.
### How to use
Best way to use is to finetune on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr0-seed0 | ibm-research | 2022-12-06T10:27:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:27:30Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture.
### How to use
Best way to use is to finetune on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr8-seed4 | ibm-research | 2022-12-06T10:27:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:27:16Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture.
### How to use
Best way to use is to finetune on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr8-seed3 | ibm-research | 2022-12-06T10:27:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:27:03Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr8-seed2 | ibm-research | 2022-12-06T10:26:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:26:34Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr8-seed1 | ibm-research | 2022-12-06T10:26:33Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:26:10Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr7-seed3 | ibm-research | 2022-12-06T10:25:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:25:38Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
jairNeto/ppo-LunarLander-v2 | jairNeto | 2022-12-06T10:25:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-06T10:24:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.09 +/- 15.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it if the actual file name differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename below is assumed).
checkpoint = load_from_hub(repo_id="jairNeto/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
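After loading, the policy can be checked quickly with Stable-Baselines3's evaluation helper; a short sketch (the episode count is arbitrary):
```python
import gym  # Requires gym[box2d] for LunarLander.
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```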
|
ibm-research/ColD-Fusion-itr6-seed2 | ibm-research | 2022-12-06T10:24:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:24:07Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr6-seed0 | ibm-research | 2022-12-06T10:24:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:23:53Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr5-seed4 | ibm-research | 2022-12-06T10:23:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:23:25Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr4-seed2 | ibm-research | 2022-12-06T10:21:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:21:37Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr4-seed0 | ibm-research | 2022-12-06T10:21:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:21:18Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr4-seed1 | ibm-research | 2022-12-06T10:21:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:21:03Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr3-seed3 | ibm-research | 2022-12-06T10:20:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:20:26Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr3-seed2 | ibm-research | 2022-12-06T10:20:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:20:11Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr3-seed0 | ibm-research | 2022-12-06T10:19:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:19:38Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
ibm-research/ColD-Fusion-itr29-seed1 | ibm-research | 2022-12-06T10:18:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"exbert",
"en",
"arxiv:2212.01378",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-06T10:18:09Z | ---
language: en
tags:
- exbert
license: mit
---
# ColD Fusion model
A finetuned model that aims to be a great base model. Trained on 35 datasets, it improves over RoBERTa base.
Full details at [this paper](https://arxiv.org/abs/2212.01378).
## Paper Abstract:
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a
mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now,
massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources
that are only available to well-resourced teams.
In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed
computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic
loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that
ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on
all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find
ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets,
a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
### How to use
The best way to use this model is to finetune it on your own task, but you can also extract features directly.
To get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html)
When fine-tuned on downstream tasks, this model achieves the following results:
### BibTeX entry and citation info
```bibtex
@article{ColDFusion,
author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen},
title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
journal = {CoRR},
volume = {abs/2212.01378},
year = {2022},
url = {https://arxiv.org/abs/2212.01378},
archivePrefix = {arXiv},
eprint = {2212.01378},
}
```
<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|