modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-23 18:27:52) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 492 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-23 18:25:26) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
lenartlola/rick-bot | lenartlola | 2022-12-15T14:02:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-15T14:00:20Z | ---
tags:
- conversational
--- |
J3/ppo-Huggy | J3 | 2022-12-15T13:57:38Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T13:57:27Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: J3/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
|
Tereveni-AI/gpt2-124M-uk-fiction | Tereveni-AI | 2022-12-15T13:49:41Z | 30 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"uk",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: uk
widget:
- text: "Но зла Юнона, суча дочка, "
tags:
- text-generation
---
Note: **default code snippet above won't work** because we are using `AlbertTokenizer` with `GPT2LMHeadModel`, see [issue](https://github.com/huggingface/transformers/issues/4285).
## GPT2 124M Trained on Ukrainian Fiction
### Training details
The model was trained on a corpus of 4,040 fiction books, 2.77 GiB in total.
Evaluation on [brown-uk](https://github.com/brown-uk/corpus) gives perplexity of 50.16.
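A back-of-the-envelope way to reproduce a perplexity figure like this (not the authors' evaluation script; the passage and windowing below are placeholders) is to exponentiate the language-modelling loss:
```python
# Hedged sketch: perplexity of a single passage, not a full corpus evaluation.
import torch
from transformers import AlbertTokenizer, GPT2LMHeadModel

tokenizer = AlbertTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model = GPT2LMHeadModel.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model.eval()

text = "Но зла Юнона, суча дочка,"  # placeholder; use passages from brown-uk for a real estimate
input_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
print("perplexity:", torch.exp(loss).item())
```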
### Example usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
model = GPT2LMHeadModel.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
input_ids = tokenizer.encode("Но зла Юнона, суча дочка,", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=3,
max_length=50
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
```
Prints something like this:
```bash
0: Но зла Юнона, суча дочка, яка затьмарила всі її таємниці: І хто з'їсть її душу, той помре». І, не дочекавшись гніву богів, посунула в пітьму, щоб не бачити перед собою. Але, за
1: Но зла Юнона, суча дочка, і довела мене до божевілля. Але він не знав нічого. Після того як я його побачив, мені стало зле. Я втратив рівновагу. Але в мене не було часу на роздуми. Я вже втратив надію
2: Но зла Юнона, суча дочка, не нарікала нам! — раптом вигукнула Юнона. — Це ти, старий йолопе! — мовила вона, не перестаючи сміятись. — Хіба ти не знаєш, що мені подобається ходити з тобою?
``` |
andresgtn/ppo-LunarLander-v2 | andresgtn | 2022-12-15T13:47:07Z | 0 | 1 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T13:46:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 222.99 +/- 15.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
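For reference, loading and evaluating this checkpoint usually looks like the sketch below; the zip filename ("ppo-LunarLander-v2.zip") and the evaluation settings are assumptions, not taken from this card.
```python
# Hedged sketch: load the checkpoint from the Hub and evaluate it locally.
# The filename below is an assumption based on common naming conventions.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("andresgtn/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```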
|
ntinosmg/q-FrozenLake-v1-4x4-noSlippery | ntinosmg | 2022-12-15T13:35:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-08-13T18:33:02Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.61 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ntinosmg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
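`load_from_hub` and `evaluate_agent` above come from the Deep RL course notebook rather than an installable library. A minimal stand-in for the loader, assuming the repo stores a pickled dictionary as the snippet above suggests, might look like this:
```python
# Hypothetical helper mirroring the snippet above: download the pickled
# Q-table dictionary from the Hub and unpickle it.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```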
|
Ari/whisper-small-af-za | Ari | 2022-12-15T13:33:47Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"af",
"dataset:google/fleurs",
"dataset:openslr/SLR32",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-13T13:37:08Z | ---
language:
- af
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- google/fleurs
- openslr/SLR32
model-index:
- name: whisper-small-af-za - Ari
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Wer
type: wer
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-af-za - Ari
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0002
- eval_wer: 0.0
- eval_runtime: 77.0592
- eval_samples_per_second: 2.569
- eval_steps_per_second: 0.324
- epoch: 14.6
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
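As a rough starting point, a fine-tuned Whisper checkpoint like this one can usually be tried through the `transformers` ASR pipeline; the audio path below is a placeholder, not a file shipped with this repository.
```python
# Minimal usage sketch (not from the original card).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Ari/whisper-small-af-za")
print(asr("sample_afrikaans.wav")["text"])  # placeholder path to an Afrikaans audio file
```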
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
tomekkorbak/eloquent_stallman | tomekkorbak | 2022-12-15T13:27:52Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-12-15T13:27:41Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: eloquent_stallman
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eloquent_stallman
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3 dataset shards (chunk3-0-50000 through chunk3-1900000-1950000) listed in the metadata above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00078,
'is_split_by_sentences': True,
'skip_tokens': 1661599744},
'generation': {'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0'},
'path_or_name': 'tomekkorbak/goofy_pasteur'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'eloquent_stallman',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1661599744,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3o0evgmf |
Dreamoon/ppo-LunarLander-v2 | Dreamoon | 2022-12-15T13:21:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-09T20:34:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -64.07 +/- 67.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LucianoDeben/ppo-Huggy | LucianoDeben | 2022-12-15T13:10:35Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T13:10:21Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: LucianoDeben/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
|
gabrieleai/gamindocar-2000-700 | gabrieleai | 2022-12-15T13:06:17Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-15T13:04:11Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Gamindocar-2000-700 Dreambooth model trained by gabrieleai with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
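Locally, inference with `diffusers` would look roughly like the sketch below; the concept token and prompt are assumptions, since the card does not document them.
```python
# Hypothetical local inference sketch; the prompt/concept token is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gabrieleai/gamindocar-2000-700", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of gamindocar on a mountain road").images[0]
image.save("gamindocar.png")
```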
Sample pictures of this concept:
|
ibadrehman/ppo-Huggy-01 | ibadrehman | 2022-12-15T12:40:16Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T12:40:09Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ibadrehman/ppo-Huggy-01
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
|
bvk1ng/bipedal_walker_ppo | bvk1ng | 2022-12-15T12:39:15Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T12:34:54Z | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: Proximal Policy Optimisation (PPO)
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: 209.63 +/- 82.30
name: mean_reward
verified: false
---
# **Proximal Policy Optimisation (PPO)** Agent playing **BipedalWalker-v3**
This is a trained model of a **Proximal Policy Optimisation (PPO)** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SRM47/gpt2-large-paraphraser | SRM47 | 2022-12-15T12:33:15Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-15T10:50:36Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-paraphraser
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-paraphraser
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
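Since the card does not document an input format, the sketch below simply treats the checkpoint as a standard GPT-2 text-generation model; the prompt and any separator convention expected by the paraphraser are assumptions.
```python
# Hypothetical usage sketch; the expected prompt format is not documented in this card.
from transformers import pipeline

generator = pipeline("text-generation", model="SRM47/gpt2-large-paraphraser")
prompt = "The weather is lovely today."  # placeholder input sentence
print(generator(prompt, max_new_tokens=40, num_return_sequences=1)[0]["generated_text"])
```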
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
scikit-learn/blog-example | scikit-learn | 2022-12-15T12:31:24Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"region:us"
] | tabular-classification | 2022-12-15T12:21:52Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_file: model.pkl
widget:
structuredData:
area_mean:
- 407.4
- 1335.0
- 428.0
area_se:
- 26.99
- 77.02
- 17.12
area_worst:
- 508.9
- 1946.0
- 546.3
compactness_mean:
- 0.05991
- 0.1076
- 0.069
compactness_se:
- 0.01065
- 0.01895
- 0.01727
compactness_worst:
- 0.1049
- 0.3055
- 0.188
concave points_mean:
- 0.02069
- 0.08941
- 0.01393
concave points_se:
- 0.009175
- 0.01232
- 0.006747
concave points_worst:
- 0.06544
- 0.2112
- 0.06913
concavity_mean:
- 0.02638
- 0.1527
- 0.02669
concavity_se:
- 0.01245
- 0.02681
- 0.02045
concavity_worst:
- 0.08105
- 0.4159
- 0.1471
fractal_dimension_mean:
- 0.05934
- 0.05478
- 0.06057
fractal_dimension_se:
- 0.001461
- 0.001711
- 0.002922
fractal_dimension_worst:
- 0.06487
- 0.07055
- 0.07993
perimeter_mean:
- 73.28
- 134.8
- 75.51
perimeter_se:
- 2.684
- 4.119
- 1.444
perimeter_worst:
- 83.12
- 166.8
- 85.22
radius_mean:
- 11.5
- 20.64
- 11.84
radius_se:
- 0.3927
- 0.6137
- 0.2222
radius_worst:
- 12.97
- 25.37
- 13.3
smoothness_mean:
- 0.09345
- 0.09446
- 0.08871
smoothness_se:
- 0.00638
- 0.006211
- 0.005517
smoothness_worst:
- 0.1183
- 0.1562
- 0.128
symmetry_mean:
- 0.1834
- 0.1571
- 0.1533
symmetry_se:
- 0.02292
- 0.01276
- 0.01616
symmetry_worst:
- 0.274
- 0.2689
- 0.2535
texture_mean:
- 18.45
- 17.35
- 18.94
texture_se:
- 0.8429
- 0.6575
- 0.8652
texture_worst:
- 22.46
- 23.17
- 24.99
---
# Model description
This is a logistic regression model trained on the breast cancer dataset.
## Intended uses & limitations
This model is trained for educational purposes.
## Training Procedure
### Hyperparameters
The model was trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|-----------------------------------------------------------------|
| memory | |
| steps | [('scaler', StandardScaler()), ('model', LogisticRegression())] |
| verbose | False |
| scaler | StandardScaler() |
| model | LogisticRegression() |
| scaler__copy | True |
| scaler__with_mean | True |
| scaler__with_std | True |
| model__C | 1.0 |
| model__class_weight | |
| model__dual | False |
| model__fit_intercept | True |
| model__intercept_scaling | 1 |
| model__l1_ratio | |
| model__max_iter | 100 |
| model__multi_class | auto |
| model__n_jobs | |
| model__penalty | l2 |
| model__random_state | |
| model__solver | lbfgs |
| model__tol | 0.0001 |
| model__verbose | 0 |
| model__warm_start | False |
</details>
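For context, a pipeline with these hyperparameters could be reproduced roughly as follows; the train/test split and random seed are assumptions, not taken from this card.
```python
# Rough reproduction sketch; the split and seed below are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([("scaler", StandardScaler()), ("model", LogisticRegression())])
pipe.fit(X_train, y_train)
print("test accuracy:", pipe.score(X_test, y_test))
```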
### Model Plot
The model plot is below.
Pipeline(steps=[('scaler', StandardScaler()), ('model', LogisticRegression())])
## Evaluation Results
Details of the evaluation process and the evaluation results are given below.
| Metric | Value |
|----------|----------|
| accuracy | 0.965035 |
| f1 score | 0.965035 |
# How to Get Started with the Model
Use the code below to get started with the model.
```python
import joblib
import json
import pandas as pd
clf = joblib.load("model.pkl")
with open("config.json") as f:
config = json.load(f)
clf.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"]))
```
# Additional Content
## Confusion Matrix
 |
MarcusAGray/q-FrozenLake-v1-8x8-noSlippery | MarcusAGray | 2022-12-15T12:24:56Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T12:19:13Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="MarcusAGray/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
emmashe15/ppo-LunarLander-v2-TEST | emmashe15 | 2022-12-15T12:10:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T11:54:29Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.61 +/- 21.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
supermy/c2m-mt5 | supermy | 2022-12-15T12:07:27Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"mt5",
"feature-extraction",
"zh",
"dataset:c2m",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-12-15T11:31:14Z | ---
language: zh
datasets: c2m
inference:
parameters:
max_length: 108
num_return_sequences: 1
do_sample: True
widget:
- text: "晋太元中,武陵人捕鱼为业。缘溪行,忘路之远近。忽逢桃花林,夹岸数百步,中无杂树,芳草鲜美,落英缤纷。渔人甚异之,复前行,欲穷其林。林尽水源,便得一山,山有小口,仿佛若有光。便舍船,从口入。初极狭,才通人。复行数十步,豁然开朗。土地平旷,屋舍俨然,有良田、美池、桑竹之属。阡陌交通,鸡犬相闻。其中往来种作,男女衣着,悉如外人。黄发垂髫,并怡然自乐。"
example_title: "桃花源记"
- text: "往者不可谏,来者犹可追。"
example_title: "来者犹可追"
- text: "逝者如斯夫!不舍昼夜。"
example_title: "逝者如斯夫"
---
# Classical Chinese (文言文) to Modern Chinese (现代文)
## Model description
## How to use
Call the model with a pipeline:
```python
>>> from transformers import pipeline
>>> model_checkpoint = "supermy/c2m-mt5"
>>> translator = pipeline("translation",
model=model_checkpoint,
num_return_sequences=1,
max_length=52,
truncation=True,)
>>> translator("往者不可谏,来者犹可追。")
[{'translation_text': '过 去 的 事 情 不能 劝 谏 , 未来 的 事 情 还 可以 追 回 来 。 如 果 过 去 的 事 情 不能 劝 谏 , 那 么 , 未来 的 事 情 还 可以 追 回 来 。 如 果 过 去 的 事 情'}]
>>> translator("福兮祸所伏,祸兮福所倚。",do_sample=True)
[{'translation_text': '幸 福 是 祸 患 所 隐 藏 的 , 灾 祸 是 福 祸 所 依 托 的 。 这 些 都 是 幸 福 所 依 托 的 。 这 些 都 是 幸 福 所 带 来 的 。 幸 福 啊 , 也 是 幸 福'}]
>>> translator("成事不说,遂事不谏,既往不咎。", num_return_sequences=1,do_sample=True)
[{'translation_text': '事 情 不 高 兴 , 事 情 不 劝 谏 , 过 去 的 事 就 不 会 责 怪 。 事 情 没 有 多 久 了 , 事 情 没 有 多 久 , 事 情 没 有 多 久 了 , 事 情 没 有 多'}]
>>> translator("逝者如斯夫!不舍昼夜。",num_return_sequences=1,max_length=30)
[{'translation_text': '逝 去 的 人 就 像 这 样 啊 , 不分 昼夜 地 去 追 赶 它 们 。 这 样 的 人 就 不 会 忘 记'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("supermy/c2m-mt5")
model = AutoModelForSeq2SeqLM.from_pretrained("supermy/c2m-mt5")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
A very comprehensive Classical Chinese (ancient Chinese) to Modern Chinese parallel corpus, covering most of the classic ancient works.
The raw crawled data was aligned at the passage level; after script-based sentence splitting (on periods, semicolons, exclamation marks and question marks) and manual proofreading, it yields about 960,000 sentence pairs in total. The bitext directory contains the aligned Classical-Modern parallel data. In addition, the source directory holds monolingual Classical Chinese data and the target directory holds monolingual Modern Chinese data; the files in these two directories are aligned line by line.
Statistics are given below. The short-texts category includes shorter classics such as the Analects (论语), Mencius (孟子) and Zuo Zhuan (左传), and has been merged with Zizhi Tongjian (资治通鉴).
|Book|Sentences
|:--|:--|
Short texts and Zizhi Tongjian|348727
元史|21182
北史|25823
北书|10947
南史|13838
南齐书|13137
史记|17701
后汉书|17753
周书|14930
太平广记|59358
宋书|23794
宋史|77853
徐霞客游记|22750
新五代史|10147
新唐书|12359
旧五代史|11377
旧唐书|29185
明史|85179
晋书|21133
梁书|14318
水经注全|11630
汉书|37622
辽史|9278
金史|13758
陈书|7096
隋书|8204
魏书|28178
**Total**|**967257**
Per-book statistics within the "Short texts and Zizhi Tongjian" category are as follows (the counts in this part are not fully accurate):
|Book|Sentences
|:--|:--|
资治通鉴|79500
左传|10900
大学章句集注| 86
反经| 4211
公孙龙子| 73
管子| 6266
鬼谷子| 385
韩非子| 4325
淮南子| 2669
黄帝内经| 6162
皇帝四经| 243
将苑| 100
金刚经| 193
孔子家语| 138
老子| 398
了凡四训| 31
礼记| 4917
列子| 1735
六韬| 693
六祖坛经| 949
论语| 988
吕氏春秋| 2473
孟子| 1654
梦溪笔谈| 1280
墨子| 2921
千字文| 82
清史稿| 1604
三字经| 234
山海经| 919
伤寒论| 712
商君书| 916
尚书| 1048
世说新语| 3044
司马法| 132
搜神记| 1963
搜神后记| 540
素书| 61
孙膑兵法| 230
孙子兵法| 338
天工开物| 807
尉缭子| 226
文昌孝经| 194
文心雕龙| 1388
吴子| 136
孝经| 102
笑林广记| 1496
荀子| 3131
颜氏家训| 510
仪礼| 2495
易传| 711
逸周书| 1505
战国策| 3318
贞观政要| 1291
中庸| 206
周礼| 2026
周易| 460
庄子| 1698
百战奇略| 800
论衡| 11900
智囊|2165
罗织经|188
朱子家训|31
抱朴子|217
地藏经|547
国语|3841
容斋随笔|2921
幼学琼林|1372
三略|268
围炉夜话|387
冰鉴|120
If you use this corpus, please cite the source: https://github.com/NiuTrans/Classical-Modern
Thanks to the contributors to this corpus: 丁佳鹏, 杨文权, 刘晓晴, 曹润柘, 罗应峰.
## Training procedure
Trained for 4 full days on an NVIDIA 16 GB GPU, 68 runs in total.
Training data: the [Classical Chinese dataset](https://huggingface.co/datasets/supermy/Classical-Modern). Base model: [MT5](google/mt5-small).
```
[INFO|trainer.py:1628] 2022-12-15 16:08:36,696 >> ***** Running training *****
[INFO|trainer.py:1629] 2022-12-15 16:08:36,696 >> Num examples = 967255
[INFO|trainer.py:1630] 2022-12-15 16:08:36,697 >> Num Epochs = 6
[INFO|trainer.py:1631] 2022-12-15 16:08:36,697 >> Instantaneous batch size per device = 12
[INFO|trainer.py:1632] 2022-12-15 16:08:36,697 >> Total train batch size (w. parallel, distributed & accumulation) = 12
[INFO|trainer.py:1633] 2022-12-15 16:08:36,697 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1634] 2022-12-15 16:08:36,697 >> Total optimization steps = 483630
[INFO|trainer.py:1654] 2022-12-15 16:08:36,698 >> Continuing training from checkpoint, will skip to saved global_step
[INFO|trainer.py:1655] 2022-12-15 16:08:36,698 >> Continuing training from epoch 5
[INFO|trainer.py:1656] 2022-12-15 16:08:36,698 >> Continuing training from global step 465000
{'loss': 5.2906, 'learning_rate': 1.8743667679837894e-06, 'epoch': 5.78}
{'loss': 5.3196, 'learning_rate': 1.8226743584971985e-06, 'epoch': 5.78}
{'loss': 5.3467, 'learning_rate': 6.513243595310464e-08, 'epoch': 5.99}
{'loss': 5.3363, 'learning_rate': 1.344002646651366e-08, 'epoch': 6.0}
{'train_runtime': 6277.5234, 'train_samples_per_second': 924.494, 'train_steps_per_second': 77.042, 'train_loss': 0.2044413571775476, 'epoch': 6.0}
***** train metrics *****
epoch = 6.0
train_loss = 0.2044
train_runtime = 1:44:37.52
train_samples = 967255
train_samples_per_second = 924.494
train_steps_per_second = 77.042
12/15/2022 17:53:23 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2920] 2022-12-15 17:53:23,729 >> ***** Running Evaluation *****
[INFO|trainer.py:2922] 2022-12-15 17:53:23,729 >> Num examples = 200
[INFO|trainer.py:2925] 2022-12-15 17:53:23,729 >> Batch size = 12
100%|██████████| 17/17 [00:07<00:00, 2.29it/s]
[INFO|modelcard.py:443] 2022-12-15 17:53:32,737 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Translation', 'type': 'translation'}, 'metrics': [{'name': 'Bleu', 'type': 'bleu', 'value': 0.7225}]}
***** eval metrics *****
epoch = 6.0
eval_bleu = 0.7225
eval_gen_len = 12.285
eval_loss = 6.6782
eval_runtime = 0:00:07.77
eval_samples = 200
eval_samples_per_second = 25.721
eval_steps_per_second = 2.186
``` |
Nenma/finetuning-sentiment-model-25k-samples | Nenma | 2022-12-15T11:55:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-15T10:52:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-25k-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93208
- name: F1
type: f1
value: 0.932183081715792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-25k-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2198
- Accuracy: 0.9321
- F1: 0.9322
## Model description
More information needed
## Intended uses & limitations
More information needed
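As a quick check, the checkpoint can be tried with the `transformers` text-classification pipeline; the example review is a placeholder and the label names depend on the training configuration.
```python
# Minimal sketch (not from the original card): classify the sentiment of a review.
from transformers import pipeline

classifier = pipeline("text-classification", model="Nenma/finetuning-sentiment-model-25k-samples")
print(classifier("This movie was absolutely wonderful!"))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label names depend on the training config
```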
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
odahl/q-FrozenLake-v1-4x4-noSlippery | odahl | 2022-12-15T11:44:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T11:41:11Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="odahl/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zhiyil/roberta-base-finetuned-cola | zhiyil | 2022-12-15T11:40:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-15T11:11:45Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6074
- Matthews Correlation: 0.6221
## Model description
More information needed
## Intended uses & limitations
More information needed
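A usage sketch is below; it assumes the checkpoint loads with plain `transformers` for inference even though it was fine-tuned with optimum-graphcore on IPUs, and the example sentence is a placeholder.
```python
# Minimal sketch (assumption: the checkpoint loads with plain `transformers`
# even though it was trained with optimum-graphcore on IPUs).
from transformers import pipeline

cola = pipeline("text-classification", model="zhiyil/roberta-base-finetuned-cola")
print(cola("The book was read by the whole class."))  # acceptability judgement (CoLA)
```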
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4536 | 1.0 | 534 | 0.4104 | 0.5738 |
| 0.4876 | 2.0 | 1068 | 0.5156 | 0.5729 |
| 0.1281 | 3.0 | 1602 | 0.5083 | 0.6145 |
| 0.0441 | 4.0 | 2136 | 0.5483 | 0.6119 |
| 0.2985 | 5.0 | 2670 | 0.6074 | 0.6221 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.0
|
tomekkorbak/kind_poitras | tomekkorbak | 2022-12-15T11:27:37Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-12-15T11:27:21Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: kind_poitras
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kind_poitras
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3 dataset shards (chunk3-0-50000 through chunk3-1900000-1950000) listed in the metadata above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00078,
'is_split_by_sentences': True,
'skip_tokens': 1661599744},
'generation': {'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0'},
'path_or_name': 'tomekkorbak/goofy_pasteur'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kind_poitras',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1661599744,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1u3qemu1 |
Kwaku/social_media_sa_finetuned_2 | Kwaku | 2022-12-15T11:25:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"eng",
"dataset:banking77",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-11T17:12:39Z | ---
language: eng
datasets:
- banking77
---
# Social Media Sentiment Analysis Model (Finetuned 2)
This is an updated fine-tuned version of the [Social Media Sentiment Analysis Model](https://huggingface.co/Kwaku/social_media_sa) which is a finetuned version of [Distilbert](https://huggingface.co/models?other=distilbert). It's best suited for sentiment-analysis.
## Model Description
Social Media Sentiment Analysis Model was trained on the [dataset consisting of tweets](https://www.kaggle.com/code/mohamednabill7/sentiment-analysis-of-twitter-data/data) obtained from Kaggle.
## Intended Uses and Limitations
This model is meant for sentiment analysis. Because it was trained on a corpus of tweets, it is familiar with social media jargon.
### How to use
You can use this model directly with a pipeline for sentiment analysis:
```python
>>> from transformers import pipeline
>>> model_name = "Kwaku/social_media_sa_finetuned_2"
>>> generator = pipeline("sentiment-analysis", model=model_name)
>>> result = generator("I like this model")
>>> print(result)
Generated output: [{'label': 'positive', 'score': 0.9992923736572266}]
```
### Limitations and bias
This model inherits the bias of its parent, [Distilbert](https://huggingface.co/models?other=distilbert).
Besides that, it was trained on only 1000 randomly selected sequences, and thus does not achieve especially high confidence scores.
It does fairly well nonetheless. |
GeneralAwareness/Dth | GeneralAwareness | 2022-12-15T11:05:02Z | 0 | 6 | null | [
"stable-diffusion",
"v2",
"text-to-image",
"image-to-image",
"Embedding",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-image | 2022-12-15T11:01:53Z | ---
license: cc-by-nc-sa-4.0
language:
- en
thumbnail: "https://huggingface.co/GeneralAwareness/Dth/resolve/main/an_ancient_bone_legion_dth.png"
tags:
- stable-diffusion
- v2
- text-to-image
- image-to-image
- Embedding
---
Textual Inversion embedding by General Awareness for SD 2.x, trained on 768x768 images from various sources.
Install by downloading the .pt embedding and putting it in the \embeddings folder.
A bones/death/pencil drawing themed embedding that was created with 16 vectors. Using this with my previous Unddep embedding does add colour to water, etc... A good combo to try.
Use keyword: dth
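For 🧨 diffusers users, below is a minimal sketch of loading the embedding programmatically. The base checkpoint, the weight file name, and diffusers' handling of this particular `.pt` file are all assumptions; the intended workflow is simply dropping the file into your web UI's `embeddings` folder as described above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base SD 2.x checkpoint is an assumption; the embedding was trained for SD 2.x at 768x768
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Weight file name is an assumption - check the repo's files for the actual .pt name
pipe.load_textual_inversion("GeneralAwareness/Dth", weight_name="Dth.pt", token="dth")

image = pipe("a bone house, dth").images[0]
image.save("bone_house.png")
```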
Without this embedding and with this embedding.


Without this embedding and with this embedding.


Without this embedding and with this embedding.


A bone house.

The Bone Queen

The Bone Queen, queen of bones.

An ancient bone legion.

An abandoned house.

A stairway to Heaven

A tall proud tree.

A beautiful rose
 |
ArthurinRUC/ppo-Huggy | ArthurinRUC | 2022-12-15T10:28:31Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T10:28:24Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: ArthurinRUC/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mikmascari/evalita-dm-BERT | mikmascari | 2022-12-15T10:21:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-14T16:14:08Z | Model for prerequisite relation inference between wikipedia pages, ITALIAN only, Domain specific for Data mining and coding.
Based on the work of NLP-CIC @ PRELEARN
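A minimal loading sketch is shown below; how the two page texts should be paired and which label index means "prerequisite" are not documented here, so both are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mikmascari/evalita-dm-BERT")
model = AutoModelForSequenceClassification.from_pretrained("mikmascari/evalita-dm-BERT")

# Hypothetical input: the texts of two Wikipedia pages encoded as a sentence pair
inputs = tokenizer("Testo della pagina A ...", "Testo della pagina B ...",
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # label order (prerequisite vs. not) is an assumption
```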
- eval_accuracy: 0.9176470588235294
- eval_f1: 0.8510638297872339
- eval_precision: 0.7692307692307693
- eval_recall: 0.9523809523809523
Evaluated on the EVALITA PRELEARN task dataset (in-domain, Data Mining). |
metga97/ppo-Huggy | metga97 | 2022-12-15T10:12:42Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T10:12:31Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: metga97/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LucianoDeben/ppo-LunarLander-v2 | LucianoDeben | 2022-12-15T09:57:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T09:56:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.91 +/- 19.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's files for the exact `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO agent
checkpoint = load_from_hub(repo_id="LucianoDeben/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tolerantpancake/NegativeLowResolution | tolerantpancake | 2022-12-15T09:37:43Z | 0 | 30 | null | [
"region:us"
] | null | 2022-12-15T09:36:59Z |
Here is a **negative prompt** embedding I created in the hopes of using embeddings to eliminate low-detail and low-fidelity images. **All 400-2400 step versions work very well to increase detail without losing coherency of the subject when used with other embeddings or large prompts; treat the different training step versions as a detail slider, 100-2400.** This is an experiment, but the results are already impressive. Toy around with it, it can make some really cool images! An X/Y matrix is also provided here showing the different versions combined with my other negative embedding, "Negative Mutation". |
tolerantpancake/NegativeMutation | tolerantpancake | 2022-12-15T09:36:43Z | 0 | 21 | null | [
"region:us"
] | null | 2022-12-15T09:11:41Z | Here is a **negative prompt** embedding I created in the hopes of using embeddings to eliminate anatomical mutations and disfigurements.** The 400-500 step versions seem to work the best with no positive prompt embeddings**, however you can seemingly use all of them up to 2400 steps with very good results if you are using other embeddings in your positive prompt. This is an experiment, but the results are already impressive. Toy around with it, it can make some really cool images.! |
SRM47/gpt2-paraphraser | SRM47 | 2022-12-15T09:32:21Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-15T08:15:12Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-paraphraser
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-paraphraser
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
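Since the training data and the expected prompt format are not documented, the snippet below is only a hedged sketch of loading the checkpoint with the text-generation pipeline; the plain-sentence prompt and sampling settings are assumptions.

```python
from transformers import pipeline

paraphraser = pipeline("text-generation", model="SRM47/gpt2-paraphraser")

prompt = "The quick brown fox jumps over the lazy dog."
# Sampling settings are illustrative only
outputs = paraphraser(prompt, max_length=50, num_return_sequences=1, do_sample=True)
print(outputs[0]["generated_text"])
```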
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
thisisac/ppo-LunarLander | thisisac | 2022-12-15T09:25:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T09:24:40Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.06 +/- 17.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's files for the exact `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO agent
checkpoint = load_from_hub(repo_id="thisisac/ppo-LunarLander", filename="ppo-LunarLander.zip")
model = PPO.load(checkpoint)
```
|
botryan96/GeoBERT_analyzer | botryan96 | 2022-12-15T09:23:22Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-21T19:09:38Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: GeoBERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GeoBERT_Analyzer
GeoBERT_Analyzer is a Text Classification model that was fine-tuned from GeoBERT on the Geoscientific Corpus dataset.
The model was trained on the Labeled Geoscientific & Non-Geoscientific Corpus dataset (21416 x 2 sentences).
## Intended uses
The training aims to give the language model the ability to distinguish between geoscientific and non-geoscientific (general) corpora.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
## Model performances (metric: seqeval)
entity|precision|recall|f1
-|-|-|-
General |0.9976|0.9980|0.9978
Geoscience|0.9980|0.9984|0.9982
## How to use GeoBERT with HuggingFace
##### Load GeoBERT_analyzer and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("botryan96/GeoBERT_analyzer")
model = AutoModelForSequenceClassification.from_pretrained("botryan96/GeoBERT_analyzer")

# Define the pipeline
analyze_machine = pipeline('text-classification', model=model, tokenizer=tokenizer)

# Define the sentences
sentences = ['the average iron and sulfate concentrations were calculated to be 19 . 6 5 . 2 and 426 182 mg / l , respectively .',
             'She first gained media attention as a friend and stylist of Paris Hilton']

# Deploy the machine
analyze_machine(sentences)
``` |
botryan96/GeoBERT | botryan96 | 2022-12-15T09:20:54Z | 224 | 6 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-08T09:18:19Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: GeoBERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GeoBERT
GeoBERT is a NER model that was fine-tuned from SciBERT on the Geoscientific Corpus dataset.
The model was trained on the Labeled Geoscientific Corpus dataset (~1 million sentences).
## Intended uses
The NER model aims to identify four main semantic types or categories related to Geosciences.
1. GeoPetro for any entities that belong to all terms in Geosciences
2. GeoMeth for all tools or methods associated with Geosciences
3. GeoLoc to identify geological locations
4. GeoTime for identifying the geological time scale entities
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
## Model performances (metric: seqeval)
entity|precision|recall|f1
-|-|-|-
GeoLoc |0.9727|0.9591|0.9658
GeoMeth |0.9433|0.9447|0.9445
GeoPetro|0.9767|0.9745|0.9756
GeoTime |0.9695|0.9666|0.9680
## How to use GeoBERT with HuggingFace
##### Load GeoBERT and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("botryan96/GeoBERT")
model = AutoModelForTokenClassification.from_pretrained("botryan96/GeoBERT")
#Define the pipeline
from transformers import pipeline
ner_machine = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
#Define the sentence
sentence = 'In North America, the water storage in the seepage face model is higher than the base case because positive pore pressure is requisite for drainage through a seepage face boundary condition. The result from the resistivity data supports the notion, especially in the northern part of the Sandstone Sediment formation. The active formation of America has a big potential for Oil and Gas based on the seismic section, has been activated since the Paleozoic'
#Deploy the NER Machine
ner_machine(sentence)
``` |
Hoax0930/BBC | Hoax0930 | 2022-12-15T09:07:23Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-12-15T07:31:53Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: BBC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BBC
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2824
- Rouge1: 18.46
- Rouge2: 17.0488
- Rougel: 18.3552
- Rougelsum: 18.3466
## Model description
More information needed
## Intended uses & limitations
More information needed
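A minimal summarization sketch (the input article is a placeholder and the generation lengths are assumptions):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Hoax0930/BBC")

article = "Replace this with the news article you want to summarize."
print(summarizer(article, max_length=64, min_length=8)[0]["summary_text"])
```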
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.7104 | 1.0 | 445 | 0.3218 | 17.4866 | 15.2567 | 17.0429 | 17.1216 |
| 0.3433 | 2.0 | 890 | 0.3039 | 17.7632 | 15.8878 | 17.4551 | 17.5161 |
| 0.3116 | 3.0 | 1335 | 0.2912 | 18.175 | 16.4391 | 17.9597 | 18.0081 |
| 0.2908 | 4.0 | 1780 | 0.2869 | 18.2832 | 16.6726 | 18.1187 | 18.1205 |
| 0.273 | 5.0 | 2225 | 0.2829 | 18.2807 | 16.7359 | 18.1496 | 18.1621 |
| 0.2625 | 6.0 | 2670 | 0.2819 | 18.3845 | 16.8793 | 18.2622 | 18.2561 |
| 0.2482 | 7.0 | 3115 | 0.2801 | 18.4748 | 17.0796 | 18.3792 | 18.3672 |
| 0.2454 | 8.0 | 3560 | 0.2824 | 18.46 | 17.0488 | 18.3552 | 18.3466 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
nidek/q-Taxi-v3 | nidek | 2022-12-15T08:30:48Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T08:30:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="nidek/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jakeyoo/whisper-small-ja | jakeyoo | 2022-12-15T08:25:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ja",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T00:39:44Z | ---
language:
- ja
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Japanese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ja
type: mozilla-foundation/common_voice_11_0
config: ja
split: test
args: ja
metrics:
- name: Wer
type: wer
value: 68.94594978011065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Japanese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ja dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3617
- Wer: 68.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
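A minimal transcription sketch with the ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jakeyoo/whisper-small-ja")

# "sample_ja.wav" is a placeholder path to a Japanese audio clip
print(asr("sample_ja.wav")["text"])
```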
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1938 | 1.09 | 1000 | 0.2841 | 74.6631 |
| 0.0466 | 3.06 | 2000 | 0.2996 | 72.0953 |
| 0.005 | 5.04 | 3000 | 0.3376 | 70.4355 |
| 0.0021 | 7.01 | 4000 | 0.3617 | 68.9459 |
| 0.002 | 8.1 | 5000 | 0.3735 | 71.4711 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
wyu1/GenRead-3B-NQ-MergeDPR | wyu1 | 2022-12-15T08:23:08Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"license:cc-by-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-14T08:49:05Z | ---
license: cc-by-4.0
---
# GenRead (MergeDPR): FiD model trained on NQ
-- This is the model checkpoint of GenRead [2], based on the T5-3B and trained on the NQ dataset [1].
-- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 3e-5; best dev at 19000 steps.
References:
[1] Natural Questions: A Benchmark for Question Answering Research. TACL 2019.
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022
## Model performance
We evaluate it on the TriviaQA dataset; the EM score is 54.38.
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
sd-concepts-library/ahx-model-2 | sd-concepts-library | 2022-12-15T08:07:36Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-11-05T05:21:10Z | ---
license: mit
---
# Stable Diffusion Artist Collaboration → Model 2
This is the `<model-2>` concept taught to stable diffusion via textual inversion training. Anyone is free to load this concept into the [stable conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). More information on this collaboration and selected output images can be found here → https://www.astronaut.horse/model-2
Below are the original artworks used as input images.
<img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/2.jpeg">
<img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/4.jpeg">
<img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/0.jpeg">
<img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/1.jpeg">
<img style="width: 100%; max-width: 500px;" src="https://huggingface.co/sd-concepts-library/enk-resin-frames/resolve/main/concept_images/3.jpeg">
|
wyu1/GenRead-3B-WebQ-MergeDPR | wyu1 | 2022-12-15T06:55:20Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"license:cc-by-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-14T09:06:00Z | ---
license: cc-by-4.0
---
# GenRead (MergeDPR): FiD model trained on WebQ
-- This is the model checkpoint of GenRead [2], based on the T5-3B and trained on the WebQ dataset [1].
-- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 5e-5; best dev at 18000 steps.
References:
[1] Semantic parsing on freebase from question-answer pairs. EMNLP 2013.
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022
## Model performance
We evaluate it on the WebQ dataset; the EM score is 56.25.
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
J3/PPO-LunarLander | J3 | 2022-12-15T06:18:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T05:56:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.45 +/- 14.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's files for the exact `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO agent
checkpoint = load_from_hub(repo_id="J3/PPO-LunarLander", filename="PPO-LunarLander.zip")
model = PPO.load(checkpoint)
```
|
Jngai/Known-nothing | Jngai | 2022-12-15T06:18:06Z | 0 | 0 | null | [
"region:us"
] | null | 2022-12-15T06:16:54Z | git lfs install
git clone https://huggingface.co/Jngai/Known-nothing |
kejian/curious-filtering | kejian | 2022-12-15T06:14:55Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-14T01:28:47Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: curious-filtering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# curious-filtering
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
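A minimal generation sketch; the tokenizer is taken from the full config below, while the prompt and sampling settings are assumptions.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kejian/curious-filtering",
    tokenizer="codeparrot/codeparrot-small",  # tokenizer named in the full config below
)

prompt = "def fibonacci(n):"
print(generator(prompt, max_length=64, do_sample=True, top_p=0.9)[0]["generated_text"])
```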
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'filter_threshold': 0.002361,
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 256,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 256,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'e4beb805f2816a9750abbbfe6947ba0c8e7b2d79'},
'path_or_name': 'kejian/mighty-filtering'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'curious-filtering',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/3se8ueja |
ArthurinRUC/ppo-LunarLander-v2 | ArthurinRUC | 2022-12-15T05:23:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T05:22:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.34 +/- 23.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's files for the exact `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO agent
checkpoint = load_from_hub(repo_id="ArthurinRUC/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
parambharat/whisper-tiny-ta-example | parambharat | 2022-12-15T05:22:33Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ta",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T04:59:05Z | ---
language:
- ta
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper tiny Ta example - Bharat Ramanathan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny Ta example - Bharat Ramanathan
This model is a fine-tuned version of [parambharat/whisper-tiny-ta](https://huggingface.co/parambharat/whisper-tiny-ta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4016
- Wer: 36.5217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.304 | 12.01 | 25 | 0.3614 | 31.7391 |
| 0.1826 | 24.02 | 50 | 0.3851 | 35.2174 |
| 0.1346 | 37.01 | 75 | 0.3999 | 37.8261 |
| 0.1096 | 49.02 | 100 | 0.4016 | 36.5217 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
cleanrl/MountainCar-v0-dqn_jax-seed1 | cleanrl | 2022-12-15T04:49:14Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T04:49:11Z | ---
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -164.60 +/- 12.87
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **MountainCar-v0**
This is a trained model of a DQN agent playing MountainCar-v0.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_jax.py).
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MountainCar-v0-dqn_jax-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/cleanrl/MountainCar-v0-dqn_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MountainCar-v0-dqn_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn_jax.py --track --capture-video --save-model --upload-model --hf-entity cleanrl --env-id MountainCar-v0 --seed 1
```
# Hyperparameters
```python
{'batch_size': 128,
'buffer_size': 10000,
'capture_video': True,
'end_e': 0.05,
'env_id': 'MountainCar-v0',
'exp_name': 'dqn_jax',
'exploration_fraction': 0.5,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'learning_starts': 10000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 500,
'total_timesteps': 500000,
'track': True,
'train_frequency': 10,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
DaydreamerF/bert-finetuned-ViolentSmallFarmers-10 | DaydreamerF | 2022-12-15T04:46:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"zh",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-15T02:30:06Z | ---
language: zh
widget:
- text: "这句话是谁说的?"
context: "“老大,你太牛逼了,把敌人军火库都给炸了,我真的佩服的五体投地,我现在忍不住想看看你藏的东西在哪里,我们快点出发吧。”代号零听完郭旭刚刚的讲述笑的拍手一直叫好。"
- text: "这句话是谁说的?"
context: "“妈,你别哭了,我这不是好着呢吗?”郭旭扶着母亲的肩膀笑着说。"
- text: "这句话是谁说的?"
context: "“总统先生,看来我们这一次在劫难逃了,大乘期的恐怖,远远超出了我们的想象,我还有一些后手能尽量拖延他一点时间,你们先走,我让我的鬼奴随你们去,去这个地方或许能保你们一线生机!”郭旭说完便偷偷地将黑暗空间的阴阳珠交给了陈天。"
- text: "这句话是谁说的?"
context: "“也罢,能活一个是一个吧!他还那么年轻?”却是剑傲天摇了摇头无奈的说道。"
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ViolentSmallFarmers-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ViolentSmallFarmers-10
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
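A minimal usage sketch with the question-answering pipeline, reusing one of the widget examples above (this is not an official snippet from the authors):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="DaydreamerF/bert-finetuned-ViolentSmallFarmers-10")

result = qa(
    question="这句话是谁说的?",
    context="“妈,你别哭了,我这不是好着呢吗?”郭旭扶着母亲的肩膀笑着说。",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```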
## Training and evaluation data
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
- exact_match: 93.99759903961585
- f1: 93.99759903961585
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
cleanrl/Acrobot-v1-dqn-seed1 | cleanrl | 2022-12-15T04:40:03Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T04:40:01Z | ---
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
metrics:
- type: mean_reward
value: -96.80 +/- 21.87
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Acrobot-v1**
This is a trained model of a DQN agent playing Acrobot-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn.py).
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Acrobot-v1-dqn-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/cleanrl/Acrobot-v1-dqn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Acrobot-v1-dqn-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --cuda False --track --capture-video --save-model --upload-model --hf-entity cleanrl --env-id Acrobot-v1 --seed 1
```
# Hyperparameters
```python
{'batch_size': 128,
'buffer_size': 10000,
'capture_video': True,
'cuda': False,
'end_e': 0.05,
'env_id': 'Acrobot-v1',
'exp_name': 'dqn',
'exploration_fraction': 0.5,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'learning_starts': 10000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 500,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 10,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
xianbao/distilbert-base-uncased-finetuned-imdb | xianbao | 2022-12-15T04:13:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-15T04:01:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
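A minimal fill-mask sketch (the example sentence is illustrative only):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="xianbao/distilbert-base-uncased-finetuned-imdb")

for pred in fill("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```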
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/mattbergwall | huggingtweets | 2022-12-15T04:12:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-15T04:11:31Z | ---
language: en
thumbnail: http://www.huggingtweets.com/mattbergwall/1671077570136/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1582077511449690142/-6RJk8SE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matt Bergwall</div>
<div style="text-align: center; font-size: 14px;">@mattbergwall</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matt Bergwall.
| Data | Matt Bergwall |
| --- | --- |
| Tweets downloaded | 368 |
| Retweets | 136 |
| Short tweets | 67 |
| Tweets kept | 165 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g1id4cd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattbergwall's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/368nfiv3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/368nfiv3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattbergwall')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sipheiroce/ppo-Huggy | sipheiroce | 2022-12-15T03:57:19Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T03:57:11Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: sipheiroce/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kurianbenoy/whisper_malayalam_largev2 | kurianbenoy | 2022-12-15T03:31:12Z | 88 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hf-asr-leaderboard",
"generated_from_trainer",
"ml",
"dataset:mozilla-foundation/common_voice_11_0",
"doi:10.57967/hf/0192",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-11T13:40:10Z | ---
language:
- ml
license: apache-2.0
tags:
- whisper-event
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper_malayalam_largev2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ml
split: test
metrics:
- name: Wer
type: wer
value: 41.69859514687101
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_malayalam_largev2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2913
- Wer: 41.6986
## Model description
More information needed
## Intended uses & limitations
More information needed
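A minimal transcription sketch with the ASR pipeline (the audio path is a placeholder; long-form audio may additionally need chunking settings):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kurianbenoy/whisper_malayalam_largev2")

# "sample_ml.wav" is a placeholder path to a Malayalam audio clip
print(asr("sample_ml.wav")["text"])
```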
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0769 | 2.07 | 1000 | 0.3657 | 53.8314 |
| 0.0089 | 4.14 | 2000 | 0.2913 | 41.6986 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SatCat/q-Taxi-v3 | SatCat | 2022-12-15T03:30:22Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T03:30:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="SatCat/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wanxiangche/ppo-LunarLander-v2 | wanxiangche | 2022-12-15T03:01:12Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T03:00:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.45 +/- 15.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's files for the exact `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO agent
checkpoint = load_from_hub(repo_id="wanxiangche/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
PrimeQA/nq_tydi-reader-xlmr_large-20221210 | PrimeQA | 2022-12-15T01:29:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"MRC",
"TyDiQA",
"Natural Questions",
"xlm-roberta-large",
"multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-15T01:06:26Z | ---
license: apache-2.0
tags:
- MRC
- TyDiQA
- Natural Questions
- xlm-roberta-large
language:
- multilingual
---
*Task*: MRC
# Model description
An XLM-RoBERTa Large reading comprehension model trained from the combination of TyDi and NQ datasets, starting from a fine-tuned [Tydi xlm-roberta-large](https://huggingface.co/PrimeQA/tydiqa-primary-task-xlm-roberta-large) model.
## Intended uses & limitations
You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model.
## Usage
You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb).
### BibTeX entry and citation info
```bibtex
@article{kwiatkowski-etal-2019-natural,
title = "Natural Questions: A Benchmark for Question Answering Research",
author = "Kwiatkowski, Tom and
Palomaki, Jennimaria and
Redfield, Olivia and
Collins, Michael and
Parikh, Ankur and
Alberti, Chris and
Epstein, Danielle and
Polosukhin, Illia and
Devlin, Jacob and
Lee, Kenton and
Toutanova, Kristina and
Jones, Llion and
Kelcey, Matthew and
Chang, Ming-Wei and
Dai, Andrew M. and
Uszkoreit, Jakob and
Le, Quoc and
Petrov, Slav",
journal = "Transactions of the Association for Computational Linguistics",
volume = "7",
year = "2019",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q19-1026",
doi = "10.1162/tacl_a_00276",
pages = "452--466",
}
```
```bibtex
@article{clark-etal-2020-tydi,
title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
author = "Clark, Jonathan H. and
Choi, Eunsol and
Collins, Michael and
Garrette, Dan and
Kwiatkowski, Tom and
Nikolaev, Vitaly and
Palomaki, Jennimaria",
journal = "Transactions of the Association for Computational Linguistics",
volume = "8",
year = "2020",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2020.tacl-1.30",
doi = "10.1162/tacl_a_00317",
pages = "454--470",
}
```
|
SatCat/q-FrozenLake-v1-4x4-noSlippery | SatCat | 2022-12-15T01:21:30Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T01:21:18Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="SatCat/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
research-backup/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0 | research-backup | 2022-12-15T00:54:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-11-26T16:14:12Z | ---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.31890873015873017
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.23529411764705882
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.2314540059347181
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.26903835464146747
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.274
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.2982456140350877
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.26157407407407407
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5710411330420371
- name: F1 (macro)
type: f1_macro
value: 0.4718342368775303
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7190140845070424
- name: F1 (macro)
type: f1_macro
value: 0.17486705290574928
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.42524377031419286
- name: F1 (macro)
type: f1_macro
value: 0.26916204585090897
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7395145023301106
- name: F1 (macro)
type: f1_macro
value: 0.5292369713379188
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6367909746161078
- name: F1 (macro)
type: f1_macro
value: 0.5562730234363212
---
# relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.23529411764705882
- Accuracy on SAT: 0.2314540059347181
- Accuracy on BATS: 0.26903835464146747
- Accuracy on U2: 0.2982456140350877
- Accuracy on U4: 0.26157407407407407
- Accuracy on Google: 0.274
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.5710411330420371
- Micro F1 score on CogALexV: 0.7190140845070424
- Micro F1 score on EVALution: 0.42524377031419286
- Micro F1 score on K&H+N: 0.7395145023301106
- Micro F1 score on ROOT09: 0.6367909746161078
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.31890873015873017
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
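Relation embeddings obtained this way can be compared directly, for example with cosine similarity. A small sketch building only on the call above (the word pairs are arbitrary examples):
```python
import numpy as np

v1 = np.array(model.get_embedding(['Tokyo', 'Japan']))
v2 = np.array(model.get_embedding(['Paris', 'France']))
# cosine similarity between the two relation embeddings
print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
```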
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 11
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-v6-mask-prompt-c-nce-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
DrishtiSharma/whisper-large-v2-punjabi-700-steps | DrishtiSharma | 2022-12-15T00:31:33Z | 149 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"pa",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T23:34:16Z | ---
language:
- pa
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large Punjabi - Drishti Sharma
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: pa-IN
split: test
args: pa-IN
metrics:
- name: Wer
type: wer
value: 24.476386036960985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Punjabi - Drishti Sharma
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Wer: 24.4764
## Model description
More information needed
## Intended uses & limitations
More information needed
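In the meantime, a minimal transcription sketch with the 🤗 `pipeline` API (the audio file path is a placeholder):
```python
from transformers import pipeline

# load the fine-tuned checkpoint from this repository
asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/whisper-large-v2-punjabi-700-steps")
print(asr("audio.wav")["text"])  # replace with a path to a Punjabi audio file
```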
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0584 | 5.79 | 700 | 0.2211 | 24.4764 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
DAL12/ppo-LunarLander-v2 | DAL12 | 2022-12-14T23:00:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T22:59:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.11 +/- 69.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repository for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is assumed; use the .zip actually stored in the repo
checkpoint = load_from_hub(repo_id="DAL12/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nudro/ddpm-celebahq-finetuned-butterflies-2epochs | nudro | 2022-12-14T22:55:50Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-14T22:55:41Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('nudro/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
myklicious/q-Taxi-v3 | myklicious | 2022-12-14T22:55:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T22:10:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="myklicious/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
plegg/q-Taxi-v3-train2000000 | plegg | 2022-12-14T22:39:53Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T22:39:42Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-train2000000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="plegg/q-Taxi-v3-train2000000", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dim/persona_bot_2_28akcwik | dim | 2022-12-14T22:20:54Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-14T21:21:23Z | ---
license: cc
---
- [wandb metrics](https://wandb.ai/dimweb/persona_bot_2/runs/28akcwik?workspace=user-dimweb) |
myklicious/q-FrozenLake-v1-4x4-noSlippery | myklicious | 2022-12-14T21:59:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T21:59:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="myklicious/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Segamboam/ppo-LunarLander-v2 | Segamboam | 2022-12-14T21:42:32Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-12T23:40:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.54 +/- 20.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repository for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is assumed; use the .zip actually stored in the repo
checkpoint = load_from_hub(repo_id="Segamboam/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kinkpunk/q-Taxi-v3-delivery | kinkpunk | 2022-12-14T21:37:58Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T21:34:18Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-delivery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="kinkpunk/q-Taxi-v3-delivery", filename="q-learning.pkl")
# Don't forget to create environment
env = gym.make("Taxi-v3")
```
|
sam133/ppo-LunarLander-v2 | sam133 | 2022-12-14T20:56:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T20:39:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.90 +/- 21.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repository for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is assumed; use the .zip actually stored in the repo
checkpoint = load_from_hub(repo_id="sam133/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tzvc/2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c | tzvc | 2022-12-14T20:55:14Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-14T20:37:09Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: a portrait of [V]
---
### training params
```json
{
"pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
"instance_data_dir": "./2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c/instance_data",
"class_data_dir": "./class_data/a-portrait-of-a-person",
"output_dir": "./2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c/",
"train_text_encoder": true,
"with_prior_preservation": false,
"prior_loss_weight": 1.0,
"instance_prompt": "a portrait of [V]",
"class_prompt": "a portrait of a person",
"resolution": 512,
"train_batch_size": 1,
"gradient_accumulation_steps": 1,
"gradient_checkpointing": true,
"use_8bit_adam": true,
"learning_rate": 5e-06,
"lr_scheduler": "constant",
"lr_warmup_steps": 0,
"num_class_images": 200,
"max_train_steps": 1050,
"mixed_precision": "fp16"
}
```
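A minimal inference sketch with 🤗 Diffusers, assuming the fine-tuned weights in this repository load as a standard Stable Diffusion pipeline:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tzvc/2cabda5b-4e53-40e9-8fcf-cdba5ea5bd6c", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait of [V]").images[0]  # the instance prompt used for training
image.save("portrait.png")
```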
|
BartMartin/ddpm-butterflies-128 | BartMartin | 2022-12-14T20:48:15Z | 1 | 0 | diffusers | [
"diffusers",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"region:us"
] | null | 2022-12-14T20:42:46Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
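Until the snippet above is filled in, a minimal sketch (assuming the trained `DDPMPipeline` was pushed to this repository):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("BartMartin/ddpm-butterflies-128")
image = pipeline().images[0]
image
```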
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/BartMartin/ddpm-butterflies-128/tensorboard?#scalars)
|
cxyz/prynty | cxyz | 2022-12-14T20:36:38Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-14T20:31:39Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### prynty Dreambooth model trained by cxyz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
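Or, as a minimal local sketch with `diffusers` (assuming the weights in this repository load as a standard Stable Diffusion pipeline and that `prynty` is the learned concept token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cxyz/prynty", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of prynty").images[0]  # prompt token is an assumption
image.save("prynty.png")
```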
Sample pictures of this concept:
|
IntroToProgramming/model | IntroToProgramming | 2022-12-14T20:36:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-14T18:19:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4554
- Accuracy: 0.6971
- F1: 0.7012
- Precision: 0.7069
- Recall: 0.6971
## Model description
More information needed
## Intended uses & limitations
More information needed
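In the meantime, a minimal inference sketch with the 🤗 `pipeline` API (the input sentence is a placeholder and the label names depend on the unspecified training data):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="IntroToProgramming/model")
print(classifier("Replace this with a sentence to classify."))
```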
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kinkpunk/q-FrozenLake-v1-custom-map-Slippery-edition | kinkpunk | 2022-12-14T20:33:27Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T20:24:37Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-custom-map-Slippery-edition
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.89 +/- 0.31
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="kinkpunk/q-FrozenLake-v1-custom-map-Slippery-edition",
filename="q-learning.pkl")
# Don't forget to change additional attributes
# when you create environment using 4x4 map
env = gym.make('FrozenLake-v1',
desc=["SFFF", "FHHF", "FFHF", "HFFG"],
is_slippery=True)
```
## Training parameters
```python
# Training parameters
n_training_episodes = 105000 # Total training episodes
learning_rate = 0.8 # Learning rate
# Evaluation parameters
n_eval_episodes = 100 # Total number of test episodes
# Environment parameters
env_id = "FrozenLake-v1" # Name of the environment
max_steps = 99 # Max steps per episode
gamma = 0.98 # Discounting rate
eval_seed = [] # The evaluation seed of the environment
# Exploration parameters
max_epsilon = 0.99 # Exploration probability at start
min_epsilon = 0.02 # Minimum exploration probability
decay_rate = 0.009 # Exponential decay rate for exploration prob
``` |
jordanblatter/distilbert-base-uncased-finetuned-imdb | jordanblatter | 2022-12-14T20:30:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-14T03:28:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9125
## Model description
More information needed
## Intended uses & limitations
More information needed
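In the meantime, a minimal fill-mask sketch with the 🤗 `pipeline` API (the example sentence is a placeholder):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="jordanblatter/distilbert-base-uncased-finetuned-imdb")
print(unmasker("This movie was absolutely [MASK]."))
```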
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3923 | 1.0 | 2 | 2.1317 |
| 2.1662 | 2.0 | 4 | 1.8450 |
| 1.9889 | 3.0 | 6 | 1.8227 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AshtonIsNotHere/xlm-roberta-long-base-4096 | AshtonIsNotHere | 2022-12-14T20:10:15Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"xlmr",
"XLM-RoBERTa",
"multilingual",
"dataset:wikitext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-27T21:41:23Z | ---
tags:
- longformer
- xlmr
- XLM-RoBERTa
language: multilingual
license: apache-2.0
datasets:
- wikitext
---
## XLM-R Longformer Model
This is an XLM-RoBERTa longformer model that was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.
This model is identical to [markussagen's xlm-r longformer model](https://huggingface.co/markussagen/xlm-roberta-longformer-base-4096), the difference being that the weights have been transferred to a Longformer model, in order to enable loading with ```AutoModel.from_pretrained()``` without external dependencies.
## Memory Requirements
Note that this model requires a considerable amount of memory to run. The heatmap below should give a relative idea of the amount of memory needed at inference for a target batch and sequence length. N.B. data for this plot was generated by running on a single A100 GPU with 40 GB of memory.
<details>
<summary>View Inference Memory Plot</summary>

</details>
## How to Use
The model can be used as expected to fine-tune on a downstream task.
For instance for QA.
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
MAX_SEQUENCE_LENGTH = 4096
MODEL_NAME_OR_PATH = "AshtonIsNotHere/xlm-roberta-long-base-4096"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
padding="max_length",
truncation=True,
)
model = AutoModelForQuestionAnswering.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
)
``` |
DrishtiSharma/whisper-large-v2-assamese-700-steps | DrishtiSharma | 2022-12-14T19:37:42Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T18:32:50Z | ---
language:
- hi
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large Assamese - Drishti Sharma
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: as
split: test
args: as
metrics:
- name: Wer
type: wer
value: 21.45822053780906
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Assamese - Drishti Sharma
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2452
- Wer: 21.4582
## Model description
More information needed
## Intended uses & limitations
More information needed
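Until more detail is added, here is a minimal transcription sketch using the 🤗 `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/whisper-large-v2-assamese-700-steps")
print(asr("audio.wav")["text"])  # replace with a path to an Assamese audio file
```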
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0109 | 4.32 | 700 | 0.2452 | 21.4582 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
mbertheau/hf-drl-course-2-q-FrozenLake-v1-4x4-no_slippery_1 | mbertheau | 2022-12-14T19:22:06Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T19:20:02Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: hf-drl-course-2-q-FrozenLake-v1-4x4-no_slippery_1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mbertheau/hf-drl-course-2-q-FrozenLake-v1-4x4-no_slippery_1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
skandavivek2/roberta-finetuned-subjqa-movies_2 | skandavivek2 | 2022-12-14T19:15:55Z | 21 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-14T18:50:53Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
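In the meantime, a minimal extractive-QA sketch with the 🤗 `pipeline` API (question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="skandavivek2/roberta-finetuned-subjqa-movies_2")
print(qa(question="Who directed the movie?",
         context="The movie was directed by Jane Doe and released in 2020."))
```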
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
anuragshas/whisper-large-v2-mr | anuragshas | 2022-12-14T19:05:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"mr",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T12:24:19Z | ---
language:
- mr
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large-v2 Marathi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 mr
type: mozilla-foundation/common_voice_11_0
config: mr
split: test
args: mr
metrics:
- name: Wer
type: wer
value: 15.220630647952316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Marathi
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 mr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
- Wer: 15.2206
## Model description
More information needed
## Intended uses & limitations
More information needed
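In the meantime, a minimal transcription sketch with the 🤗 `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/whisper-large-v2-mr")
print(asr("audio.wav")["text"])  # replace with a path to a Marathi audio file
```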
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1931 | 3.04 | 200 | 0.2491 | 16.9270 |
| 0.1108 | 7.03 | 400 | 0.2379 | 15.2711 |
| 0.0548 | 11.02 | 600 | 0.2668 | 15.3120 |
| 0.0189 | 15.01 | 800 | 0.3108 | 15.2206 |
| 0.0078 | 18.05 | 1000 | 0.3499 | 15.5571 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Casti11/ppo-LunarLander-v2 | Casti11 | 2022-12-14T19:03:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:40:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.00 +/- 18.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repository for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is assumed; use the .zip actually stored in the repo
checkpoint = load_from_hub(repo_id="Casti11/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
msgerasyov/ppo-Huggy | msgerasyov | 2022-12-14T18:59:56Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-14T18:59:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: msgerasyov/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
marik0/q-FrozenLake-v1-4x4-Slippery | marik0 | 2022-12-14T18:45:55Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:53:45Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="marik0/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sgangireddy/whisper-medium-fleurs-tr | sgangireddy | 2022-12-14T18:39:10Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T11:28:21Z | ---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper medium Turkish FLEURS
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs tr_tr
type: google/fleurs
config: tr_tr
split: test
args: tr_tr
metrics:
- name: Wer
type: wer
value: 11.01835138387485
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium Turkish FLEURS
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the google/fleurs tr_tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2581
- Wer: 11.0184
## Model description
More information needed
## Intended uses & limitations
More information needed
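Until more detail is added, a minimal transcription sketch with the 🤗 `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sgangireddy/whisper-medium-fleurs-tr")
print(asr("audio.wav")["text"])  # replace with a path to a Turkish audio file
```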
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0003 | 90.01 | 1000 | 0.2581 | 11.0184 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
apzl/whisper-small-ml | apzl | 2022-12-14T18:32:23Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"common-voice",
"ml",
"dataset:common-voice-11",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-10T07:41:55Z | ---
language:
- ml
tags:
- whisper-event
- common-voice
license: "apache-2.0"
datasets:
- common-voice-11
metrics:
- wer = 37.2
- loss = 0.67
--- |
albertqueralto/ppo-Huggy | albertqueralto | 2022-12-14T18:30:11Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-14T17:53:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: albertqueralto/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
micole66/autotrain-feet-typelok-2473576411 | micole66 | 2022-12-14T18:19:38Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:micole66/autotrain-data-feet-typelok",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-14T18:18:58Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- micole66/autotrain-data-feet-typelok
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.5534616165654265
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2473576411
- CO2 Emissions (in grams): 0.5535
## Validation Metrics
- Loss: 0.279
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
ubiest/ppo-LunarLanderr-v2 | ubiest | 2022-12-14T18:09:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:01:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.06 +/- 17.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repository for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is assumed; use the .zip actually stored in the repo
checkpoint = load_from_hub(repo_id="ubiest/ppo-LunarLanderr-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kul-speech-lab/espnet2_asr_conformer_cgn | kul-speech-lab | 2022-12-14T18:08:39Z | 3 | 0 | null | [
"region:us"
] | null | 2022-12-14T17:46:43Z | Pre-trained ESPnet2 ASR model
Model: hybrid CTC/attention, 12-layer Conformer encoder, 6-layer Transformer decoder, fbank+pitch input features
Data: trained on CGN all components, VL only
Results: cgn-dev 10.75% WER
ESPnet version: 0.10.5a1 |
sgoodfriend/q-Taxi-v3 | sgoodfriend | 2022-12-14T18:02:49Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:35:06Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="sgoodfriend/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tomekkorbak/loving_noyce | tomekkorbak | 2022-12-14T17:56:39Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-12-14T17:56:31Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: loving_noyce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loving_noyce
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.00056},
'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1661599744},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'},
{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0'},
'num_additional_tokens': 2,
'path_or_name': 'tomekkorbak/goofy_pasteur'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'loving_noyce',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1661599744,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/51j97cgd |
nidek/q-FrozenLake-v1-8x8-Slippery | nidek | 2022-12-14T17:51:27Z | 0 | 0 | null | [
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:51:22Z | ---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.61 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="nidek/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tomekkorbak/dreamy_poitras | tomekkorbak | 2022-12-14T17:51:04Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-12-14T17:50:57Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: dreamy_poitras
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dreamy_poitras
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.0},
'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'},
'num_additional_tokens': 2,
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'dreamy_poitras',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
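For illustration only (not the authors' actual implementation), the `conditional_training_config` in the full config above roughly corresponds to prepending a control token to each document according to a score threshold, occasionally dropping the prefix:

```python
import random

def add_control_prefix(text, score, threshold=0.0, drop_token_fraction=0.01,
                       aligned_prefix="<|aligned|>", misaligned_prefix="<|misaligned|>"):
    # Assumption: scores at or below the threshold count as aligned (e.g. no PII detected)
    prefix = aligned_prefix if score <= threshold else misaligned_prefix
    # Occasionally drop the prefix so the model also learns to generate without one
    if random.random() < drop_token_fraction:
        return text
    return prefix + text
```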
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1b5oov3g |
tomekkorbak/romantic_bose | tomekkorbak | 2022-12-14T17:51:03Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-12-14T17:50:56Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: romantic_bose
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# romantic_bose
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'},
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'romantic_bose',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2aqujp4e |
skandavivek2/roberta-finetuned-subjqa-movies_1 | skandavivek2 | 2022-12-14T17:49:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-14T17:35:37Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_1
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
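Since the card provides no inference snippet, here is a minimal, hedged example of extractive QA with this checkpoint (the question and context strings are placeholders):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="skandavivek2/roberta-finetuned-subjqa-movies_1")
result = qa(
    question="How was the acting in the movie?",
    context="The plot dragged in places, but the lead actor delivered a subtle, convincing performance.",
)
print(result["answer"], result["score"])
```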
|
nidek/q-FrozenLake-v1-4x4-Slippery | nidek | 2022-12-14T17:35:46Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:35:42Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.77 +/- 0.42
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="nidek/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JosephusCheung/ACertainty | JosephusCheung | 2022-12-14T17:29:51Z | 61 | 99 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"doi:10.57967/hf/0200",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-14T15:36:27Z | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
widget:
- text: "masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden"
example_title: "example 1girl"
- text: "masterpiece, best quality, 1boy, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden"
example_title: "example 1boy"
---
# ACertainty
ACertainty is a carefully designed model that is well suited for further fine-tuning and for DreamBooth training. It is easier to train than other anime-style Stable Diffusion models, and is less biased and more balanced as a starting point for further development. It is also less likely to be biased by the laion-aesthetics preferences introduced with Stable-Diffusion-v1-4 and later.
This is not the base of ACertainModel, but you can use this model as a new base to train your own DreamBooth model on a few themes, characters, or styles.
e.g. **_masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden_**
## About the online preview with the Hosted Inference API and generation with this model
Parameters cannot be modified in the hosted widget, which appears to generate with *Clip skip: 1*; for better results, *Clip skip: 2* is strongly recommended.
Here is an example of inference settings to use on your own server, if applicable: *Steps: 28, Sampler: Euler a, CFG scale: 11, Clip skip: 2*.
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "JosephusCheung/ACertainty"
branch_name = "main"
pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "pikachu"
image = pipe(prompt).images[0]
image.save("./pikachu.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Is it a NovelAI-based model? What is the relationship with SD1.2 and SD1.4?
See [ASimilarityCalculatior](https://huggingface.co/JosephusCheung/ASimilarityCalculatior) |
tzvc/a9054d36-59d1-4374-ab1f-2ca457b539e2 | tzvc | 2022-12-14T17:22:06Z | 6 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-14T16:49:36Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: a portrait of [V]
---
### training params
```json
{
"pretrained_model_name_or_path": "CompVis/stable-diffusion-v1-4",
"instance_data_dir": "./a9054d36-59d1-4374-ab1f-2ca457b539e2/instance_data",
"class_data_dir": "./class_data/a-portrait-of-a-person",
"output_dir": "./a9054d36-59d1-4374-ab1f-2ca457b539e2/",
"with_prior_preservation": true,
"prior_loss_weight": 1.0,
"instance_prompt": "a portrait of [V]",
"class_prompt": "a portrait of a person",
"resolution": 512,
"train_batch_size": 1,
"gradient_accumulation_steps": 1,
"gradient_checkpointing": true,
"use_8bit_adam": true,
"learning_rate": 5e-06,
"lr_scheduler": "constant",
"lr_warmup_steps": 0,
"num_class_images": 200,
"max_train_steps": 1050,
"mixed_precision": "fp16"
}
```
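As a hedged illustration (assuming the trained weights are hosted in this repository in diffusers format), the instance prompt above can be used directly with the fine-tuned pipeline:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned weights and render the instance prompt
pipe = StableDiffusionPipeline.from_pretrained(
    "tzvc/a9054d36-59d1-4374-ab1f-2ca457b539e2", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait of [V]").images[0]
image.save("portrait.png")
```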
|
gonend/ppo-LunarLander-v2 | gonend | 2022-12-14T17:20:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T15:57:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.01 +/- 9.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual sb3 export convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and restore the policy
checkpoint = load_from_hub(repo_id="gonend/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
besa2001/q-FrozenLake-v1-4x4-noSlippery | besa2001 | 2022-12-14T17:16:48Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-14T17:16:42Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="besa2001/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jraramhoej/whisper-small-lt | jraramhoej | 2022-12-14T17:15:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"lt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T08:34:29Z | ---
language:
- lt
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Lt
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: lt
split: test
args: lt
metrics:
- name: Wer
type: wer
value: 35.859768928415455
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Lt
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5724
- Wer: 35.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0237 | 6.0 | 1000 | 0.4745 | 37.9839 |
| 0.0016 | 12.01 | 2000 | 0.5128 | 35.9749 |
| 0.0008 | 18.01 | 3000 | 0.5458 | 35.7843 |
| 0.0005 | 24.02 | 4000 | 0.5652 | 35.8240 |
| 0.0004 | 30.02 | 5000 | 0.5724 | 35.8598 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
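No inference snippet is provided in the card; here is a minimal, hedged example using the transformers ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jraramhoej/whisper-small-lt")
print(asr("sample_lt.wav")["text"])
```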
|
lakssrini/sd-class-living-room-256 | lakssrini | 2022-12-14T17:12:10Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-14T08:48:13Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute living rooms.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('lakssrini/sd-class-living-room-256')
image = pipeline().images[0]
image
```
|
SebLih/whisper-SV | SebLih | 2022-12-14T17:11:02Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-13T14:39:26Z | ---
language:
- sv
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Small SV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small SV
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
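No inference snippet is provided in the card; here is a minimal, hedged example with the ASR pipeline, chunking longer audio (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="SebLih/whisper-SV", chunk_length_s=30)
print(asr("sample_sv.wav")["text"])
```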
|