modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-14 18:27:59) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 520 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-14 18:27:48) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
epsil/sd-class-butterflies-64 | epsil | 2022-11-29T18:13:23Z | 5 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T18:13:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute π¦.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("epsil/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
epsil/sd-class-butterflies-32 | epsil | 2022-11-29T17:42:54Z | 6 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T17:42:32Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute π¦.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("epsil/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
tomekkorbak/clever_goodall | tomekkorbak | 2022-11-29T17:20:58Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-29T03:29:26Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: clever_goodall
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clever_goodall
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00078,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'clever_goodall',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2i1d4a3i |
ser-mei/borges-gpt-collab | ser-mei | 2022-11-29T17:14:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-06T20:48:40Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: borges-gpt-collab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# borges-gpt-collab
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.2135 | 0.96 | 7 | 10.2022 |
| 10.3195 | 1.96 | 14 | 9.6343 |
| 9.9127 | 2.96 | 21 | 9.4637 |
| 9.7295 | 3.96 | 28 | 9.2993 |
| 9.527 | 4.96 | 35 | 9.0962 |
| 9.2648 | 5.96 | 42 | 8.8294 |
| 8.9309 | 6.96 | 49 | 8.5103 |
| 8.5639 | 7.96 | 56 | 8.1858 |
| 8.2034 | 8.96 | 63 | 7.8816 |
| 7.8665 | 9.96 | 70 | 7.6303 |
| 7.5715 | 10.96 | 77 | 7.4307 |
| 7.3259 | 11.96 | 84 | 7.2632 |
| 7.136 | 12.96 | 91 | 7.1494 |
| 6.9558 | 13.96 | 98 | 7.0957 |
| 6.8068 | 14.96 | 105 | 7.0199 |
| 6.6656 | 15.96 | 112 | 6.9554 |
| 6.5264 | 16.96 | 119 | 6.9324 |
| 6.3843 | 17.96 | 126 | 6.8940 |
| 6.2204 | 18.96 | 133 | 6.8799 |
| 6.0915 | 19.96 | 140 | 6.8788 |
| 5.9532 | 20.96 | 147 | 6.8719 |
| 5.8169 | 21.96 | 154 | 6.8647 |
| 5.6531 | 22.96 | 161 | 6.8865 |
| 5.5125 | 23.96 | 168 | 6.8940 |
| 5.3666 | 24.96 | 175 | 6.9248 |
| 5.2377 | 25.96 | 182 | 6.9421 |
| 5.1115 | 26.96 | 189 | 6.9631 |
| 4.9639 | 27.96 | 196 | 7.0135 |
| 4.824 | 28.96 | 203 | 7.0352 |
| 4.6886 | 29.96 | 210 | 7.0729 |
| 4.5538 | 30.96 | 217 | 7.1385 |
| 4.4126 | 31.96 | 224 | 7.1561 |
| 4.2486 | 32.96 | 231 | 7.1792 |
| 4.0955 | 33.96 | 238 | 7.2767 |
| 3.9333 | 34.96 | 245 | 7.2815 |
| 3.7914 | 35.96 | 252 | 7.3463 |
| 3.618 | 36.96 | 259 | 7.3864 |
| 3.4453 | 37.96 | 266 | 7.4394 |
| 3.2795 | 38.96 | 273 | 7.4730 |
| 3.0994 | 39.96 | 280 | 7.4880 |
| 2.9143 | 40.96 | 287 | 7.5567 |
| 2.741 | 41.96 | 294 | 7.5451 |
| 2.5698 | 42.96 | 301 | 7.5966 |
| 2.3855 | 43.96 | 308 | 7.6898 |
| 2.2059 | 44.96 | 315 | 7.6957 |
| 2.0634 | 45.96 | 322 | 7.7503 |
| 1.8719 | 46.96 | 329 | 7.8369 |
| 1.7059 | 47.96 | 336 | 7.8411 |
| 1.54 | 48.96 | 343 | 7.8316 |
| 1.3768 | 49.96 | 350 | 7.8630 |
| 1.2177 | 50.96 | 357 | 7.9360 |
| 1.0663 | 51.96 | 364 | 7.9886 |
| 0.9569 | 52.96 | 371 | 8.0187 |
| 0.8281 | 53.96 | 378 | 8.0274 |
| 0.7074 | 54.96 | 385 | 8.1010 |
| 0.6095 | 55.96 | 392 | 8.1594 |
| 0.5262 | 56.96 | 399 | 8.1010 |
| 0.4678 | 57.96 | 406 | 8.1440 |
| 0.4105 | 58.96 | 413 | 8.1638 |
| 0.3766 | 59.96 | 420 | 8.1534 |
| 0.3425 | 60.96 | 427 | 8.1980 |
| 0.321 | 61.96 | 434 | 8.2184 |
| 0.3061 | 62.96 | 441 | 8.2499 |
| 0.2852 | 63.96 | 448 | 8.1690 |
| 0.2698 | 64.96 | 455 | 8.2160 |
| 0.2628 | 65.96 | 462 | 8.2616 |
| 0.2619 | 66.96 | 469 | 8.2948 |
| 0.2544 | 67.96 | 476 | 8.3553 |
| 0.2414 | 68.96 | 483 | 8.3712 |
| 0.2177 | 69.96 | 490 | 8.3468 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+rocm5.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
SALT-NLP/FLANG-BERT | SALT-NLP | 2022-11-29T17:06:37Z | 83 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Financial Language Modelling",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-06-24T02:37:04Z |
---
language: "en"
tags:
- Financial Language Modelling
widget:
- text: "Stocks rallied and the British pound [MASK]."
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-BERT
FLANG-BERT is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the BERT language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
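Since FLANG-BERT keeps the standard BERT masked-language-modelling head, it should be usable through the ordinary `transformers` fill-mask pipeline. The snippet below is only an illustrative sketch (the example sentence is the one from the widget above), not an official recipe from the authors.
```python
from transformers import pipeline

# Load FLANG-BERT as a plain fill-mask model.
fill_mask = pipeline("fill-mask", model="SALT-NLP/FLANG-BERT")

# Example sentence taken from this card's widget.
for prediction in fill_mask("Stocks rallied and the British pound [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```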
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-BERT related issues and questions.
---
license: afl-3.0
--- |
kejian/debug-pt-conditional | kejian | 2022-11-29T15:03:05Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-29T14:52:56Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: debug-pt-conditional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug-pt-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 128,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 128,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'debug-pt-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 8,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 10,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/3my099dp |
KPEKEP/rugpt_chitchat | KPEKEP | 2022-11-29T14:48:36Z | 42 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"license:unlicense",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-29T14:48:34Z | ---
pipeline_tag: text-generation
tags:
- PyTorch
- Transformers
- gpt2
license: unlicense
language: ru
widget:
- text: >-
    - У Джульетты было 7 пончиков, а потом она 3 съела. Сколько у нее осталось пончиков? -
- text: >-
    - Поглажено 4 манула. Осталось погладить 6. Сколько всего манулов надо погладить? -
- text: '- Для начала скажи, чему равно пятью девять? -'
- text: '- ты чё такой борзый? -'
- text: '- Привет! Как ваше ничего? -'
duplicated_from: inkoziev/rugpt_chitchat
---
## Russian Chit-chat, Deductive and Common Sense reasoning model
The model is the core of a prototype [dialogue system](https://github.com/Koziev/chatbot) with two main functions.
The first function is **chit-chat reply generation**. The prompt is the dialogue history (the preceding turns, from 1 to 10).
```
- Привет, как дела?
- Привет, так себе.
- <<< the model's reply is expected here >>>
```
The second function of the model is deriving an answer to a given question, relying on additional facts or on "common sense". It is assumed that the relevant facts are retrieved from an external store (a knowledge base) by another model, for example [sbert_pq](https://huggingface.co/inkoziev/sbert_pq).
Using the supplied fact(s) and the question text, the model builds a grammatical and maximally concise answer, just as a person would in a similar communicative situation. The relevant facts should be stated before the text of the question, as if the interlocutor themselves had said them:
```
- Сегодня 15 сентября. Какой сейчас у нас месяц?
- Сентябрь
```
The model does not expect all of the facts retrieved and added to the dialogue context to actually be relevant to the question. The model that retrieves information from the knowledge base may therefore sacrifice precision in favour of recall and add something superfluous. In that case the chit-chat model itself picks the necessary facts from those added to the context and ignores the rest. The current version of the model allows up to 5 facts before the question. For example:
```
- Стасу 16 лет. Стас живет в Подольске. У Стаса нет своей машины. Где живет Стас?
- в Подольске
```
In some cases the model can perform **syllogistic inference** of the answer, relying on two premises linked to each other. The conclusion that follows from the two premises is not stated explicitly, but is *implicitly* used to derive the answer:
```
- Смертен ли Аристофан, если он был греческим философом, а все философы смертны?
- Да
```
As the examples above show, the format of the factual information fed into the model for inference is as natural and free-form as possible.
Besides logical inference, the model can also solve simple arithmetic problems at the level of grades 1-2 of primary school, with two numeric arguments:
```
- Чему равно 2+8?
- 10
```
### Model variants and metrics
The model released so far has 760 million parameters, i.e. the scale of sberbank-ai/rugpt3large_based_on_gpt2. Below is the measured accuracy of solving arithmetic problems on a held-out test set of samples:
| base model | arith. accuracy |
| --------------------------------------- | --------------- |
| sberbank-ai/rugpt3large_based_on_gpt2 | 0.91 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 0.70 |
| sberbank-ai/rugpt3small_based_on_gpt2 | 0.58 |
| tinkoff-ai/ruDialoGPT-small | 0.44 |
| tinkoff-ai/ruDialoGPT-medium | 0.69 |
The figure 0.91 in the "arith. accuracy" column means that 91% of the test problems were solved completely correctly.
Any deviation of the generated answer from the reference answer is counted as an error; for example, producing the answer "120" instead of "119" is also recorded as an error.
### Usage example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_chitchat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
# Feed the model the last 2-3 turns of the dialogue. Each turn goes on its own line and starts with the "-" character.
input_text = """<s>- Привет! Что делаешь?
- Привет :) В такси еду
-"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(text)
```
### Contacts
If you have any questions about using this model, or suggestions for improving it, write to me at [email protected]
### Citation:
```
@MISC{rugpt_chitchat,
author = {Ilya Koziev},
    title = {Russian Chit-chat with Common Sense Reasoning},
url = {https://huggingface.co/inkoziev/rugpt_chitchat},
year = 2022
}
```
|
deblagoj/xlm-roberta-base-finetuned-panx-de | deblagoj | 2022-11-29T14:40:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-29T14:12:37Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.86520554167613
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1684
- F1: 0.8652
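As a quick sanity check, the checkpoint can presumably be loaded with the standard `transformers` token-classification pipeline; the snippet below is an unofficial sketch, and the German example sentence is illustrative rather than taken from the dataset.
```python
from transformers import pipeline

# Unofficial sketch: load the fine-tuned checkpoint as an NER pipeline.
ner = pipeline(
    "token-classification",
    model="deblagoj/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# Illustrative German sentence (not from PAN-X.de).
print(ner("Angela Merkel besuchte im Mai die Universität Heidelberg."))
```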
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2655 | 1.0 | 2097 | 0.1958 | 0.8283 |
| 0.1479 | 2.0 | 4194 | 0.1581 | 0.8505 |
| 0.0852 | 3.0 | 6291 | 0.1684 | 0.8652 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
|
multimodalart/polisteps-768 | multimodalart | 2022-11-29T14:26:55Z | 21 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-11-29T14:25:16Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### polisteps 768 Dreambooth model trained by multimodalart with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v2-768 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
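If you would rather run it locally than in the Colab notebook, a minimal `diffusers` sketch could look like the following; the prompt text, fp16 setting, and output size are assumptions, and the only requirement from this card is that the concept token `plstpz` appears in the prompt.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tune; fp16 on GPU is an assumption, not a requirement.
pipe = StableDiffusionPipeline.from_pretrained(
    "multimodalart/polisteps-768", torch_dtype=torch.float16
).to("cuda")

# The concept token "plstpz" must appear in the prompt; the rest of the prompt is illustrative.
image = pipe("a photo of plstpz", height=768, width=768).images[0]
image.save("polisteps_sample.png")
```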
Sample pictures of:
plstpz (use that on your prompt)

|
thliang01/sd-class-butterflies-64 | thliang01 | 2022-11-29T14:23:01Z | 37 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T14:22:36Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute π¦.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("thliang01/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
jenniferjjc/roberta-base-bne-finetuned-amazon_reviews_multi | jenniferjjc | 2022-11-29T14:05:58Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-29T13:43:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: train
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93275
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1945 | 1.0 | 1250 | 0.1731 | 0.9335 |
| 0.1004 | 2.0 | 2500 | 0.2223 | 0.9327 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Evolett/rubert-tiny2-finetuned-ner | Evolett | 2022-11-29T13:55:33Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-29T09:43:37Z | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: rubert-tiny2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7137235200535879
- name: Recall
type: recall
value: 0.7270556124189697
- name: F1
type: f1
value: 0.7203278827058774
- name: Accuracy
type: accuracy
value: 0.9363443855435385
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2-finetuned-ner
This model was trained from scratch on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Precision: 0.7137
- Recall: 0.7271
- F1: 0.7203
- Accuracy: 0.9363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6327 | 1.0 | 878 | 0.3218 | 0.6068 | 0.6009 | 0.6038 | 0.9114 |
| 0.2937 | 2.0 | 1756 | 0.2434 | 0.6864 | 0.7013 | 0.6938 | 0.9307 |
| 0.2357 | 3.0 | 2634 | 0.2259 | 0.7137 | 0.7271 | 0.7203 | 0.9363 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sayby/q-Taxi-v3 | sayby | 2022-11-29T13:45:48Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-11-29T13:36:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.66 +/- 2.55
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper utilities from the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="sayby/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
kaizerkam/sd-class-comics-64 | kaizerkam | 2022-11-29T13:26:50Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T13:25:39Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of comic scenes.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("kaizerkam/sd-class-comics-64")
image = pipeline().images[0]
image
```
|
pig4431/rtm_roBERTa_5E | pig4431 | 2022-11-29T12:34:52Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-29T11:02:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: rtm_roBERTa_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtm_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6545
- Accuracy: 0.8667
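For a quick try-out, the checkpoint can presumably be used through the standard `transformers` text-classification pipeline; the snippet below is an unofficial sketch, and the review snippets are made up for illustration rather than drawn from the dataset.
```python
from transformers import pipeline

# Unofficial sketch: load the fine-tuned RoBERTa as a sentiment classifier.
classifier = pipeline("text-classification", model="pig4431/rtm_roBERTa_5E")

# Illustrative movie-review snippets (not from rotten_tomatoes).
reviews = [
    "A warm, witty film that earns every one of its laughs.",
    "Two hours of wooden dialogue and recycled plot twists.",
]
print(classifier(reviews))
```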
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6955 | 0.09 | 50 | 0.6752 | 0.7867 |
| 0.5362 | 0.19 | 100 | 0.4314 | 0.8333 |
| 0.4065 | 0.28 | 150 | 0.4476 | 0.8533 |
| 0.3563 | 0.37 | 200 | 0.3454 | 0.8467 |
| 0.3729 | 0.47 | 250 | 0.3421 | 0.86 |
| 0.3355 | 0.56 | 300 | 0.3253 | 0.8467 |
| 0.338 | 0.66 | 350 | 0.3859 | 0.8733 |
| 0.2875 | 0.75 | 400 | 0.3537 | 0.8533 |
| 0.3477 | 0.84 | 450 | 0.3636 | 0.8467 |
| 0.3259 | 0.94 | 500 | 0.3115 | 0.88 |
| 0.3204 | 1.03 | 550 | 0.4295 | 0.8333 |
| 0.2673 | 1.12 | 600 | 0.3369 | 0.88 |
| 0.2479 | 1.22 | 650 | 0.3620 | 0.8667 |
| 0.2821 | 1.31 | 700 | 0.3582 | 0.8733 |
| 0.2355 | 1.4 | 750 | 0.3130 | 0.8867 |
| 0.2357 | 1.5 | 800 | 0.3229 | 0.86 |
| 0.2725 | 1.59 | 850 | 0.3035 | 0.88 |
| 0.2425 | 1.69 | 900 | 0.3146 | 0.8533 |
| 0.1977 | 1.78 | 950 | 0.4079 | 0.86 |
| 0.2557 | 1.87 | 1000 | 0.4132 | 0.8733 |
| 0.2395 | 1.97 | 1050 | 0.3336 | 0.86 |
| 0.1951 | 2.06 | 1100 | 0.5068 | 0.84 |
| 0.1631 | 2.15 | 1150 | 0.5209 | 0.8867 |
| 0.2192 | 2.25 | 1200 | 0.4766 | 0.8733 |
| 0.1725 | 2.34 | 1250 | 0.3962 | 0.8667 |
| 0.2215 | 2.43 | 1300 | 0.4133 | 0.8867 |
| 0.1602 | 2.53 | 1350 | 0.5564 | 0.8533 |
| 0.1986 | 2.62 | 1400 | 0.5826 | 0.86 |
| 0.1972 | 2.72 | 1450 | 0.5412 | 0.8667 |
| 0.2299 | 2.81 | 1500 | 0.4636 | 0.8733 |
| 0.2028 | 2.9 | 1550 | 0.5096 | 0.8667 |
| 0.2591 | 3.0 | 1600 | 0.3790 | 0.8467 |
| 0.1197 | 3.09 | 1650 | 0.5704 | 0.8467 |
| 0.174 | 3.18 | 1700 | 0.5904 | 0.8467 |
| 0.1499 | 3.28 | 1750 | 0.6066 | 0.86 |
| 0.1687 | 3.37 | 1800 | 0.6353 | 0.8533 |
| 0.1463 | 3.46 | 1850 | 0.6434 | 0.8467 |
| 0.1373 | 3.56 | 1900 | 0.6507 | 0.8533 |
| 0.1339 | 3.65 | 1950 | 0.6014 | 0.86 |
| 0.1488 | 3.75 | 2000 | 0.7245 | 0.84 |
| 0.1725 | 3.84 | 2050 | 0.6214 | 0.86 |
| 0.1443 | 3.93 | 2100 | 0.6446 | 0.8533 |
| 0.1619 | 4.03 | 2150 | 0.6223 | 0.8533 |
| 0.1153 | 4.12 | 2200 | 0.6579 | 0.8333 |
| 0.1159 | 4.21 | 2250 | 0.6760 | 0.8667 |
| 0.0948 | 4.31 | 2300 | 0.7172 | 0.8467 |
| 0.1373 | 4.4 | 2350 | 0.7346 | 0.8467 |
| 0.1463 | 4.49 | 2400 | 0.6453 | 0.8533 |
| 0.0758 | 4.59 | 2450 | 0.6579 | 0.86 |
| 0.16 | 4.68 | 2500 | 0.6556 | 0.8667 |
| 0.112 | 4.78 | 2550 | 0.6490 | 0.88 |
| 0.1151 | 4.87 | 2600 | 0.6525 | 0.8667 |
| 0.2152 | 4.96 | 2650 | 0.6545 | 0.8667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/125m-dalio-book-handwritten-io-constant-1e-6-v2 | AlekseyKorshuk | 2022-11-29T12:29:49Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-29T10:31:18Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
metrics:
- accuracy
model-index:
- name: 125m-dalio-book-handwritten-io-constant-1e-6-v2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
type: AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2
metrics:
- name: Accuracy
type: accuracy
value: 0.23359387091781458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 125m-dalio-book-handwritten-io-constant-1e-6-v2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0859
- Accuracy: 0.2336
- Perplexity: 21.8880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 3.3352 | 0.01 | 1 | 3.1738 | 0.2305 | 23.8988 |
| 3.3091 | 0.03 | 2 | 3.1738 | 0.2305 | 23.8988 |
| 3.3347 | 0.04 | 3 | 3.1738 | 0.2305 | 23.8988 |
| 3.1445 | 0.05 | 4 | 3.1738 | 0.2305 | 23.8988 |
| 2.8918 | 0.07 | 5 | 3.1738 | 0.2305 | 23.8988 |
| 3.2068 | 0.08 | 6 | 3.1738 | 0.2305 | 23.8988 |
| 3.6245 | 0.09 | 7 | 3.1719 | 0.2305 | 23.8522 |
| 3.2256 | 0.11 | 8 | 3.1719 | 0.2305 | 23.8522 |
| 2.9991 | 0.12 | 9 | 3.1699 | 0.2305 | 23.8056 |
| 3.3257 | 0.13 | 10 | 3.1680 | 0.2306 | 23.7592 |
| 3.1199 | 0.15 | 11 | 3.1660 | 0.2306 | 23.7128 |
| 3.3735 | 0.16 | 12 | 3.1660 | 0.2306 | 23.7128 |
| 3.0051 | 0.17 | 13 | 3.1641 | 0.2307 | 23.6665 |
| 3.2695 | 0.19 | 14 | 3.1621 | 0.2308 | 23.6204 |
| 3.2004 | 0.2 | 15 | 3.1602 | 0.2309 | 23.5743 |
| 3.2075 | 0.21 | 16 | 3.1582 | 0.2308 | 23.5283 |
| 3.321 | 0.23 | 17 | 3.1562 | 0.2308 | 23.4824 |
| 3.4026 | 0.24 | 18 | 3.1543 | 0.2309 | 23.4366 |
| 3.0383 | 0.25 | 19 | 3.1523 | 0.2309 | 23.3908 |
| 3.166 | 0.27 | 20 | 3.1504 | 0.2309 | 23.3452 |
| 3.144 | 0.28 | 21 | 3.1484 | 0.2310 | 23.2996 |
| 3.1624 | 0.29 | 22 | 3.1484 | 0.2310 | 23.2996 |
| 3.0332 | 0.31 | 23 | 3.1465 | 0.2310 | 23.2542 |
| 3.3745 | 0.32 | 24 | 3.1445 | 0.2311 | 23.2088 |
| 3.0823 | 0.33 | 25 | 3.1426 | 0.2312 | 23.1635 |
| 3.6021 | 0.35 | 26 | 3.1406 | 0.2312 | 23.1183 |
| 3.1125 | 0.36 | 27 | 3.1387 | 0.2313 | 23.0732 |
| 3.1406 | 0.37 | 28 | 3.1387 | 0.2314 | 23.0732 |
| 3.1736 | 0.39 | 29 | 3.1367 | 0.2314 | 23.0282 |
| 3.1104 | 0.4 | 30 | 3.1348 | 0.2315 | 22.9832 |
| 3.1301 | 0.41 | 31 | 3.1328 | 0.2316 | 22.9384 |
| 3.3376 | 0.43 | 32 | 3.1309 | 0.2315 | 22.8936 |
| 3.218 | 0.44 | 33 | 3.1309 | 0.2316 | 22.8936 |
| 3.0786 | 0.45 | 34 | 3.1289 | 0.2316 | 22.8490 |
| 3.0125 | 0.47 | 35 | 3.1270 | 0.2317 | 22.8044 |
| 3.2634 | 0.48 | 36 | 3.1270 | 0.2317 | 22.8044 |
| 2.9888 | 0.49 | 37 | 3.125 | 0.2318 | 22.7599 |
| 3.1624 | 0.51 | 38 | 3.1230 | 0.2318 | 22.7155 |
| 2.9807 | 0.52 | 39 | 3.1211 | 0.2319 | 22.6712 |
| 3.446 | 0.53 | 40 | 3.1211 | 0.2319 | 22.6712 |
| 3.1338 | 0.55 | 41 | 3.1191 | 0.2320 | 22.6269 |
| 3.1841 | 0.56 | 42 | 3.1191 | 0.2320 | 22.6269 |
| 3.1079 | 0.57 | 43 | 3.1172 | 0.2320 | 22.5828 |
| 3.0918 | 0.59 | 44 | 3.1152 | 0.2321 | 22.5387 |
| 3.0302 | 0.6 | 45 | 3.1152 | 0.2322 | 22.5387 |
| 3.1123 | 0.61 | 46 | 3.1133 | 0.2323 | 22.4947 |
| 2.9985 | 0.63 | 47 | 3.1113 | 0.2324 | 22.4508 |
| 3.3816 | 0.64 | 48 | 3.1113 | 0.2324 | 22.4508 |
| 3.0813 | 0.65 | 49 | 3.1094 | 0.2324 | 22.4070 |
| 3.2024 | 0.67 | 50 | 3.1094 | 0.2325 | 22.4070 |
| 3.0178 | 0.68 | 51 | 3.1074 | 0.2325 | 22.3633 |
| 3.1646 | 0.69 | 52 | 3.1074 | 0.2326 | 22.3633 |
| 3.0046 | 0.71 | 53 | 3.1055 | 0.2327 | 22.3197 |
| 3.0266 | 0.72 | 54 | 3.1055 | 0.2327 | 22.3197 |
| 3.3857 | 0.73 | 55 | 3.1035 | 0.2327 | 22.2761 |
| 3.064 | 0.75 | 56 | 3.1035 | 0.2328 | 22.2761 |
| 3.176 | 0.76 | 57 | 3.1016 | 0.2328 | 22.2327 |
| 3.1851 | 0.77 | 58 | 3.1016 | 0.2329 | 22.2327 |
| 3.0811 | 0.79 | 59 | 3.0996 | 0.2329 | 22.1893 |
| 3.0205 | 0.8 | 60 | 3.0996 | 0.2330 | 22.1893 |
| 3.26 | 0.81 | 61 | 3.0977 | 0.2330 | 22.1460 |
| 3.2922 | 0.83 | 62 | 3.0977 | 0.2331 | 22.1460 |
| 3.5349 | 0.84 | 63 | 3.0957 | 0.2331 | 22.1028 |
| 3.3525 | 0.85 | 64 | 3.0957 | 0.2331 | 22.1028 |
| 3.135 | 0.87 | 65 | 3.0938 | 0.2331 | 22.0596 |
| 3.1707 | 0.88 | 66 | 3.0938 | 0.2332 | 22.0596 |
| 3.0127 | 0.89 | 67 | 3.0918 | 0.2332 | 22.0166 |
| 3.0952 | 0.91 | 68 | 3.0918 | 0.2332 | 22.0166 |
| 3.1023 | 0.92 | 69 | 3.0898 | 0.2334 | 21.9736 |
| 3.3821 | 0.93 | 70 | 3.0898 | 0.2334 | 21.9736 |
| 3.1118 | 0.95 | 71 | 3.0879 | 0.2334 | 21.9308 |
| 3.1143 | 0.96 | 72 | 3.0879 | 0.2335 | 21.9308 |
| 3.1118 | 0.97 | 73 | 3.0879 | 0.2335 | 21.9308 |
| 3.0596 | 0.99 | 74 | 3.0859 | 0.2336 | 21.8880 |
| 3.1033 | 1.0 | 75 | 3.0859 | 0.2336 | 21.8880 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nlp-tlp/mwo-re | nlp-tlp | 2022-11-29T12:11:48Z | 4 | 0 | flair | [
"flair",
"pytorch",
"text-classification",
"text-classification-model",
"en",
"dataset:mwo_re",
"region:us"
]
| text-classification | 2022-11-29T12:09:12Z | ---
tags:
- flair
- text-classification
- text-classification-model
language: en
datasets:
- mwo_re
widget:
- text: "pump broken Item Observation pump is broken"
---
## MWO NER Test
A flair-based RE model for MWOs. There are three classes: `HAS_ACTIVITY`, `HAS_OBSERVATION`, and `APPEARS_WITH`.
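A minimal usage sketch with the `flair` API is given below; it assumes the checkpoint can be loaded directly from the Hub with `TextClassifier.load`, and the input string is the widget example from this card.
```python
from flair.data import Sentence
from flair.models import TextClassifier

# Assumption: the flair checkpoint can be loaded straight from the Hugging Face Hub.
classifier = TextClassifier.load("nlp-tlp/mwo-re")

# Widget example from this card: an entity pair followed by the work-order text.
sentence = Sentence("pump broken Item Observation pump is broken")
classifier.predict(sentence)

# Prints the predicted relation label (HAS_ACTIVITY, HAS_OBSERVATION, or APPEARS_WITH).
print(sentence.labels)
```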
|
mepi/KR-FinBert-finetuned-ner | mepi | 2022-11-29T11:43:09Z | 114 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-29T11:08:10Z | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: KR-FinBert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: train
args: ner
metrics:
- name: Precision
type: precision
value: 0.70817831734221
- name: Recall
type: recall
value: 0.7610296696359683
- name: F1
type: f1
value: 0.7336533910338766
- name: Accuracy
type: accuracy
value: 0.9504335292160994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KR-FinBert-finetuned-ner
This model is a fine-tuned version of [snunlp/KR-FinBert](https://huggingface.co/snunlp/KR-FinBert) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1634
- Precision: 0.7082
- Recall: 0.7610
- F1: 0.7337
- Accuracy: 0.9504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2028 | 1.0 | 1313 | 0.1852 | 0.6650 | 0.7060 | 0.6849 | 0.9406 |
| 0.1232 | 2.0 | 2626 | 0.1627 | 0.7028 | 0.7459 | 0.7237 | 0.9487 |
| 0.0942 | 3.0 | 3939 | 0.1634 | 0.7082 | 0.7610 | 0.7337 | 0.9504 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
LuisQ/LuisQ_sd-class-butterflies-64 | LuisQ | 2022-11-29T11:43:04Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T16:21:27Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute π¦.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("LuisQ/LuisQ_sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
louisbetsch/tweetclassification-bf-model | louisbetsch | 2022-11-29T10:37:35Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-11-22T09:43:52Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 850 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 850,
"warmup_steps": 85,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ConvLab/t5-small-goal2dialogue-multiwoz21 | ConvLab | 2022-11-29T10:32:56Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"t5-small",
"dialogue generation",
"conversational system",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-25T07:02:25Z | ---
language:
- en
license: apache-2.0
tags:
- t5-small
- text2text-generation
- dialogue generation
- conversational system
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
metrics:
- LM loss
model-index:
- name: t5-small-goal2dialogue-multiwoz21
results:
- task:
type: text2text-generation
name: dialogue generation
dataset:
type: ConvLab/multiwoz21
name: MultiWOZ 2.1
split: validation
revision: 5f55375edbfe0270c20bcf770751ad982c0e6614
metrics:
- type: Language model loss
value: 1.5253684520721436
name: LM loss
- task:
type: text2text-generation
name: dialogue generation
dataset:
type: ConvLab/multiwoz21
name: MultiWOZ 2.1
split: test
revision: 5f55375edbfe0270c20bcf770751ad982c0e6614
metrics:
- type: Language model loss
value: 1.515929937362671
name: LM loss
widget:
- text: "You are traveling to Cambridge and looking forward to try local restaurants. You are looking for a particular attraction. Its name is called nusha. Make sure you get postcode and address. You are also looking for a place to dine. The restaurant should be in the expensive price range and should serve indian food. The restaurant should be in the centre. Make sure you get address"
- text: "You want to book a taxi. The taxi should go to pizza hut fen ditton and should depart from saint john's college. The taxi should leave after 17:15. Make sure you get car type and contact number"
inference:
parameters:
max_length: 1024
---
# t5-small-goal2dialogue-multiwoz21
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
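For a quick standalone check outside ConvLab-3, the checkpoint can also be driven directly with the Transformers seq2seq API. This is only a sketch: the goal text mirrors the widget examples above, and the prompt format and generation settings are assumptions rather than the authors' recommended setup.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ConvLab/t5-small-goal2dialogue-multiwoz21"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A user goal phrased like the widget examples; the exact prompt format is an assumption.
goal = (
    "You want to book a taxi. The taxi should go to pizza hut fen ditton and should "
    "depart from saint john's college. The taxi should leave after 17:15. "
    "Make sure you get car type and contact number"
)

inputs = tokenizer(goal, return_tensors="pt")
outputs = model.generate(**inputs, max_length=1024)  # max_length follows the widget inference settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```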
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/mullen_usa-nasdaq | huggingtweets | 2022-11-29T10:30:31Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-29T10:24:49Z | ---
language: en
thumbnail: http://www.huggingtweets.com/mullen_usa-nasdaq/1669717561312/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521140484512620544/Ev6EIPlD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433904015834705921/tRPvxdFF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nasdaq & Mullen Automotive</div>
<div style="text-align: center; font-size: 14px;">@mullen_usa-nasdaq</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nasdaq & Mullen Automotive.
| Data | Nasdaq | Mullen Automotive |
| --- | --- | --- |
| Tweets downloaded | 3250 | 963 |
| Retweets | 663 | 188 |
| Short tweets | 31 | 121 |
| Tweets kept | 2556 | 654 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/352xmu00/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mullen_usa-nasdaq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x3hx0rfr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x3hx0rfr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mullen_usa-nasdaq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
JulianBons/sd-class-butterflies-32 | JulianBons | 2022-11-29T10:23:38Z | 39 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T10:23:10Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("JulianBons/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
renesteeman/whisper-tiny-dutch-25 | renesteeman | 2022-11-29T10:20:09Z | 80 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-11-29T08:26:14Z | ---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Dutch 25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: nl, split: test'
metrics:
- name: Wer
type: wer
value: 42.065535920433355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Dutch 25
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7024
- Wer: 42.0655
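As a minimal usage sketch (not part of the original card; the audio path below is a placeholder), the checkpoint can be used with the ASR pipeline:
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Dutch speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="renesteeman/whisper-tiny-dutch-25",
)

# Transcribe a local recording (path is a placeholder).
print(asr("example_dutch_audio.wav")["text"])
```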
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5563 | 0.78 | 500 | 0.7838 | 47.5002 |
| 0.3949 | 1.56 | 1000 | 0.7301 | 43.9570 |
| 0.2666 | 2.34 | 1500 | 0.7103 | 42.8426 |
| 0.2307 | 3.12 | 2000 | 0.7024 | 42.0655 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SiriRRR/bart-base-finetuned-test | SiriRRR | 2022-11-29T09:26:23Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-29T09:19:55Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: SiriRRR/bart-base-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SiriRRR/bart-base-finetuned-test
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5900
- Validation Loss: 2.6982
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 2864, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4667 | 2.1935 | 0 |
| 1.7786 | 2.2691 | 1 |
| 1.4244 | 2.3324 | 2 |
| 1.1479 | 2.4362 | 3 |
| 0.9405 | 2.5442 | 4 |
| 0.7770 | 2.5797 | 5 |
| 0.6615 | 2.6505 | 6 |
| 0.5900 | 2.6982 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SayaEndo/distilbert-base-uncased-finetuned-squad-d5716d28 | SayaEndo | 2022-11-29T08:56:00Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-11-29T08:44:02Z | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
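As an illustrative sketch (assuming the weights in this repository load as a standard extractive QA checkpoint), the model can be queried with the question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="SayaEndo/distilbert-base-uncased-finetuned-squad-d5716d28",
)

result = qa(
    question="Which dataset is the student fine-tuned on?",
    context=(
        "A DistilBERT student is fine-tuned on SQuAD v1.1, with a BERT model "
        "acting as a teacher for a second step of task-specific distillation."
    ),
)
print(result["answer"], result["score"])
```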
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
pig4431/rtm_fewshot | pig4431 | 2022-11-29T08:30:05Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-11-29T08:29:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 800,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
regisss/t5-3b-summarization-gaudi-2 | regisss | 2022-11-29T08:15:35Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"optimum_habana",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-28T19:53:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-3b-summarization-gaudi-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-3b-summarization-gaudi-2
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b) on the cnn_dailymail dataset.
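A minimal inference sketch (not from the original card; the `summarize:` prefix is the usual T5 convention and is assumed to apply here, and the article text is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "regisss/t5-3b-summarization-gaudi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Placeholder news article text goes here."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```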
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.0a0+git7392344
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/YELP_fewshot | pig4431 | 2022-11-29T08:08:51Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-11-29T08:08:37Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 800,
"warmup_steps": 80,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
premsuresh/bart-finetuned-mathqa-mohith | premsuresh | 2022-11-29T08:05:32Z | 176 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-29T07:36:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-mathqa-mohith
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-mathqa-mohith
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
nagais/sd-class-butterflies-32 | nagais | 2022-11-29T07:06:12Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T06:51:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("nagais/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
MadhuG/vit-base-patch16-224-in21k-lung_cancer | MadhuG | 2022-11-29T06:41:28Z | 76 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-11-29T05:33:48Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MadhuG/vit-base-patch16-224-in21k-lung_cancer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MadhuG/vit-base-patch16-224-in21k-lung_cancer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1061
- Train Accuracy: 0.1041
- Validation Loss: 1.1028
- Validation Accuracy: 0.1394
- Epoch: 0
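A hedged usage sketch (the checkpoint ships TensorFlow weights, hence `framework="tf"`; the image path is a placeholder and the label set is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="MadhuG/vit-base-patch16-224-in21k-lung_cancer",
    framework="tf",  # the repository ships TensorFlow weights
)

# Any RGB image the processor can load; the path is a placeholder.
for prediction in classifier("ct_scan_slice.png"):
    print(prediction["label"], round(prediction["score"], 3))
```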
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.1061 | 0.1041 | 1.1028 | 0.1394 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
smilton/mt5-large-qasrl-es-p1-role | smilton | 2022-11-29T06:01:48Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-29T05:47:34Z | ---
language:
- es
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-large-qasrl-es-p1-role
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-qasrl-es-p1-role
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
laroy23/ddpm-butterflies-128 | laroy23 | 2022-11-29T04:33:59Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-11-28T13:56:37Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: ./cifar-10-batches-py
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `./cifar-10-batches-py` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
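One possible snippet, as a sketch that assumes the repository follows the standard DDPM pipeline layout:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("laroy23/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("sample.png")
```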
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/laroy23/ddpm-butterflies-128/tensorboard?#scalars)
|
elRivx/gAWoman | elRivx | 2022-11-29T04:33:34Z | 0 | 2 | null | [
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2022-11-29T04:22:28Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# gAWoman
This is my second custom Stable Diffusion model; it brings you a generic woman, trained on non-licensed images.
The magic word is: gAWoman
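A hedged sketch of how the trigger word might be used, assuming the weights are available in the diffusers format under this repo id (the card does not state this):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("elRivx/gAWoman", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Include the trigger word "gAWoman" in the prompt.
image = pipe("portrait of gAWoman walking through a rainy city street, 35mm photo").images[0]
image.save("gawoman_example.png")
```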
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/B5XkfuG.png width=30% height=30%>
<img src=https://imgur.com/N8lNtZo.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
NSandra/distilbert-base-uncased-finetuned-ner | NSandra | 2022-11-29T04:09:17Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-29T03:55:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2393
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
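For completeness, a generic usage sketch (the input sentence is arbitrary and the label set is not documented in this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NSandra/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```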
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 1.5491 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 2 | 1.3278 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 3 | 1.2393 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tomekkorbak/amazing_payne | tomekkorbak | 2022-11-29T03:28:47Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
]
| null | 2022-11-29T03:28:38Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: amazing_payne
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazing_payne
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
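Since the card provides no inference snippet, here is a hedged sampling sketch; the prompt is arbitrary and the sampling settings simply mirror the `generation` scenario config reproduced further down this card:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="tomekkorbak/amazing_payne")

samples = generator(
    "The study found that",
    do_sample=True,
    max_length=128,
    top_p=0.9,
    temperature=0.7,
    num_return_sequences=3,
)
for sample in samples:
    print(sample["generated_text"])
```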
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00065,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'amazing_payne',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/jfkodfu1 |
JiHoon-kim/bert-base-klue-ynat-finetuned | JiHoon-kim | 2022-11-29T03:25:05Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"mrc",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-29T03:21:37Z | ---
language: ko
tags:
- bert
- mrc
datasets:
- klue
license: cc-by-sa-4.0
---
# Checkpoint for lecture use
This model is fine-tuned on the YNAT task of KLUE.
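A usage sketch (the example headline is arbitrary; YNAT is a topic-classification task over Korean news headlines):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JiHoon-kim/bert-base-klue-ynat-finetuned",
)

# A Korean news headline (example input, not taken from the dataset).
print(classifier("삼성전자, 새로운 인공지능 반도체 공개"))
```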
|
jeraldflowers/distilroberts-base-mrpc-glue-jeraldflowers | jeraldflowers | 2022-11-29T02:57:36Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T05:30:00Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: distilroberts-base-mrpc-glue-jeraldflowers
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8814814814814815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberts-base-mrpc-glue-jeraldflowers
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4990
- Accuracy: 0.8431
- F1: 0.8815
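As an illustration (the sentence pair is taken from the widget above; passing a dict with `text`/`text_pair` scores the pair for equivalence):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jeraldflowers/distilroberts-base-mrpc-glue-jeraldflowers",
)

result = classifier({
    "text": "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
    "text_pair": "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier.",
})
print(result)
```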
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5289 | 1.09 | 500 | 0.5668 | 0.8211 | 0.8689 |
| 0.3675 | 2.18 | 1000 | 0.4990 | 0.8431 | 0.8815 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
npark/asr-conformer-ksponspeech | npark | 2022-11-29T02:25:40Z | 5 | 1 | null | [
"region:us"
]
| null | 2022-11-29T01:26:29Z | # KsponSpeech ASR with Transformers
This repository provides pretrained end-to-end ASR models on KsponSpeech with Speechbrain v0.5.13.
The model files in this repository were trained with the recipe at the URL below, using SpeechBrain version 0.5.13:
https://github.com/speechbrain/speechbrain/tree/develop/recipes/KsponSpeech/ASR/transformer
language:
- ko
datasets:
- KsponSpeech
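## Usage
A minimal sketch with SpeechBrain (this assumes the repository contains the `hyperparams.yaml` and checkpoint layout expected by `EncoderDecoderASR`; the audio path is a placeholder):
```python
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="npark/asr-conformer-ksponspeech",
    savedir="pretrained_models/asr-conformer-ksponspeech",
)

# Transcribe a Korean speech recording (path is a placeholder).
print(asr_model.transcribe_file("example_ko.wav"))
```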
## About SpeechBrain
* Website: https://speechbrain.github.io/
* Code: https://github.com/speechbrain/speechbrain/
* HuggingFace: https://huggingface.co/speechbrain/
|
neulab/omnitab-large-finetuned-wtq | neulab | 2022-11-29T02:11:26Z | 4,399 | 7 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| table-question-answering | 2022-10-26T00:56:04Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-finetuned-wtq` (based on BART architecture) is initialized with `neulab/omnitab-large` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-finetuned-wtq")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-16shot-finetuned-wtq-16shot | neulab | 2022-11-29T02:10:07Z | 52 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| table-question-answering | 2022-11-29T01:48:24Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-16shot-finetuned-wtq-16shot` (based on BART architecture) is initialized with `neulab/omnitab-large-16shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 16-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-16shot-finetuned-wtq-16shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-16shot-finetuned-wtq-16shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
Deigant/t5-base-finetuned-qg-context-dataset-2-hard-medium | Deigant | 2022-11-29T01:57:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-29T01:10:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-qg-context-dataset-2-hard-medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-qg-context-dataset-2-hard-medium
This model is a fine-tuned version of [Deigant/t5-base-finetuned-qg-context-dataset-2](https://huggingface.co/Deigant/t5-base-finetuned-qg-context-dataset-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1877
- Rouge1: 27.9067
- Rouge2: 6.8779
- Rougel: 24.6502
- Rougelsum: 24.7749
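A hedged inference sketch (the expected input format for this question-generation fine-tune is not documented; feeding the raw context, as below, is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Deigant/t5-base-finetuned-qg-context-dataset-2-hard-medium"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "The Eiffel Tower was completed in 1889 and stands about 330 metres tall in Paris."
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```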
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 73 | 2.1134 | 27.571 | 8.3183 | 25.3973 | 25.2743 |
| No log | 2.0 | 146 | 2.0800 | 28.4972 | 9.7451 | 26.9093 | 26.7337 |
| No log | 3.0 | 219 | 2.0406 | 21.4309 | 5.817 | 19.4819 | 19.8555 |
| No log | 4.0 | 292 | 2.0391 | 27.2786 | 8.283 | 24.3314 | 24.3751 |
| No log | 5.0 | 365 | 2.0367 | 26.3524 | 7.6263 | 23.9034 | 23.8929 |
| No log | 6.0 | 438 | 2.0270 | 26.3718 | 6.7074 | 22.995 | 23.0177 |
| 1.3439 | 7.0 | 511 | 2.0106 | 27.8601 | 10.5485 | 26.8103 | 26.4962 |
| 1.3439 | 8.0 | 584 | 2.0292 | 27.1811 | 7.1941 | 23.9117 | 24.0093 |
| 1.3439 | 9.0 | 657 | 2.0462 | 25.6595 | 8.3529 | 23.0955 | 23.1946 |
| 1.3439 | 10.0 | 730 | 2.0600 | 27.1996 | 9.0098 | 25.7921 | 25.8295 |
| 1.3439 | 11.0 | 803 | 2.0754 | 25.3094 | 7.6857 | 23.5524 | 23.6875 |
| 1.3439 | 12.0 | 876 | 2.0532 | 27.2136 | 9.0147 | 24.7405 | 24.8211 |
| 1.3439 | 13.0 | 949 | 2.0742 | 26.298 | 8.6826 | 24.6878 | 24.9118 |
| 0.8957 | 14.0 | 1022 | 2.0975 | 22.9575 | 4.2021 | 20.6208 | 20.6539 |
| 0.8957 | 15.0 | 1095 | 2.0941 | 26.778 | 7.1756 | 24.4053 | 24.4951 |
| 0.8957 | 16.0 | 1168 | 2.1025 | 28.9102 | 10.5549 | 25.912 | 25.9433 |
| 0.8957 | 17.0 | 1241 | 2.1265 | 27.8301 | 9.7377 | 25.3236 | 25.3889 |
| 0.8957 | 18.0 | 1314 | 2.1403 | 26.1619 | 7.8019 | 23.5346 | 23.351 |
| 0.8957 | 19.0 | 1387 | 2.1396 | 26.664 | 6.8261 | 24.2991 | 24.328 |
| 0.8957 | 20.0 | 1460 | 2.1481 | 29.8898 | 9.8211 | 27.0922 | 27.2485 |
| 0.69 | 21.0 | 1533 | 2.1466 | 26.3418 | 5.7845 | 24.0772 | 24.3122 |
| 0.69 | 22.0 | 1606 | 2.1559 | 27.5789 | 7.7653 | 25.9896 | 25.8088 |
| 0.69 | 23.0 | 1679 | 2.1624 | 27.9455 | 7.4094 | 25.3163 | 25.3905 |
| 0.69 | 24.0 | 1752 | 2.1633 | 27.5236 | 8.1967 | 24.9498 | 24.974 |
| 0.69 | 25.0 | 1825 | 2.1698 | 26.899 | 6.4382 | 24.2075 | 24.1523 |
| 0.69 | 26.0 | 1898 | 2.1745 | 28.7721 | 8.872 | 24.8299 | 24.9028 |
| 0.69 | 27.0 | 1971 | 2.1818 | 25.8046 | 6.0655 | 23.156 | 23.1971 |
| 0.5965 | 28.0 | 2044 | 2.1854 | 25.4431 | 4.6566 | 22.2794 | 22.4561 |
| 0.5965 | 29.0 | 2117 | 2.1858 | 24.7881 | 6.4357 | 22.8869 | 22.8331 |
| 0.5965 | 30.0 | 2190 | 2.1877 | 27.9067 | 6.8779 | 24.6502 | 24.7749 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/elonmusk-lexfridman | huggingtweets | 2022-11-29T01:35:11Z | 118 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI CYBORG π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Lex Fridman</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-lexfridman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Lex Fridman.
| Data | Elon Musk | Lex Fridman |
| --- | --- | --- |
| Tweets downloaded | 3198 | 2410 |
| Retweets | 126 | 253 |
| Short tweets | 968 | 49 |
| Tweets kept | 2104 | 2108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18nt3c0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-lexfridman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ozchvjo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ozchvjo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-lexfridman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
matan-diamond/sd-class-butterflies-32 | matan-diamond | 2022-11-29T00:47:21Z | 36 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-29T00:46:35Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("matan-diamond/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
adrien-alloreview/whisper-small-fr | adrien-alloreview | 2022-11-29T00:13:29Z | 83 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-11-28T22:32:23Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2226
- eval_wer: 10.0023
- eval_runtime: 65.2041
- eval_samples_per_second: 1.748
- eval_steps_per_second: 0.23
- epoch: 19.51
- step: 800
## Model description
More information needed
## Intended uses & limitations
More information needed
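A minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder; the repo id is taken from this repository):
```python
from transformers import pipeline

# Load this fine-tuned Whisper checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="adrien-alloreview/whisper-small-fr")

# Transcribe an audio file (path is a placeholder)
print(asr("sample.wav")["text"])
```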
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Serhio/sd-fine-tune-v2 | Serhio | 2022-11-28T23:43:18Z | 34 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-11-28T23:41:46Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### sd-fine-tune-v2 on Stable Diffusion via Dreambooth
#### model by Serhio
This is the Stable Diffusion model fine-tuned on the sd-fine-tune-v2 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **Bashkov Sergey**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
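As a minimal local sketch (the prompt wording and pipeline settings below are assumptions; the repo id and instance prompt come from this card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights from the Hub
pipe = StableDiffusionPipeline.from_pretrained("Serhio/sd-fine-tune-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "Bashkov Sergey" is the instance prompt this model was taught
image = pipe("a portrait of Bashkov Sergey").images[0]
image.save("output.png")
```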
|
jqtrde/sd-class-butterflies-32 | jqtrde | 2022-11-28T23:20:18Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"region:us"
]
| unconditional-image-generation | 2022-11-28T23:18:49Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("jqtrde/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Pramodith/sd-class-butterflies-32 | Pramodith | 2022-11-28T23:19:08Z | 38 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T23:18:35Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Pramodith/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
kanixwang/my-awesome-setfit-model | kanixwang | 2022-11-28T22:19:56Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-11-28T22:02:13Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
alryan1478/gpt-neo-125M-DOD-LOW | alryan1478 | 2022-11-28T22:19:47Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-28T21:59:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-DOD-LOW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-DOD-LOW
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0427
## Model description
More information needed
## Intended uses & limitations
More information needed
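A minimal generation sketch with the `transformers` pipeline (repo id from this card; the prompt and sampling settings are illustrative assumptions):
```python
from transformers import pipeline

# Load the fine-tuned GPT-Neo checkpoint from the Hub
generator = pipeline("text-generation", model="alryan1478/gpt-neo-125M-DOD-LOW")

# Prompt and sampling settings are illustrative, not taken from the training setup
print(generator("The directive states that", max_length=50, do_sample=True)[0]["generated_text"])
```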
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 261 | 6.4768 |
| 6.8863 | 2.0 | 522 | 6.1056 |
| 6.8863 | 3.0 | 783 | 6.0427 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
ThomasSimonini/ML-Agents-SnowballFight-1vs1-model | ThomasSimonini | 2022-11-28T22:07:31Z | 6 | 0 | ml-agents | [
"ml-agents",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Snowballfight-1vs1",
"region:us"
]
| reinforcement-learning | 2022-11-28T21:26:07Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Snowballfight-1vs1
library_name: ml-agents
--- |
michaelmayo704/sd-class-butterflies-64 | michaelmayo704 | 2022-11-28T21:39:43Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T21:38:51Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("michaelmayo704/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
SiriRRR/test-model | SiriRRR | 2022-11-28T21:39:02Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-28T21:38:42Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: test-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test-model
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
rlarios/distilbert-base-uncased-finetuned-emotion | rlarios | 2022-11-28T21:34:34Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-25T20:15:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
- name: F1
type: f1
value: 0.9322428116765227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.9325
- F1: 0.9322
## Model description
More information needed
## Intended uses & limitations
More information needed
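A minimal usage sketch with the `transformers` text-classification pipeline (repo id from this card; the label names come from the emotion dataset config):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline("text-classification", model="rlarios/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label and its score
print(classifier("I can't wait to see you this weekend!"))
```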
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8372 | 1.0 | 250 | 0.3225 | 0.9045 | 0.9017 |
| 0.2534 | 2.0 | 500 | 0.2225 | 0.9325 | 0.9322 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
anikethjr/PromoGen_K562_2080Ti_restart | anikethjr | 2022-11-28T21:24:36Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"prophetnet",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-27T05:27:24Z | ---
tags:
- generated_from_trainer
model-index:
- name: PromoGen_K562_2080Ti_restart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PromoGen_K562_2080Ti_restart
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.7676 | 0.49 | 2500 | 0.7383 |
| 0.7121 | 0.97 | 5000 | 0.6867 |
| 0.6914 | 1.46 | 7500 | 0.6705 |
| 0.6837 | 1.95 | 10000 | 0.6622 |
| 0.6778 | 2.44 | 12500 | 0.6558 |
| 0.6748 | 2.92 | 15000 | 0.6517 |
| 0.6676 | 3.41 | 17500 | 0.6433 |
| 0.6593 | 3.9 | 20000 | 0.6358 |
| 0.6584 | 4.38 | 22500 | 0.6320 |
| 0.6557 | 4.87 | 25000 | 0.6301 |
| 0.6523 | 5.36 | 27500 | 0.6257 |
| 0.6478 | 5.84 | 30000 | 0.6236 |
| 0.6393 | 6.33 | 32500 | 0.6145 |
| 0.6039 | 6.82 | 35000 | 0.5658 |
| 0.5616 | 7.31 | 37500 | 0.5376 |
| 0.5518 | 7.79 | 40000 | 0.5310 |
| 0.5509 | 8.28 | 42500 | 0.5273 |
| 0.5487 | 8.77 | 45000 | 0.5261 |
| 0.5479 | 9.25 | 47500 | 0.5249 |
| 0.546 | 9.74 | 50000 | 0.5242 |
| 0.5447 | 10.23 | 52500 | 0.5229 |
| 0.5439 | 10.71 | 55000 | 0.5220 |
| 0.5433 | 11.2 | 57500 | 0.5209 |
| 0.5394 | 11.69 | 60000 | 0.5162 |
| 0.5153 | 12.18 | 62500 | 0.4944 |
| 0.5137 | 12.66 | 65000 | 0.4932 |
| 0.514 | 13.15 | 67500 | 0.4924 |
| 0.5131 | 13.64 | 70000 | 0.4919 |
| 0.5104 | 14.12 | 72500 | 0.4914 |
| 0.5122 | 14.61 | 75000 | 0.4906 |
| 0.5089 | 15.1 | 77500 | 0.4901 |
| 0.5076 | 15.59 | 80000 | 0.4891 |
| 0.4986 | 16.07 | 82500 | 0.4721 |
| 0.4875 | 16.56 | 85000 | 0.4672 |
| 0.4887 | 17.05 | 87500 | 0.4669 |
| 0.4839 | 17.53 | 90000 | 0.4661 |
| 0.4849 | 18.02 | 92500 | 0.4654 |
| 0.4848 | 18.51 | 95000 | 0.4649 |
| 0.4831 | 18.99 | 97500 | 0.4646 |
| 0.4816 | 19.48 | 100000 | 0.4644 |
| 0.4808 | 19.97 | 102500 | 0.4637 |
| 0.4812 | 20.46 | 105000 | 0.4634 |
| 0.4813 | 20.94 | 107500 | 0.4633 |
| 0.4818 | 21.43 | 110000 | 0.4631 |
| 0.4813 | 21.92 | 112500 | 0.4629 |
| 0.4782 | 22.4 | 115000 | 0.4628 |
| 0.4804 | 22.89 | 117500 | 0.4626 |
| 0.4815 | 23.38 | 120000 | 0.4625 |
| 0.4812 | 23.87 | 122500 | 0.4625 |
| 0.4785 | 24.35 | 125000 | 0.4624 |
| 0.4795 | 24.84 | 127500 | 0.4624 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.0
- Tokenizers 0.13.0.dev0
|
pig4431/TUF_BERT_5E | pig4431 | 2022-11-28T21:13:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T21:06:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_BERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3251
- Accuracy: 0.9467
## Model description
More information needed
## Intended uses & limitations
More information needed
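A minimal usage sketch with the `transformers` text-classification pipeline (repo id from this card; the meaning of the output labels is not documented here):
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub
classifier = pipeline("text-classification", model="pig4431/TUF_BERT_5E")

# The mapping from generic labels (e.g. LABEL_0/LABEL_1) to actual classes is not documented in this card
print(classifier("This is an example sentence to classify."))
```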
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4078 | 0.1 | 50 | 0.2430 | 0.92 |
| 0.2488 | 0.2 | 100 | 0.1465 | 0.94 |
| 0.1966 | 0.3 | 150 | 0.1284 | 0.96 |
| 0.2096 | 0.4 | 200 | 0.2879 | 0.9067 |
| 0.2015 | 0.5 | 250 | 0.1629 | 0.9467 |
| 0.1692 | 0.59 | 300 | 0.2165 | 0.9133 |
| 0.1794 | 0.69 | 350 | 0.1535 | 0.9533 |
| 0.1975 | 0.79 | 400 | 0.1429 | 0.9333 |
| 0.1394 | 0.89 | 450 | 0.2384 | 0.92 |
| 0.191 | 0.99 | 500 | 0.2198 | 0.94 |
| 0.0907 | 1.09 | 550 | 0.1270 | 0.9467 |
| 0.073 | 1.19 | 600 | 0.2016 | 0.94 |
| 0.1594 | 1.29 | 650 | 0.2078 | 0.9267 |
| 0.087 | 1.39 | 700 | 0.3312 | 0.9333 |
| 0.0961 | 1.49 | 750 | 0.3704 | 0.92 |
| 0.1225 | 1.58 | 800 | 0.1686 | 0.9467 |
| 0.0969 | 1.68 | 850 | 0.1525 | 0.9333 |
| 0.0942 | 1.78 | 900 | 0.1924 | 0.94 |
| 0.0681 | 1.88 | 950 | 0.1825 | 0.9467 |
| 0.1295 | 1.98 | 1000 | 0.1360 | 0.9333 |
| 0.0626 | 2.08 | 1050 | 0.2014 | 0.94 |
| 0.0372 | 2.18 | 1100 | 0.2030 | 0.9467 |
| 0.0077 | 2.28 | 1150 | 0.2615 | 0.9467 |
| 0.0393 | 2.38 | 1200 | 0.4256 | 0.9267 |
| 0.0492 | 2.48 | 1250 | 0.3057 | 0.94 |
| 0.0184 | 2.57 | 1300 | 0.1308 | 0.9733 |
| 0.0209 | 2.67 | 1350 | 0.2848 | 0.9467 |
| 0.0328 | 2.77 | 1400 | 0.1862 | 0.96 |
| 0.0333 | 2.87 | 1450 | 0.2347 | 0.96 |
| 0.0527 | 2.97 | 1500 | 0.3855 | 0.9333 |
| 0.0685 | 3.07 | 1550 | 0.3174 | 0.94 |
| 0.0217 | 3.17 | 1600 | 0.2320 | 0.9533 |
| 0.0036 | 3.27 | 1650 | 0.3219 | 0.9333 |
| 0.0015 | 3.37 | 1700 | 0.1649 | 0.9733 |
| 0.0177 | 3.47 | 1750 | 0.3785 | 0.94 |
| 0.0142 | 3.56 | 1800 | 0.1420 | 0.9733 |
| 0.0319 | 3.66 | 1850 | 0.4057 | 0.9333 |
| 0.0254 | 3.76 | 1900 | 0.1824 | 0.96 |
| 0.0092 | 3.86 | 1950 | 0.2400 | 0.9533 |
| 0.0306 | 3.96 | 2000 | 0.2238 | 0.96 |
| 0.0118 | 4.06 | 2050 | 0.2623 | 0.9533 |
| 0.0097 | 4.16 | 2100 | 0.3642 | 0.9467 |
| 0.0132 | 4.26 | 2150 | 0.3235 | 0.9467 |
| 0.0155 | 4.36 | 2200 | 0.3535 | 0.9467 |
| 0.0043 | 4.46 | 2250 | 0.3236 | 0.9467 |
| 0.0004 | 4.55 | 2300 | 0.2984 | 0.9467 |
| 0.009 | 4.65 | 2350 | 0.2941 | 0.9467 |
| 0.0068 | 4.75 | 2400 | 0.2936 | 0.9467 |
| 0.0102 | 4.85 | 2450 | 0.3138 | 0.9467 |
| 0.0015 | 4.95 | 2500 | 0.3251 | 0.9467 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
rmartinshort/sd-class-butterflies-64 | rmartinshort | 2022-11-28T20:32:13Z | 36 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T20:31:54Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("rmartinshort/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
CyantifiCQ/noisy_butterflied_diffusion | CyantifiCQ | 2022-11-28T20:23:45Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T20:22:34Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("CyantifiCQ/noisy_butterflied_diffusion")
image = pipeline().images[0]
image
```
|
pig4431/TUF_DistilBERT_5E | pig4431 | 2022-11-28T20:13:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T20:05:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_DistilBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5092 | 0.1 | 50 | 0.4385 | 0.7533 |
| 0.2807 | 0.2 | 100 | 0.2225 | 0.9 |
| 0.1881 | 0.3 | 150 | 0.1531 | 0.94 |
| 0.1895 | 0.4 | 200 | 0.1426 | 0.94 |
| 0.1995 | 0.5 | 250 | 0.1428 | 0.94 |
| 0.1745 | 0.59 | 300 | 0.1538 | 0.9267 |
| 0.1679 | 0.69 | 350 | 0.1249 | 0.9533 |
| 0.199 | 0.79 | 400 | 0.1327 | 0.9467 |
| 0.1703 | 0.89 | 450 | 0.1488 | 0.92 |
| 0.1541 | 0.99 | 500 | 0.1772 | 0.9467 |
| 0.1436 | 1.09 | 550 | 0.1070 | 0.9667 |
| 0.1463 | 1.19 | 600 | 0.1165 | 0.9467 |
| 0.1309 | 1.29 | 650 | 0.1054 | 0.9733 |
| 0.097 | 1.39 | 700 | 0.1346 | 0.94 |
| 0.1307 | 1.49 | 750 | 0.1477 | 0.9467 |
| 0.1506 | 1.58 | 800 | 0.1311 | 0.9533 |
| 0.1386 | 1.68 | 850 | 0.1165 | 0.9667 |
| 0.1463 | 1.78 | 900 | 0.4207 | 0.9067 |
| 0.1202 | 1.88 | 950 | 0.1528 | 0.9667 |
| 0.1403 | 1.98 | 1000 | 0.1262 | 0.96 |
| 0.073 | 2.08 | 1050 | 0.1459 | 0.96 |
| 0.0713 | 2.18 | 1100 | 0.1747 | 0.9533 |
| 0.0814 | 2.28 | 1150 | 0.1953 | 0.9667 |
| 0.0935 | 2.38 | 1200 | 0.1888 | 0.9533 |
| 0.0685 | 2.48 | 1250 | 0.1562 | 0.9467 |
| 0.1154 | 2.57 | 1300 | 0.1806 | 0.96 |
| 0.1239 | 2.67 | 1350 | 0.1322 | 0.9533 |
| 0.1011 | 2.77 | 1400 | 0.2148 | 0.94 |
| 0.0718 | 2.87 | 1450 | 0.1686 | 0.96 |
| 0.1159 | 2.97 | 1500 | 0.1532 | 0.9533 |
| 0.0516 | 3.07 | 1550 | 0.1888 | 0.96 |
| 0.063 | 3.17 | 1600 | 0.1851 | 0.9467 |
| 0.068 | 3.27 | 1650 | 0.2775 | 0.94 |
| 0.0946 | 3.37 | 1700 | 0.1853 | 0.96 |
| 0.0606 | 3.47 | 1750 | 0.2148 | 0.9467 |
| 0.0663 | 3.56 | 1800 | 0.2091 | 0.9533 |
| 0.0474 | 3.66 | 1850 | 0.1702 | 0.9533 |
| 0.0585 | 3.76 | 1900 | 0.1660 | 0.96 |
| 0.0439 | 3.86 | 1950 | 0.2220 | 0.9533 |
| 0.0758 | 3.96 | 2000 | 0.1834 | 0.96 |
| 0.0497 | 4.06 | 2050 | 0.1707 | 0.9533 |
| 0.0412 | 4.16 | 2100 | 0.1948 | 0.9533 |
| 0.0338 | 4.26 | 2150 | 0.2039 | 0.9533 |
| 0.0796 | 4.36 | 2200 | 0.1797 | 0.9533 |
| 0.0727 | 4.46 | 2250 | 0.1986 | 0.9533 |
| 0.032 | 4.55 | 2300 | 0.1947 | 0.9467 |
| 0.0436 | 4.65 | 2350 | 0.1908 | 0.9467 |
| 0.0205 | 4.75 | 2400 | 0.1806 | 0.96 |
| 0.0326 | 4.85 | 2450 | 0.1835 | 0.96 |
| 0.0404 | 4.95 | 2500 | 0.1832 | 0.96 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
motmono/a2c-AntBulletEnv-v0 | motmono | 2022-11-28T19:58:24Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-11-28T19:57:12Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1539.68 +/- 213.96
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed from the repo naming convention
checkpoint = load_from_hub(repo_id="motmono/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
UKP-SQuARE/tweac_16 | UKP-SQuARE | 2022-11-28T19:43:48Z | 102 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"QA",
"en",
"dataset:BoolQ",
"dataset:CommonSenseQA",
"dataset:DROP",
"dataset:DuoRC",
"dataset:HellaSWAG",
"dataset:HotpotQA",
"dataset:HybridQA",
"dataset:NarrativeQA",
"dataset:NaturalQuestionsShort",
"dataset:NewsQA",
"dataset:QAMR",
"dataset:RACE",
"dataset:SearchQA",
"dataset:SIQA",
"dataset:SQuAD",
"dataset:TriviaQA-web",
"arxiv:2104.07081",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-09T18:34:07Z | ---
language:
- en
tags:
- QA
license: cc-by-4.0
datasets:
- BoolQ
- CommonSenseQA
- DROP
- DuoRC
- HellaSWAG
- HotpotQA
- HybridQA
- NarrativeQA
- NaturalQuestionsShort
- NewsQA
- QAMR
- RACE
- SearchQA
- SIQA
- SQuAD
- TriviaQA-web
metrics:
- Accuracy
- Precision
- Recall
- F1
- MRR
- R@3
- R@5
---
BERT for Sequence Classification trained on the QA dataset prediction task.
- Input: a question.
- Output: the dataset that question comes from.
Original paper: TWEAC: Transformer with Extendable QA Agent Classifiers
https://arxiv.org/abs/2104.07081
Datasets used for training:
```
list_datasets = ['BoolQ','CommonSenseQA','DROP','DuoRC','HellaSWAG','HotpotQA','HybridQA','NarrativeQA','NaturalQuestionsShort','NewsQA','QAMR','RACE','SearchQA','SIQA','SQuAD','TriviaQA-web']
```
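A minimal usage sketch with the `transformers` text-classification pipeline (how the predicted label indices map to the dataset names above is defined by the model config and is an assumption here):
```python
from transformers import pipeline

# Load the QA-dataset classifier from the Hub
classifier = pipeline("text-classification", model="UKP-SQuARE/tweac_16")

# The predicted label indicates which QA dataset the question most likely belongs to
print(classifier("Who wrote the novel Moby-Dick?"))
```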
Results for all datasets:
- Accuracy: 0.7919096825783123
- Precision: 0.731586272892176
- Recall: 0.7919096825783123
- F1: 0.7494425609552463
- MRR: 0.8720871733637521
- R@3: 0.9438690810655046
- R@5: 0.9745318608004427
- Queries/second: 6052.33538824659
Results per dataset:
```
"BoolQ": {
"accuracy": 0.998776758409786,
"mrr": 0.999388379204893,
"r@3": 1.0,
"r@5": 1.0,
"query_per_second": 6978.947907596168,
"precision": 0.8649364406779662,
"recall": 0.998776758409786,
"f1": 0.9270508089696281
},
"CommonSenseQA": {
"accuracy": 0.9247135842880524,
"mrr": 0.9476358338878795,
"r@3": 0.9705400981996727,
"r@5": 0.9705400981996727,
"query_per_second": 5823.984138936813,
"precision": 0.442443226311668,
"recall": 0.9247135842880524,
"f1": 0.5985169491525425
},
"DROP": {
"accuracy": 0.9075083892617449,
"mrr": 0.9378200367399193,
"r@3": 0.9609899328859061,
"r@5": 0.9786073825503355,
"query_per_second": 6440.988897129248,
"precision": 0.8636726546906187,
"recall": 0.9075083892617449,
"f1": 0.8850480670893842
},
"DuoRC": {
"accuracy": 0.5555803405457654,
"mrr": 0.7368963429107307,
"r@3": 0.9092125808610305,
"r@5": 0.9596996059186557,
"query_per_second": 6853.643198794893,
"precision": 0.646814404432133,
"recall": 0.5555803405457654,
"f1": 0.5977360905563778
},
"HellaSWAG": {
"accuracy": 0.998406691894045,
"mrr": 0.9990705702715262,
"r@3": 1.0,
"r@5": 1.0,
"query_per_second": 3091.5012960785157,
"precision": 0.9974134500596896,
"recall": 0.998406691894045,
"f1": 0.9979098238280083
},
"HotpotQA": {
"accuracy": 0.7414435784479837,
"mrr": 0.8435804344945315,
"r@3": 0.9325652321247034,
"r@5": 0.973568281938326,
"query_per_second": 4972.668019223381,
"precision": 0.7352150537634409,
"recall": 0.7414435784479837,
"f1": 0.7383161801923401
},
"HybridQA": {
"accuracy": 0.7934218118869013,
"mrr": 0.8806947764680021,
"r@3": 0.964800923254472,
"r@5": 0.9930755914598961,
"query_per_second": 4886.494046259562,
"precision": 0.7198952879581152,
"recall": 0.7934218118869013,
"f1": 0.7548723579467472
},
"NarrativeQA": {
"accuracy": 0.5623756749076442,
"mrr": 0.7416681781060867,
"r@3": 0.9011082693947144,
"r@5": 0.9580373212086767,
"query_per_second": 7081.067049796865,
"precision": 0.5623224095472628,
"recall": 0.5623756749076442,
"f1": 0.5623490409661377
},
"NaturalQuestionsShort": {
"accuracy": 0.7985353692739171,
"mrr": 0.8743599435345307,
"r@3": 0.9439077594266126,
"r@5": 0.9774072919912745,
"query_per_second": 7136.590426649795,
"precision": 0.7963020509633313,
"recall": 0.7985353692739171,
"f1": 0.7974171464135678
},
"NewsQA": {
"accuracy": 0.5375118708452041,
"mrr": 0.71192075967717,
"r@3": 0.855650522317189,
"r@5": 0.939696106362773,
"query_per_second": 7193.851409052092,
"precision": 0.18757249378624688,
"recall": 0.5375118708452041,
"f1": 0.2780985136961061
},
"QAMR": {
"accuracy": 0.6658497602557272,
"mrr": 0.7969741223377345,
"r@3": 0.9207778369738945,
"r@5": 0.973361747469366,
"query_per_second": 7321.775044800525,
"precision": 0.8654525309881587,
"recall": 0.6658497602557272,
"f1": 0.7526421968624852
},
"RACE": {
"accuracy": 0.8771538617474154,
"mrr": 0.917901778042666,
"r@3": 0.9489154672613015,
"r@5": 0.9693898236367322,
"query_per_second": 6952.225120744351,
"precision": 0.8767983789260385,
"recall": 0.8771538617474154,
"f1": 0.8769760843129306
},
"SearchQA": {
"accuracy": 0.9762073027090695,
"mrr": 0.9865069592101393,
"r@3": 0.9972909305064782,
"r@5": 0.9984687868080094,
"query_per_second": 4031.0193826035634,
"precision": 0.9870191735143503,
"recall": 0.9762073027090695,
"f1": 0.9815834665719192
},
"SIQA": {
"accuracy": 0.9969293756397134,
"mrr": 0.9977823268509042,
"r@3": 0.9979529170931423,
"r@5": 1.0,
"query_per_second": 6711.547709005977,
"precision": 0.9329501915708812,
"recall": 0.9969293756397134,
"f1": 0.9638792676892627
},
"SQuAD": {
"accuracy": 0.550628092881614,
"mrr": 0.7164538452390565,
"r@3": 0.8660068519223448,
"r@5": 0.9366197183098591,
"query_per_second": 7033.420124363291,
"precision": 0.48613678373382624,
"recall": 0.550628092881614,
"f1": 0.5163766175814368
},
"TriviaQA-web": {
"accuracy": 0.7855124582584125,
"mrr": 0.8647404868442627,
"r@3": 0.9321859748266119,
"r@5": 0.9640380169535063,
"query_per_second": 4327.642440910395,
"precision": 0.7404358353510896,
"recall": 0.7855124582584125,
"f1": 0.7623083634550667
},
``` |
essayproj/roberta-base-essay | essayproj | 2022-11-28T19:08:54Z | 59 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"feature-extraction",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-11-28T19:08:03Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: roberta-base-essay
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roberta-base-essay
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
Akriel/sd-class-butterflies-32 | Akriel | 2022-11-28T18:57:17Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T18:56:58Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Akriel/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Dagar/t5-small-science-papers-NIPS | Dagar | 2022-11-28T18:21:27Z | 107 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-11-28T18:00:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-science-papers-NIPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-science-papers-NIPS
This model is a fine-tuned version of [Dagar/t5-small-science-papers](https://huggingface.co/Dagar/t5-small-science-papers) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7566
- Rouge1: 15.7066
- Rouge2: 2.5654
- Rougel: 11.4679
- Rougelsum: 14.4017
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
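A minimal summarization sketch with the `transformers` pipeline (repo id from this card; the input text and generation lengths are illustrative assumptions):
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint from the Hub
summarizer = pipeline("summarization", model="Dagar/t5-small-science-papers-NIPS")

text = "Deep neural networks have achieved strong results across many benchmarks, but training them remains costly..."
# max_length/min_length are illustrative values, not taken from the training setup
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```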
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 318 | 5.1856 | 13.7172 | 2.0644 | 10.2189 | 12.838 | 19.0 |
| 5.4522 | 2.0 | 636 | 5.0383 | 15.6211 | 2.1808 | 11.3561 | 14.3054 | 19.0 |
| 5.4522 | 3.0 | 954 | 4.9486 | 15.1659 | 2.3308 | 11.1052 | 13.9456 | 19.0 |
| 5.1254 | 4.0 | 1272 | 4.8851 | 15.716 | 2.4099 | 11.4954 | 14.5099 | 19.0 |
| 4.9794 | 5.0 | 1590 | 4.8456 | 15.5507 | 2.4267 | 11.3867 | 14.3237 | 19.0 |
| 4.9794 | 6.0 | 1908 | 4.8073 | 15.8406 | 2.4254 | 11.6878 | 14.6154 | 19.0 |
| 4.8823 | 7.0 | 2226 | 4.7872 | 15.5554 | 2.4637 | 11.3401 | 14.3183 | 19.0 |
| 4.8338 | 8.0 | 2544 | 4.7680 | 15.4783 | 2.4888 | 11.3364 | 14.2031 | 19.0 |
| 4.8338 | 9.0 | 2862 | 4.7621 | 15.958 | 2.5662 | 11.6139 | 14.6576 | 19.0 |
| 4.7838 | 10.0 | 3180 | 4.7566 | 15.7066 | 2.5654 | 11.4679 | 14.4017 | 19.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
FrancoisDongier/sd-class-butterflies-32 | FrancoisDongier | 2022-11-28T18:19:31Z | 34 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T18:16:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("FrancoisDongier/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
kejian/final-filter-again | kejian | 2022-11-28T17:39:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-28T01:33:32Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: kejian/final-filter-again
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-filter-again
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
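A minimal generation sketch with the `transformers` pipeline (repo id from this card; the prompt is an assumption, and the sampling settings simply mirror the generation config listed below):
```python
from transformers import pipeline

# Load the code LM trained in this run from the Hub
generator = pipeline("text-generation", model="kejian/final-filter-again")

# temperature/top_p follow the generation config in this card; the prompt is illustrative
print(generator("def fibonacci(n):", max_length=64, do_sample=True, temperature=0.7, top_p=0.9)[0]["generated_text"])
```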
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'filter_threshold': 0.002361,
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-filter-again',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/25z4zfy3 |
alexziweiwang/retrain_epoch2and3 | alexziweiwang | 2022-11-28T17:31:08Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-28T17:14:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: retrain_epoch2and3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain_epoch2and3
This model is a fine-tuned version of [alexziweiwang/retrain_first1epoch](https://huggingface.co/alexziweiwang/retrain_first1epoch) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4888
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:----:|:---:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 7.8479 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6019 | 0.04 | 10 | 7.4765 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6019 | 0.06 | 15 | 7.1196 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3222 | 0.08 | 20 | 6.8029 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3222 | 0.11 | 25 | 6.5210 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2645 | 0.13 | 30 | 6.2630 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2645 | 0.15 | 35 | 6.0213 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.8699 | 0.17 | 40 | 5.8096 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.8699 | 0.19 | 45 | 5.5831 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7145 | 0.21 | 50 | 5.3644 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7145 | 0.23 | 55 | 5.1777 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3702 | 0.25 | 60 | 5.0257 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3702 | 0.27 | 65 | 4.8642 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.1896 | 0.3 | 70 | 4.7205 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.1896 | 0.32 | 75 | 4.5846 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.0615 | 0.34 | 80 | 4.4313 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.0615 | 0.36 | 85 | 4.2923 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.5189 | 0.38 | 90 | 4.1662 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.5189 | 0.4 | 95 | 4.0545 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4911 | 0.42 | 100 | 3.9585 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4911 | 0.44 | 105 | 3.8489 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1997 | 0.46 | 110 | 3.7573 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1997 | 0.48 | 115 | 3.6722 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7348 | 0.51 | 120 | 3.5844 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7348 | 0.53 | 125 | 3.4980 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8042 | 0.55 | 130 | 3.4318 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8042 | 0.57 | 135 | 3.3690 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.705 | 0.59 | 140 | 3.3126 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.705 | 0.61 | 145 | 3.2630 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.763 | 0.63 | 150 | 3.2063 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.763 | 0.65 | 155 | 3.1562 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.5585 | 0.67 | 160 | 3.1096 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.5585 | 0.7 | 165 | 3.0719 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.213 | 0.72 | 170 | 3.0373 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.213 | 0.74 | 175 | 3.0035 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2874 | 0.76 | 180 | 2.9712 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2874 | 0.78 | 185 | 2.9405 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.3327 | 0.8 | 190 | 2.9134 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.3327 | 0.82 | 195 | 2.8910 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2382 | 0.84 | 200 | 2.8672 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2382 | 0.86 | 205 | 2.8462 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0069 | 0.89 | 210 | 2.8260 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0069 | 0.91 | 215 | 2.8087 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2288 | 0.93 | 220 | 2.7920 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2288 | 0.95 | 225 | 2.7750 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.787 | 0.97 | 230 | 2.7557 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.787 | 0.99 | 235 | 2.7367 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9717 | 1.01 | 240 | 2.7207 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9717 | 1.03 | 245 | 2.7063 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9269 | 1.05 | 250 | 2.6939 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9269 | 1.08 | 255 | 2.6831 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8771 | 1.1 | 260 | 2.6709 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8771 | 1.12 | 265 | 2.6594 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0474 | 1.14 | 270 | 2.6472 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0474 | 1.16 | 275 | 2.6361 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7652 | 1.18 | 280 | 2.6268 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7652 | 1.2 | 285 | 2.6184 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8322 | 1.22 | 290 | 2.6106 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8322 | 1.24 | 295 | 2.6034 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6464 | 1.27 | 300 | 2.5957 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6464 | 1.29 | 305 | 2.5877 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7974 | 1.31 | 310 | 2.5805 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7974 | 1.33 | 315 | 2.5748 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.797 | 1.35 | 320 | 2.5698 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.797 | 1.37 | 325 | 2.5644 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7508 | 1.39 | 330 | 2.5595 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7508 | 1.41 | 335 | 2.5537 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7188 | 1.43 | 340 | 2.5486 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7188 | 1.46 | 345 | 2.5434 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6889 | 1.48 | 350 | 2.5377 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6889 | 1.5 | 355 | 2.5336 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6373 | 1.52 | 360 | 2.5300 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6373 | 1.54 | 365 | 2.5258 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.765 | 1.56 | 370 | 2.5219 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.765 | 1.58 | 375 | 2.5181 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6407 | 1.6 | 380 | 2.5144 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6407 | 1.62 | 385 | 2.5113 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7727 | 1.64 | 390 | 2.5093 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7727 | 1.67 | 395 | 2.5076 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8091 | 1.69 | 400 | 2.5060 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8091 | 1.71 | 405 | 2.5042 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7204 | 1.73 | 410 | 2.5027 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7204 | 1.75 | 415 | 2.5011 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6168 | 1.77 | 420 | 2.4987 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6168 | 1.79 | 425 | 2.4965 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6947 | 1.81 | 430 | 2.4947 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6947 | 1.83 | 435 | 2.4932 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7495 | 1.86 | 440 | 2.4921 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7495 | 1.88 | 445 | 2.4911 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7413 | 1.9 | 450 | 2.4904 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7413 | 1.92 | 455 | 2.4897 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6498 | 1.94 | 460 | 2.4893 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6498 | 1.96 | 465 | 2.4890 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6891 | 1.98 | 470 | 2.4888 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
wa3dbk/whisper-small-ar | wa3dbk | 2022-11-28T17:11:32Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-11-25T18:33:06Z |
## whisper-small-ar
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset (language=Arabic).
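A minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder; the repo id is taken from this repository):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="wa3dbk/whisper-small-ar")

# Transcribe an Arabic audio file (path is a placeholder)
print(asr("sample_arabic.wav")["text"])
```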
|
antgrutta/sd-class-butterflies-32 | antgrutta | 2022-11-28T16:59:10Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T16:58:32Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("antgrutta/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
EmnaBou/bert-finetuned-DT | EmnaBou | 2022-11-28T16:49:12Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-28T15:20:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-DT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-DT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6697
- Precision: 0.2381
- Recall: 0.0321
- F1: 0.0565
- Accuracy: 0.8179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 99 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.8196 |
| No log | 2.0 | 198 | 0.7033 | 0.0 | 0.0 | 0.0 | 0.8196 |
| No log | 3.0 | 297 | 0.6697 | 0.2381 | 0.0321 | 0.0565 | 0.8179 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
luisgasco/distilbert-base-uncased-finetuned-emotion | luisgasco | 2022-11-28T16:17:49Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T16:03:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.892
- name: F1
type: f1
value: 0.8873822002431591
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3693
- Accuracy: 0.892
- F1: 0.8874
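As a rough usage sketch (our addition, not from the Trainer-generated card), the checkpoint can be queried through the text-classification pipeline; the example sentence is invented:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="luisgasco/distilbert-base-uncased-finetuned-emotion",
)

# if the id2label mapping was saved, labels follow the six emotion-dataset
# classes (sadness, joy, love, anger, fear, surprise); otherwise LABEL_k is returned
print(classifier("I am thrilled that the fine-tuning finally converged!"))
```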
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5715 | 0.8275 | 0.8047 |
| 0.7552 | 2.0 | 250 | 0.3693 | 0.892 | 0.8874 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tomekkorbak/awesome_ride | tomekkorbak | 2022-11-28T16:12:40Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
]
| null | 2022-11-28T16:12:19Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: awesome_ride
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# awesome_ride
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
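A hedged generation sketch (it assumes the repository bundles the GPT-2 tokenizer used during training; the sampling settings mirror the generation config listed further down this card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="tomekkorbak/awesome_ride")

out = generator(
    "The city council met on Tuesday to",  # made-up prompt
    max_length=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(out[0]["generated_text"])
```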
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00065,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'awesome_ride',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3m98rnwq |
alexziweiwang/pure-start-epoch2 | alexziweiwang | 2022-11-28T16:08:48Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-28T15:52:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: pure-start-epoch2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pure-start-epoch2
This model is a fine-tuned version of [alexziweiwang/pure-start-epoch1](https://huggingface.co/alexziweiwang/pure-start-epoch1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.7447
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:---:|:-------:|:-----:|:------:|
| No log | 0.01 | 2 | 20.4002 | 0.095 | 1.0 | 19 | 200 | 200 |
| No log | 0.02 | 4 | 19.9080 | 0.095 | 1.0 | 19 | 200 | 200 |
| No log | 0.03 | 6 | 19.4711 | 0.095 | 1.0 | 19 | 200 | 200 |
| No log | 0.03 | 8 | 19.1535 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.04 | 10 | 18.6684 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.05 | 12 | 18.1640 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.06 | 14 | 17.6937 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.07 | 16 | 17.2710 | 0.095 | 1.0 | 19 | 200 | 200 |
| 46.6007 | 0.08 | 18 | 16.8469 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.08 | 20 | 16.4418 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.09 | 22 | 16.0409 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.1 | 24 | 15.6677 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.11 | 26 | 15.3291 | 0.095 | 1.0 | 19 | 200 | 200 |
| 49.1547 | 0.12 | 28 | 15.0097 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.13 | 30 | 14.6776 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.13 | 32 | 14.3788 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.14 | 34 | 14.0924 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.15 | 36 | 13.8133 | 0.095 | 1.0 | 19 | 200 | 200 |
| 35.1416 | 0.16 | 38 | 13.5539 | 0.095 | 1.0 | 19 | 200 | 200 |
| 34.4057 | 0.17 | 40 | 13.3095 | 0.095 | 1.0 | 19 | 200 | 200 |
| 34.4057 | 0.18 | 42 | 13.0804 | 0.095 | 1.0 | 19 | 200 | 200 |
| 34.4057 | 0.19 | 44 | 12.8580 | 0.105 | 1.0 | 21 | 200 | 200 |
| 34.4057 | 0.19 | 46 | 12.6532 | 0.115 | 1.0 | 23 | 200 | 200 |
| 34.4057 | 0.2 | 48 | 12.4532 | 0.13 | 1.0 | 26 | 200 | 200 |
| 33.2759 | 0.21 | 50 | 12.2452 | 0.14 | 1.0 | 28 | 200 | 200 |
| 33.2759 | 0.22 | 52 | 12.0666 | 0.13 | 1.0 | 26 | 200 | 200 |
| 33.2759 | 0.23 | 54 | 11.8976 | 0.165 | 1.0 | 33 | 200 | 200 |
| 33.2759 | 0.24 | 56 | 11.7373 | 0.175 | 1.0 | 35 | 200 | 200 |
| 33.2759 | 0.24 | 58 | 11.5933 | 0.17 | 1.0 | 34 | 200 | 200 |
| 29.8129 | 0.25 | 60 | 11.4281 | 0.15 | 1.0 | 30 | 200 | 200 |
| 29.8129 | 0.26 | 62 | 11.2665 | 0.14 | 1.0 | 28 | 200 | 200 |
| 29.8129 | 0.27 | 64 | 11.1158 | 0.145 | 1.0 | 29 | 200 | 200 |
| 29.8129 | 0.28 | 66 | 10.9840 | 0.135 | 1.0 | 27 | 200 | 200 |
| 29.8129 | 0.29 | 68 | 10.8502 | 0.15 | 1.0 | 30 | 200 | 200 |
| 38.792 | 0.3 | 70 | 10.7341 | 0.15 | 1.0 | 30 | 200 | 200 |
| 38.792 | 0.3 | 72 | 10.6082 | 0.165 | 1.0 | 33 | 200 | 200 |
| 38.792 | 0.31 | 74 | 10.4944 | 0.18 | 1.0 | 36 | 200 | 200 |
| 38.792 | 0.32 | 76 | 10.3818 | 0.21 | 1.0 | 42 | 200 | 200 |
| 38.792 | 0.33 | 78 | 10.2719 | 0.235 | 1.0 | 47 | 200 | 200 |
| 28.0092 | 0.34 | 80 | 10.1636 | 0.235 | 1.0 | 47 | 200 | 200 |
| 28.0092 | 0.35 | 82 | 10.0709 | 0.24 | 1.0 | 48 | 200 | 200 |
| 28.0092 | 0.35 | 84 | 9.9797 | 0.24 | 1.0 | 48 | 200 | 200 |
| 28.0092 | 0.36 | 86 | 9.8958 | 0.24 | 1.0 | 48 | 200 | 200 |
| 28.0092 | 0.37 | 88 | 9.7977 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.38 | 90 | 9.7015 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.39 | 92 | 9.6150 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.4 | 94 | 9.5304 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.4 | 96 | 9.4521 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.6175 | 0.41 | 98 | 9.3832 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.42 | 100 | 9.3148 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.43 | 102 | 9.2563 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.44 | 104 | 9.1944 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.45 | 106 | 9.1323 | 0.24 | 1.0 | 48 | 200 | 200 |
| 26.3434 | 0.46 | 108 | 9.0717 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.46 | 110 | 9.0245 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.47 | 112 | 8.9772 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.48 | 114 | 8.9390 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.49 | 116 | 8.9013 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.4387 | 0.5 | 118 | 8.8605 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.51 | 120 | 8.8126 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.51 | 122 | 8.7503 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.52 | 124 | 8.6921 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.53 | 126 | 8.6378 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.7305 | 0.54 | 128 | 8.5927 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.55 | 130 | 8.5520 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.56 | 132 | 8.5126 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.56 | 134 | 8.4743 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.57 | 136 | 8.4369 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.5989 | 0.58 | 138 | 8.3993 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.59 | 140 | 8.3636 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.6 | 142 | 8.3311 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.61 | 144 | 8.2983 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.62 | 146 | 8.2652 | 0.24 | 1.0 | 48 | 200 | 200 |
| 21.8372 | 0.62 | 148 | 8.2345 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.63 | 150 | 8.2064 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.64 | 152 | 8.1818 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.65 | 154 | 8.1603 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.66 | 156 | 8.1403 | 0.24 | 1.0 | 48 | 200 | 200 |
| 20.1716 | 0.67 | 158 | 8.1180 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.67 | 160 | 8.0997 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.68 | 162 | 8.0791 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.69 | 164 | 8.0563 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.7 | 166 | 8.0342 | 0.24 | 1.0 | 48 | 200 | 200 |
| 24.5655 | 0.71 | 168 | 8.0130 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.72 | 170 | 7.9936 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.72 | 172 | 7.9756 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.73 | 174 | 7.9594 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.74 | 176 | 7.9439 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.3768 | 0.75 | 178 | 7.9298 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.76 | 180 | 7.9157 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.77 | 182 | 7.9021 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.78 | 184 | 7.8899 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.78 | 186 | 7.8796 | 0.24 | 1.0 | 48 | 200 | 200 |
| 19.7473 | 0.79 | 188 | 7.8697 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.8 | 190 | 7.8598 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.81 | 192 | 7.8490 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.82 | 194 | 7.8390 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.83 | 196 | 7.8293 | 0.24 | 1.0 | 48 | 200 | 200 |
| 15.7279 | 0.83 | 198 | 7.8211 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.84 | 200 | 7.8135 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.85 | 202 | 7.8064 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.86 | 204 | 7.7991 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.87 | 206 | 7.7924 | 0.24 | 1.0 | 48 | 200 | 200 |
| 18.5034 | 0.88 | 208 | 7.7862 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.89 | 210 | 7.7803 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.89 | 212 | 7.7749 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.9 | 214 | 7.7701 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.91 | 216 | 7.7657 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.1983 | 0.92 | 218 | 7.7628 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.93 | 220 | 7.7595 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.94 | 222 | 7.7567 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.94 | 224 | 7.7541 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.95 | 226 | 7.7518 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.7276 | 0.96 | 228 | 7.7497 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.97 | 230 | 7.7479 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.98 | 232 | 7.7463 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.99 | 234 | 7.7453 | 0.24 | 1.0 | 48 | 200 | 200 |
| 17.8692 | 0.99 | 236 | 7.7447 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-banking-2-2-1 | fathyshalab | 2022-11-28T15:28:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T15:27:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-2-2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-2-2-1
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6817
- Accuracy: 0.1022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.653 | 1.0 | 5 | 2.6817 | 0.1022 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
arrandi/sd-class-butterflies-32 | arrandi | 2022-11-28T15:24:36Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T15:23:56Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("arrandi/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
ConvLab/ddpt-policy-sgd_0.01multiwoz21 | ConvLab | 2022-11-28T15:24:29Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
]
| null | 2022-11-28T15:21:11Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-sgd_0.01multiwoz21
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd) and afterwards fine-tuned on 1 percent of [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 40
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
ConvLab/ddpt-policy-0.01multiwoz21 | ConvLab | 2022-11-28T15:20:35Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
]
| null | 2022-11-28T15:18:28Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-0.01multiwoz21
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on 1 percent of [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21)
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 40
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
fathyshalab/all-roberta-large-v1-banking-1-2-1 | fathyshalab | 2022-11-28T15:12:05Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T15:10:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-1-2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-1-2-1
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6235
- Accuracy: 0.2578
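A usage sketch with the plain model classes (our addition; the label names depend on whether an id2label mapping was saved with the classifier head, and the query below is invented):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fathyshalab/all-roberta-large-v1-banking-1-2-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("How do I reset my online banking password?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class, model.config.id2label[predicted_class])
```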
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6542 | 1.0 | 3 | 2.6235 | 0.2578 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ConvLab/mle-policy-multiwoz21 | ConvLab | 2022-11-28T15:11:19Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"region:us"
]
| null | 2022-11-28T15:07:50Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
---
# mle-policy-multiwoz21
This is an MLE model trained on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- seed: 0
- optimizer: Adam
- num_epochs: 24
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
ConvLab/ddpt-policy-sgd | ConvLab | 2022-11-28T15:01:15Z | 0 | 1 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
]
| null | 2022-11-28T13:21:09Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-sgd
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd)
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 1
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
alexziweiwang/pure-start-epoch1 | alexziweiwang | 2022-11-28T14:49:27Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-28T14:32:53Z | ---
tags:
- generated_from_trainer
model-index:
- name: pure-start-epoch1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pure-start-epoch1
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 21.0050
- Acc: 0.095
- Wer: 1.0
- Correct: 19
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 67.2752 | 0.0 | 1.0119 | 0 | 200 | 200 |
| 131.0548 | 0.04 | 10 | 66.2796 | 0.0 | 1.0257 | 0 | 200 | 200 |
| 131.0548 | 0.06 | 15 | 65.2071 | 0.005 | 1.0237 | 1 | 200 | 200 |
| 145.0859 | 0.08 | 20 | 64.0987 | 0.035 | 1.0198 | 7 | 200 | 200 |
| 145.0859 | 0.11 | 25 | 62.9734 | 0.07 | 1.0119 | 14 | 200 | 200 |
| 110.0012 | 0.13 | 30 | 61.8288 | 0.09 | 1.0119 | 18 | 200 | 200 |
| 110.0012 | 0.15 | 35 | 60.6565 | 0.09 | 1.0119 | 18 | 200 | 200 |
| 122.6164 | 0.17 | 40 | 59.4606 | 0.095 | 1.0119 | 19 | 200 | 200 |
| 122.6164 | 0.19 | 45 | 58.2224 | 0.095 | 1.0099 | 19 | 200 | 200 |
| 125.942 | 0.21 | 50 | 56.9514 | 0.095 | 1.0020 | 19 | 200 | 200 |
| 125.942 | 0.23 | 55 | 55.5923 | 0.095 | 1.0 | 19 | 200 | 200 |
| 111.2271 | 0.25 | 60 | 54.1423 | 0.095 | 1.0 | 19 | 200 | 200 |
| 111.2271 | 0.27 | 65 | 52.6174 | 0.095 | 1.0 | 19 | 200 | 200 |
| 137.2356 | 0.3 | 70 | 51.0340 | 0.095 | 1.0 | 19 | 200 | 200 |
| 137.2356 | 0.32 | 75 | 49.4034 | 0.095 | 1.0 | 19 | 200 | 200 |
| 112.2532 | 0.34 | 80 | 47.7291 | 0.095 | 1.0 | 19 | 200 | 200 |
| 112.2532 | 0.36 | 85 | 46.0281 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.3973 | 0.38 | 90 | 44.2361 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.3973 | 0.4 | 95 | 42.4925 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.7175 | 0.42 | 100 | 40.7673 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.7175 | 0.44 | 105 | 39.0848 | 0.095 | 1.0 | 19 | 200 | 200 |
| 90.857 | 0.46 | 110 | 37.4890 | 0.095 | 1.0 | 19 | 200 | 200 |
| 90.857 | 0.48 | 115 | 35.8966 | 0.095 | 1.0 | 19 | 200 | 200 |
| 77.5782 | 0.51 | 120 | 34.2822 | 0.1 | 1.0 | 20 | 200 | 200 |
| 77.5782 | 0.53 | 125 | 32.7953 | 0.1 | 1.0 | 20 | 200 | 200 |
| 80.2378 | 0.55 | 130 | 31.4560 | 0.1 | 1.0 | 20 | 200 | 200 |
| 80.2378 | 0.57 | 135 | 30.1651 | 0.1 | 1.0 | 20 | 200 | 200 |
| 73.5042 | 0.59 | 140 | 29.0069 | 0.095 | 1.0 | 19 | 200 | 200 |
| 73.5042 | 0.61 | 145 | 28.0349 | 0.095 | 1.0 | 19 | 200 | 200 |
| 71.5632 | 0.63 | 150 | 27.1812 | 0.095 | 1.0 | 19 | 200 | 200 |
| 71.5632 | 0.65 | 155 | 26.4012 | 0.095 | 1.0 | 19 | 200 | 200 |
| 76.5337 | 0.67 | 160 | 25.6924 | 0.095 | 1.0 | 19 | 200 | 200 |
| 76.5337 | 0.7 | 165 | 25.0184 | 0.095 | 1.0 | 19 | 200 | 200 |
| 54.6507 | 0.72 | 170 | 24.4100 | 0.095 | 1.0 | 19 | 200 | 200 |
| 54.6507 | 0.74 | 175 | 23.8273 | 0.095 | 1.0 | 19 | 200 | 200 |
| 57.1606 | 0.76 | 180 | 23.2988 | 0.095 | 1.0 | 19 | 200 | 200 |
| 57.1606 | 0.78 | 185 | 22.8731 | 0.095 | 1.0 | 19 | 200 | 200 |
| 56.0855 | 0.8 | 190 | 22.5336 | 0.095 | 1.0 | 19 | 200 | 200 |
| 56.0855 | 0.82 | 195 | 22.2334 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.2475 | 0.84 | 200 | 21.9555 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.2475 | 0.86 | 205 | 21.7112 | 0.095 | 1.0 | 19 | 200 | 200 |
| 47.9988 | 0.89 | 210 | 21.5123 | 0.095 | 1.0 | 19 | 200 | 200 |
| 47.9988 | 0.91 | 215 | 21.3407 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.1394 | 0.93 | 220 | 21.1965 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.1394 | 0.95 | 225 | 21.1028 | 0.095 | 1.0 | 19 | 200 | 200 |
| 48.0323 | 0.97 | 230 | 21.0376 | 0.095 | 1.0 | 19 | 200 | 200 |
| 48.0323 | 0.99 | 235 | 21.0050 | 0.095 | 1.0 | 19 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Fabiuas/Animal-classifier | Fabiuas | 2022-11-28T14:38:27Z | 311 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-11-28T14:37:59Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Animal-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9481481313705444
---
# Animal-classifier
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
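A short inference sketch (assuming the usual ViT image-classification pipeline; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Fabiuas/Animal-classifier")

# print the top predicted animal classes for a local image
for prediction in classifier("my_pet.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```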
## Example Images
#### bee

#### beetle

#### bird

#### butterfly

#### camel

#### cat

#### caterpillar

#### crab

#### dog

#### fly

#### grasshopper

#### horse

#### lizard

#### mosquito

#### mouse

#### snake

#### spider

#### whale
 |
regel-corpus/hunflair-tfbs | regel-corpus | 2022-11-28T14:37:52Z | 3 | 0 | flair | [
"flair",
"pytorch",
"hunflair",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
]
| token-classification | 2022-03-29T11:26:41Z | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "It contains a functional GCGGCGGCG Egr-1-binding site"
---
## HunFlair model for Transcription Factor Binding Site (TFBS)
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) model for the TFBS entity type.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Tfbs | DNA region bound by transcription factor |
---
### Cite
Please cite the following paper when using this model.
```
@article{garda2022regel,
title={RegEl corpus: identifying DNA regulatory elements in the scientific literature},
author={Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Sch{\"u}lke, Markus and Seelow, Dominik and Leser, Ulf},
journal={Database},
volume={2022},
year={2022},
publisher={Oxford Academic}
}
```
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-tfbs")
text = "We found that Egr-1 specifically binds to the PTEN 5' untranslated region, which contains a functional GCGGCGGCG Egr-1-binding site."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [19,20,21]: "GCGGCGGCG Egr-1-binding site" [β Labels: Tfbs (0.9631)]
```
So, the entity "*GCGGCGGCG Egr-1-binding site*" is found in the sentence.
Alternatively, download all models locally and use the `MultiTagger` class.
```python
from flair.models import MultiTagger

# local paths of the downloaded RegEl models
model_paths = [
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
]
tagger = MultiTagger.load(model_paths)
tagger.predict(sentence)
```
---
|
fathyshalab/bert-uncased-massive-intent-classification-finetuned-banking-1 | fathyshalab | 2022-11-28T13:25:04Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T13:01:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-massive-intent-classification-finetuned-banking-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-massive-intent-classification-finetuned-banking-1
This model is a fine-tuned version of [gokuls/bert-uncased-massive-intent-classification](https://huggingface.co/gokuls/bert-uncased-massive-intent-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6447
- Accuracy: 0.1822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9685 | 1.0 | 3 | 2.7310 | 0.1422 |
| 2.8056 | 2.0 | 6 | 2.6970 | 0.1467 |
| 2.5004 | 3.0 | 9 | 2.6680 | 0.1511 |
| 2.445 | 4.0 | 12 | 2.6515 | 0.1778 |
| 2.3977 | 5.0 | 15 | 2.6447 | 0.1822 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jfjensen/sd-class-butterflies-32 | jfjensen | 2022-11-28T12:59:41Z | 37 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2022-11-28T12:58:55Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("jfjensen/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
cardiffnlp/twitter-roberta-base-offensive | cardiffnlp | 2022-11-28T11:36:23Z | 35,866 | 27 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | # Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='offensive'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night π"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night π"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-offensive 0.9073
2) offensive 0.0927
```
|
clp/vit-base-patch16-224-finetuned | clp | 2022-11-28T11:29:17Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-11-28T11:19:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7617
- Accuracy: 0.3333
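For completeness, a hedged inference sketch using the lower-level classes (the label names come from whatever id2label mapping the fine-tuning run saved; the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_id = "clp/vit-base-patch16-224-finetuned"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```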
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6063 | 0.6667 |
| No log | 2.0 | 2 | 0.6958 | 0.3333 |
| No log | 3.0 | 3 | 0.7617 | 0.3333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
projecte-aina/roberta-base-ca-v2-cased-tc | projecte-aina | 2022-11-28T11:02:09Z | 110 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"text classification",
"tecla",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:projecte-aina/tecla",
"arxiv:1907.11692",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-06-30T07:55:23Z | ---
language:
- ca
tags:
- "catalan"
- "text classification"
- "tecla"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/tecla"
metrics:
- accuracy
model-index:
- name: roberta-base-ca-v2-cased-tc
results:
- task:
type: text-classification
dataset:
name: TeCla
type: projecte-aina/tecla
metrics:
- name: Accuracy
type: accuracy
value: 0.8034
widget:
- text: "Els Pets presenten el seu nou treball al Palau Sant Jordi."
- text: "Els barcelonins incrementen un 23% lβΓΊs del cotxe des de lβinici de la pandΓ¨mia."
- text: "Retards a quatre lΓnies de Rodalies per una avaria entre Sants i plaΓ§a de Catalunya."
- text: "Majors de 60 anys i sanitaris comenΓ§aran a rebre la tercera dosi de la vacuna covid els propers dies."
- text: "Els cinemes Verdi estrenen Verdi Classics, un nou canal de televisiΓ³."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for TeCla-based Text Classification.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-v2-cased-tc** is a Text Classification (TC) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
The previous version of this model, which was trained on the old TeCla dataset (v1), can still be accessed through the "v1" tag.
## Intended uses and limitations
**roberta-base-ca-v2-cased-tc** model can be used to classify texts. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-tc")
example = "Retards a quatre lΓnies de Rodalies per una avaria entre Sants i plaΓ§a de Catalunya."
tc_results = nlp(example)
pprint(tc_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the Catalan text classification dataset [TeCla](https://huggingface.co/datasets/projecte-aina/tecla) for training and evaluation. Although TeCla includes both a coarse-grained ('label1') and a fine-grained ('label2') categorization, only the latter, with 53 classes, was used for training.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
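For illustration only, the setup above roughly corresponds to the following `TrainingArguments` sketch (argument names are the standard 🤗 Trainer ones; the actual fine-tuning script lives in the GitHub repository linked below):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-tc",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",      # evaluate each epoch on the dev set
    save_strategy="epoch",
    load_best_model_at_end=True,      # keep the checkpoint with the best dev metric
    metric_for_best_model="f1",
)
```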
## Evaluation
### Variable and metrics
This model was finetuned maximizing F1 (weighted).
### Evaluation results
We evaluated the _roberta-base-ca-v2-cased-tc_ on the TeCla test set against standard multilingual and monolingual baselines. The results for 'label1' categories were obtained through a mapping from the fine-grained category ('label2') to the corresponding coarse-grained one ('label1').
| Model | TeCla - label1 (Accuracy) | TeCla - label2 (Accuracy) |
| ------------|:-------------|:-------------|
| roberta-base-ca-v2 | 96.31 | 80.34 |
| roberta-large-ca-v2 | **96.51** | **80.68** |
| mBERT | 95.72 | 78.47 |
| XLM-RoBERTa | 95.66 | 78.01 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
GDJ1978/voxelartXmidjgraffiti | GDJ1978 | 2022-11-28T10:01:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-11-28T09:55:36Z | VoxelArt_v1_0.6-MDJRNY-GRFFT_0.4-Weighted_sum-merged.ckpt
trigger: VoxelArt in the style of mdjrny-grfft |
mn367/radio-mlm | mn367 | 2022-11-28T09:52:57Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-11-28T09:42:20Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mn367/radio-mlm
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mn367/radio-mlm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6630
- Validation Loss: 4.6014
- Epoch: 0
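A fill-mask usage sketch (our addition; the checkpoint was trained with Keras, so TensorFlow weights are requested explicitly, and the example sentence is invented):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mn367/radio-mlm", framework="tf")

# the base model is distilbert-base-uncased, so the mask token is [MASK]
for prediction in fill_mask("The radio [MASK] was very clear last night."):
    print(prediction["token_str"], round(prediction["score"], 4))
```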
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 39000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.6630 | 4.6014 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
vumichien/trillsson3-ft-keyword-spotting-15 | vumichien | 2022-11-28T09:46:32Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"trillsson_efficient",
"text-classification",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2022-11-28T08:17:45Z | ---
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: trillsson3-ft-keyword-spotting-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trillsson3-ft-keyword-spotting-15
This model is a fine-tuned version of [vumichien/nonsemantic-speech-trillsson3](https://huggingface.co/vumichien/nonsemantic-speech-trillsson3) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3563
- Accuracy: 0.9041
## Model description
More information needed
## Intended uses & limitations
More information needed
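No usage example is provided in the card; a minimal audio-classification sketch is given below. The audio file path is a placeholder, and `trust_remote_code=True` is an assumption in case the custom trillsson architecture is not part of the standard Transformers release.

```python
from transformers import pipeline

# Minimal sketch: keyword spotting with the fine-tuned checkpoint.
# trust_remote_code=True is an assumption for the custom trillsson architecture.
classifier = pipeline(
    "audio-classification",
    model="vumichien/trillsson3-ft-keyword-spotting-15",
    trust_remote_code=True,
)

# "speech_command.wav" is a hypothetical 16 kHz mono recording.
print(classifier("speech_command.wav"))
```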
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 0
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1824 | 1.0 | 798 | 0.6478 | 0.7489 |
| 0.7448 | 2.0 | 1596 | 0.4274 | 0.8728 |
| 0.7089 | 3.0 | 2394 | 0.3723 | 0.8950 |
| 0.6781 | 4.0 | 3192 | 0.3563 | 0.9041 |
| 0.6386 | 5.0 | 3990 | 0.3441 | 0.8986 |
| 0.6342 | 6.0 | 4788 | 0.3380 | 0.8994 |
| 0.6275 | 7.0 | 5586 | 0.3376 | 0.8982 |
| 0.6349 | 8.0 | 6384 | 0.3333 | 0.9014 |
| 0.6261 | 9.0 | 7182 | 0.3295 | 0.9025 |
| 0.6188 | 10.0 | 7980 | 0.3322 | 0.9025 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
rohitagrawal-20/bert-finetuned-ner | rohitagrawal-20 | 2022-11-28T09:39:47Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-28T09:12:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.935969556585043
- name: Recall
type: recall
value: 0.9520363513968361
- name: F1
type: f1
value: 0.9439345903554145
- name: Accuracy
type: accuracy
value: 0.9868870312591982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9360
- Recall: 0.9520
- F1: 0.9439
- Accuracy: 0.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
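A minimal usage sketch for named-entity recognition is shown below; the example sentence is an assumption, while the model itself is the CoNLL-2003 fine-tune described above.

```python
from transformers import pipeline

# Minimal sketch: group sub-word predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="rohitagrawal-20/bert-finetuned-ner",
    aggregation_strategy="simple",
)

# Hypothetical example sentence.
print(ner("My name is Wolfgang and I live in Berlin."))
```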
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0879 | 1.0 | 1756 | 0.0652 | 0.9236 | 0.9379 | 0.9307 | 0.9832 |
| 0.0343 | 2.0 | 3512 | 0.0614 | 0.9337 | 0.9510 | 0.9423 | 0.9864 |
| 0.019 | 3.0 | 5268 | 0.0599 | 0.9360 | 0.9520 | 0.9439 | 0.9869 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hannoh/03_model_sales | hannoh | 2022-11-28T08:58:05Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-11-28T08:46:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: 03_model_sales
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 03_model_sales
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4693
- Accuracy: 0.7818
- F1: 0.7980
## Model description
More information needed
## Intended uses & limitations
More information needed
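The card does not document the label set; the sketch below only shows how the checkpoint could be loaded as a text-classification pipeline (the example sentence is an assumption).

```python
from transformers import pipeline

# Minimal sketch: the label names come from the checkpoint's config,
# which this card does not document.
classifier = pipeline("text-classification", model="hannoh/03_model_sales")

# Hypothetical example input.
print(classifier("Thank you for the offer, we would like to proceed with the order."))
```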
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
alexziweiwang/retrain_epoch2to5 | alexziweiwang | 2022-11-28T08:51:14Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-28T08:35:03Z | ---
tags:
- generated_from_trainer
model-index:
- name: retrain_epoch2to5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain_epoch2to5
This model is a fine-tuned version of [alexziweiwang/retrain_first1epoch](https://huggingface.co/alexziweiwang/retrain_first1epoch) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3244
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
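The card does not state the task. Assuming the checkpoint exposes a standard Wav2Vec2 CTC head (an assumption, since the reported Acc/Wer metrics are not explained), a minimal loading sketch could look like this:

```python
import torch
import soundfile as sf
from transformers import AutoProcessor, AutoModelForCTC

# Assumption: the checkpoint is a standard Wav2Vec2 CTC model.
processor = AutoProcessor.from_pretrained("alexziweiwang/retrain_epoch2to5")
model = AutoModelForCTC.from_pretrained("alexziweiwang/retrain_epoch2to5")

# "sample.wav" is a hypothetical 16 kHz mono recording.
speech, sample_rate = sf.read("sample.wav")

inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```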
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:----:|:---:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 7.8494 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6032 | 0.04 | 10 | 7.4834 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6032 | 0.06 | 15 | 7.1350 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3336 | 0.08 | 20 | 6.8284 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3336 | 0.11 | 25 | 6.5577 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2911 | 0.13 | 30 | 6.3124 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2911 | 0.15 | 35 | 6.0850 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.9181 | 0.17 | 40 | 5.8888 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.9181 | 0.19 | 45 | 5.6815 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7954 | 0.21 | 50 | 5.4834 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7954 | 0.23 | 55 | 5.3099 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.4801 | 0.25 | 60 | 5.1678 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.4801 | 0.27 | 65 | 5.0223 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3377 | 0.3 | 70 | 4.8893 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3377 | 0.32 | 75 | 4.7743 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.2511 | 0.34 | 80 | 4.6494 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.2511 | 0.36 | 85 | 4.5307 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.727 | 0.38 | 90 | 4.4237 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.727 | 0.4 | 95 | 4.3263 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.7653 | 0.42 | 100 | 4.2439 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.7653 | 0.44 | 105 | 4.1589 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4971 | 0.46 | 110 | 4.0847 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4971 | 0.48 | 115 | 4.0118 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0077 | 0.51 | 120 | 3.9382 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0077 | 0.53 | 125 | 3.8663 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1693 | 0.55 | 130 | 3.8106 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1693 | 0.57 | 135 | 3.7580 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0854 | 0.59 | 140 | 3.7123 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0854 | 0.61 | 145 | 3.6720 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1988 | 0.63 | 150 | 3.6260 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1988 | 0.65 | 155 | 3.5853 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.9975 | 0.67 | 160 | 3.5463 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.9975 | 0.7 | 165 | 3.5122 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.6042 | 0.72 | 170 | 3.4862 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.6042 | 0.74 | 175 | 3.4631 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7347 | 0.76 | 180 | 3.4406 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7347 | 0.78 | 185 | 3.4202 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8336 | 0.8 | 190 | 3.4014 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8336 | 0.82 | 195 | 3.3855 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7454 | 0.84 | 200 | 3.3703 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7454 | 0.86 | 205 | 3.3576 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.525 | 0.89 | 210 | 3.3471 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.525 | 0.91 | 215 | 3.3392 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8175 | 0.93 | 220 | 3.3331 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8175 | 0.95 | 225 | 3.3289 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.307 | 0.97 | 230 | 3.3259 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.307 | 0.99 | 235 | 3.3244 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
huggingtweets/bobkerns | huggingtweets | 2022-11-28T08:14:20Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-11-28T08:14:12Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3653376550/f40f9602f2e8e185eb7ddce332157ffe_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bob (Moderna #5) Kerns</div>
<div style="text-align: center; font-size: 14px;">@bobkerns</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bob (Moderna #5) Kerns.
| Data | Bob (Moderna #5) Kerns |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 315 |
| Short tweets | 42 |
| Tweets kept | 2877 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/390ksfue/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bobkerns's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3me25qi0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3me25qi0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bobkerns')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pere/whisper-NST2-unfreeze-constanti-low-lr | pere | 2022-11-28T07:41:42Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-11-23T10:34:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-NST2-unfreeze-constanti-low-lr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-NST2-unfreeze-constanti-low-lr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3562
- Wer: 8.5519
## Model description
More information needed
## Intended uses & limitations
More information needed
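A minimal transcription sketch is shown below; the audio path is a placeholder, and since the card does not state the target language, the language/task arguments are left at their defaults.

```python
from transformers import pipeline

# Minimal sketch: automatic speech recognition with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="pere/whisper-NST2-unfreeze-constanti-low-lr",
    chunk_length_s=30,
)

# "recording.wav" is a hypothetical audio file.
print(asr("recording.wav")["text"])
```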
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 96
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1901 | 0.05 | 1000 | 0.3069 | 14.8233 |
| 0.1323 | 0.1 | 2000 | 0.2687 | 11.2885 |
| 0.1137 | 0.15 | 3000 | 0.2620 | 10.8324 |
| 0.1022 | 0.2 | 4000 | 0.2976 | 9.0080 |
| 0.0937 | 0.25 | 5000 | 0.2584 | 9.5781 |
| 0.0875 | 0.3 | 6000 | 0.2704 | 20.2965 |
| 0.0592 | 1.05 | 7000 | 0.2751 | 9.0080 |
| 0.0488 | 1.1 | 8000 | 0.2778 | 8.6659 |
| 0.0475 | 1.15 | 9000 | 0.2792 | 9.4641 |
| 0.0439 | 1.2 | 10000 | 0.2880 | 8.3238 |
| 0.0425 | 1.25 | 11000 | 0.2954 | 8.5519 |
| 0.0416 | 1.3 | 12000 | 0.2896 | 20.2965 |
| 0.0289 | 2.05 | 13000 | 0.2990 | 7.9818 |
| 0.0229 | 2.1 | 14000 | 0.3027 | 7.4116 |
| 0.0248 | 2.15 | 15000 | 0.2968 | 8.6659 |
| 0.0225 | 2.2 | 16000 | 0.3100 | 8.5519 |
| 0.0222 | 2.25 | 17000 | 0.3132 | 9.3501 |
| 0.0219 | 2.3 | 18000 | 0.3230 | 7.6397 |
| 0.0162 | 3.04 | 19000 | 0.3380 | 9.8062 |
| 0.0132 | 3.09 | 20000 | 0.3562 | 8.5519 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
linfuyou/bert-squad-training | linfuyou | 2022-11-28T07:41:14Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-11-15T09:15:55Z | bert-base-cased-squadv1.1-training |
mtz2110/wav2vec2-large-xls-r-300m-he | mtz2110 | 2022-11-28T07:33:52Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-11-27T16:52:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-he
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: he_il
split: train
args: he_il
metrics:
- name: Wer
type: wer
value: 0.5953778429933969
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-he
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.5954
## Model description
More information needed
## Intended uses & limitations
More information needed
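A minimal usage sketch for Hebrew transcription is given below; the audio path is a placeholder, and 16 kHz mono input is assumed, as is usual for XLS-R checkpoints.

```python
from transformers import pipeline

# Minimal sketch: CTC-based transcription with the fine-tuned XLS-R checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="mtz2110/wav2vec2-large-xls-r-300m-he",
)

# "hebrew_sample.wav" is a hypothetical 16 kHz mono recording.
print(asr("hebrew_sample.wav")["text"])
```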
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8899 | 0.99 | 200 | inf | 1.0 |
| 3.0802 | 1.98 | 400 | inf | 1.0 |
| 1.4275 | 2.97 | 600 | inf | 0.8155 |
| 0.8737 | 3.96 | 800 | inf | 0.7276 |
| 0.6503 | 4.95 | 1000 | inf | 0.6858 |
| 0.5176 | 5.94 | 1200 | inf | 0.6660 |
| 0.4084 | 6.93 | 1400 | inf | 0.6682 |
| 0.3469 | 7.92 | 1600 | inf | 0.6473 |
| 3.2485 | 6.67 | 1800 | inf | 1.0 |
| 0.6476 | 7.41 | 2000 | inf | 0.6574 |
| 0.3229 | 8.15 | 2200 | inf | 0.6499 |
| 0.2899 | 8.89 | 2400 | inf | 0.6376 |
| 0.26 | 9.63 | 2600 | inf | 0.6405 |
| 0.2038 | 10.37 | 2800 | inf | 0.6409 |
| 0.2158 | 11.11 | 3000 | inf | 0.6313 |
| 0.1892 | 11.85 | 3200 | inf | 0.6185 |
| 0.1611 | 12.59 | 3400 | inf | 0.6271 |
| 0.1584 | 13.33 | 3600 | inf | 0.6101 |
| 0.1443 | 14.07 | 3800 | inf | 0.6121 |
| 0.1353 | 14.81 | 4000 | inf | 0.6194 |
| 0.1109 | 15.56 | 4200 | inf | 0.6321 |
| 0.1116 | 16.3 | 4400 | inf | 0.6025 |
| 0.1054 | 17.04 | 4600 | inf | 0.6029 |
| 0.0966 | 17.78 | 4800 | inf | 0.6069 |
| 0.0824 | 18.52 | 5000 | inf | 0.5998 |
| 0.0812 | 19.26 | 5200 | inf | 0.5972 |
| 0.0749 | 20.0 | 5400 | inf | 0.5954 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
venetis/vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred | venetis | 2022-11-28T07:33:09Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-11-27T16:45:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred
This model is a fine-tuned version of [aaraki/vit-base-patch16-224-in21k-finetuned-cifar10](https://huggingface.co/aaraki/vit-base-patch16-224-in21k-finetuned-cifar10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5462
- Accuracy: 0.8594
- Precision: 0.8556
- Recall: 0.8594
- F1: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
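A minimal image-classification sketch is shown below; the image path is a placeholder, and the label set comes from the checkpoint's config, which the card does not document.

```python
from PIL import Image
from transformers import pipeline

# Minimal sketch: classify a vehicle image with the fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="venetis/vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred",
)

# "car.jpg" is a hypothetical input image.
image = Image.open("car.jpg")
print(classifier(image))
```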
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 4.6112 | 1.0 | 839 | 4.5615 | 0.1425 | 0.0837 | 0.1425 | 0.0646 |
| 3.1177 | 2.0 | 1678 | 2.9595 | 0.4240 | 0.3424 | 0.4240 | 0.3283 |
| 2.0793 | 3.0 | 2517 | 2.0048 | 0.5771 | 0.5081 | 0.5771 | 0.5029 |
| 1.4566 | 4.0 | 3356 | 1.4554 | 0.6760 | 0.6333 | 0.6760 | 0.6280 |
| 1.1307 | 5.0 | 4195 | 1.1319 | 0.7350 | 0.7027 | 0.7350 | 0.7013 |
| 0.9367 | 6.0 | 5034 | 0.9328 | 0.7738 | 0.7546 | 0.7738 | 0.7503 |
| 0.7783 | 7.0 | 5873 | 0.8024 | 0.7986 | 0.7893 | 0.7986 | 0.7819 |
| 0.6022 | 8.0 | 6712 | 0.7187 | 0.8174 | 0.8098 | 0.8174 | 0.8055 |
| 0.5234 | 9.0 | 7551 | 0.6635 | 0.8313 | 0.8220 | 0.8313 | 0.8217 |
| 0.4298 | 10.0 | 8390 | 0.6182 | 0.8388 | 0.8337 | 0.8388 | 0.8302 |
| 0.3618 | 11.0 | 9229 | 0.5953 | 0.8455 | 0.8394 | 0.8455 | 0.8382 |
| 0.3262 | 12.0 | 10068 | 0.5735 | 0.8501 | 0.8443 | 0.8501 | 0.8436 |
| 0.3116 | 13.0 | 10907 | 0.5612 | 0.8527 | 0.8488 | 0.8527 | 0.8471 |
| 0.2416 | 14.0 | 11746 | 0.5524 | 0.8558 | 0.8500 | 0.8558 | 0.8496 |
| 0.2306 | 15.0 | 12585 | 0.5489 | 0.8572 | 0.8525 | 0.8572 | 0.8519 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|