modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
esb/wav2vec2-aed-switchboard | esb | 2022-10-24T14:35:43Z | 3 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:ldc/switchboard",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-24T14:35:29Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/switchboard
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="switchboard" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-switchboard" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="260" \
--final_generation_num_beams="5" \
--generation_length_penalty="0.8" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esb/wav2vec2-aed-ami | esb | 2022-10-24T14:33:44Z | 4 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:edinburghcstr/ami",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-24T14:33:31Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- edinburghcstr/ami
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="ami" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-ami" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="225" \
--final_generation_num_beams="5" \
--generation_length_penalty="1.4" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esb/wav2vec2-aed-voxpopuli | esb | 2022-10-24T14:22:56Z | 4 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:facebook/voxpopuli",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-24T14:22:42Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- facebook/voxpopuli
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="voxpopuli" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-voxpopuli" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="1" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="10001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="225" \
--final_generation_num_beams="5" \
--generation_length_penalty="0.8" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esb/wav2vec2-ctc-earnings22 | esb | 2022-10-24T14:09:53Z | 4 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:revdotcom/earnings22",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-24T14:09:46Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-earnings22-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="earnings22" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-earnings22" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
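Once the run above has been pushed to the Hub, the checkpoint can be loaded back for inference with `transformers`. A minimal sketch (not part of the original card), assuming the repository bundles the Flax weights together with the feature extractor and CTC tokenizer; the weights are converted to PyTorch on the fly via `from_flax=True`, which requires `flax` to be installed, and the silent audio array is only a placeholder:
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo = "esb/wav2vec2-ctc-earnings22"
processor = Wav2Vec2Processor.from_pretrained(repo)           # feature extractor + CTC tokenizer (assumed to be in the repo)
model = Wav2Vec2ForCTC.from_pretrained(repo, from_flax=True)  # convert the Flax weights to PyTorch

# Placeholder input: one second of 16 kHz silence (replace with a real waveform)
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```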
|
esb/wav2vec2-ctc-spgispeech | esb | 2022-10-24T14:08:51Z | 4 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:kensho/spgispeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-24T14:08:44Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- kensho/spgispeech
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-spgispeech-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="spgispeech" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-spgispeech" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esb/wav2vec2-ctc-tedlium | esb | 2022-10-24T13:59:30Z | 3 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:LIUM/tedlium",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-10-24T13:59:22Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- LIUM/tedlium
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-tedlium-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="tedlium" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-tedlium" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
edbeeching/doom_duel_bots_3333 | edbeeching | 2022-10-24T13:20:02Z | 2 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:19:37Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_duel_bots
type: doom_duel_bots
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_duel_bots** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
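The repository holds the APPO checkpoint and TensorBoard logs for this run. As a hedged sketch (not part of the generated card), the files can be fetched with `huggingface_hub`; evaluating the agent afterwards requires a local Sample Factory 2.0 and ViZDoom setup and its own enjoy scripts, which are not shown here:
```python
from huggingface_hub import snapshot_download

# Download the checkpoint files and TensorBoard logs from this repository
local_dir = snapshot_download(repo_id="edbeeching/doom_duel_bots_3333")
print(f"Files downloaded to: {local_dir}")
```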
|
edbeeching/doom_health_gathering_supreme_3333 | edbeeching | 2022-10-24T13:17:54Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:17:29Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 66.00 +/- 0.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_defend_the_center_3333 | edbeeching | 2022-10-24T13:15:57Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:15:33Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_defend_the_center
type: doom_defend_the_center
metrics:
- type: mean_reward
value: 24.00 +/- 1.41
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_defend_the_center** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_deadly_corridor_3333 | edbeeching | 2022-10-24T13:15:22Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:14:56Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deadly_corridor
type: doom_deadly_corridor
metrics:
- type: mean_reward
value: 16.67 +/- 9.37
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_deadly_corridor** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_my_way_home_flat_actions_3333 | edbeeching | 2022-10-24T13:13:28Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:13:03Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_my_way_home_flat_actions
type: doom_my_way_home_flat_actions
metrics:
- type: mean_reward
value: 0.98 +/- 0.01
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_my_way_home_flat_actions** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_two_colors_easy_3333 | edbeeching | 2022-10-24T13:12:47Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:11:50Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_two_colors_easy
type: doom_two_colors_easy
metrics:
- type: mean_reward
value: 59.00 +/- 0.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_two_colors_easy** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_basic_3333 | edbeeching | 2022-10-24T13:11:37Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:11:12Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_basic
type: doom_basic
metrics:
- type: mean_reward
value: 0.77 +/- 0.12
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_basic** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_duel_bots_2222 | edbeeching | 2022-10-24T13:10:12Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:09:46Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_duel_bots
type: doom_duel_bots
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_duel_bots** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_battle2_2222 | edbeeching | 2022-10-24T13:09:30Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:09:03Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_battle2
type: doom_battle2
metrics:
- type: mean_reward
value: 30.93 +/- 0.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_battle2** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_health_gathering_supreme_2222 | edbeeching | 2022-10-24T13:08:13Z | 5 | 1 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:07:44Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 67.00 +/- 0.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_deadly_corridor_2222 | edbeeching | 2022-10-24T13:05:41Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:05:20Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deadly_corridor
type: doom_deadly_corridor
metrics:
- type: mean_reward
value: 19.24 +/- 7.44
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_deadly_corridor** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_defend_the_center_flat_actions_2222 | edbeeching | 2022-10-24T13:04:29Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:04:03Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_defend_the_center_flat_actions
type: doom_defend_the_center_flat_actions
metrics:
- type: mean_reward
value: 24.67 +/- 0.47
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_defend_the_center_flat_actions** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_two_colors_easy_2222 | edbeeching | 2022-10-24T13:03:16Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T13:02:49Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_two_colors_easy
type: doom_two_colors_easy
metrics:
- type: mean_reward
value: 58.00 +/- 0.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_two_colors_easy** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_deadly_corridor_1111 | edbeeching | 2022-10-24T12:57:52Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T12:57:29Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deadly_corridor
type: doom_deadly_corridor
metrics:
- type: mean_reward
value: 17.08 +/- 9.07
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_deadly_corridor** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_my_way_home_1111 | edbeeching | 2022-10-24T12:57:16Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T12:56:52Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_my_way_home
type: doom_my_way_home
metrics:
- type: mean_reward
value: 0.98 +/- 0.01
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_my_way_home** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_defend_the_center_flat_actions_1111 | edbeeching | 2022-10-24T12:56:39Z | 2 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T12:56:15Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_defend_the_center_flat_actions
type: doom_defend_the_center_flat_actions
metrics:
- type: mean_reward
value: 24.67 +/- 0.47
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_defend_the_center_flat_actions** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_my_way_home_flat_actions_1111 | edbeeching | 2022-10-24T12:56:04Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T12:55:38Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_my_way_home_flat_actions
type: doom_my_way_home_flat_actions
metrics:
- type: mean_reward
value: 0.98 +/- 0.01
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_my_way_home_flat_actions** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/doom_basic_1111 | edbeeching | 2022-10-24T12:54:44Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T12:54:23Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_basic
type: doom_basic
metrics:
- type: mean_reward
value: 0.75 +/- 0.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_basic** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
yuk/my-dream-booth-models | yuk | 2022-10-24T12:51:08Z | 0 | 5 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2022-10-21T12:42:39Z | ---
license: bigscience-bloom-rail-1.0
---
* fuyuko_wd13_3000-pruned.ckpt
    * A model DreamBooth-trained on Fuyuko
* asahi_wd13_3000.ckpt
    * A model DreamBooth-trained on Asahi
* madoka_wd13_2000.ckpt
    * A model DreamBooth-trained on Madoka Higuchi
* gothic_wd13_3000.ckpt
    * A model DreamBooth-trained on gothic-lolita characters generated with Waifu Diffusion
    * It has noticeably deeper shading, so try it if you are tired of the flat look of standard AI output
* gothic_ikemen_wd13_3000.ckpt
    * A model DreamBooth-trained on handsome men in suits generated with gothic_wd13_3000
    * Useful when you want to dress handsome male characters in women's clothing
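These appear to be single-file Stable Diffusion checkpoints (Waifu Diffusion 1.3 based), so they can be used in any Stable Diffusion UI. As a hedged sketch, one way to try them with `diffusers`, assuming a recent release that provides `StableDiffusionPipeline.from_single_file`; the filename and prompt below are only examples:
```python
import torch
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionPipeline

# Download one checkpoint from this repository (filename taken from the list above)
ckpt_path = hf_hub_download(
    repo_id="yuk/my-dream-booth-models",
    filename="fuyuko_wd13_3000-pruned.ckpt",
)

# Build a pipeline directly from the single-file .ckpt and run one generation
pipe = StableDiffusionPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a portrait, highly detailed", num_inference_steps=30).images[0]
image.save("sample.png")
```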
|
qanastek/FrenchMedMCQA-BioBERT-V1.1-Wikipedia-BM25 | qanastek | 2022-10-24T12:38:40Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"fr",
"dataset:FrenchMedMCQA",
"arxiv:1910.03771",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-21T11:30:34Z | ---
language: fr
datasets:
- FrenchMedMCQA
license: apache-2.0
model-index:
- name: qanastek/FrenchMedMCQA-BioBERT-V1.1-Wikipedia-BM25
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: FrenchMedMCQA
type: FrenchMedMCQA
config: FrenchMedMCQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 16.72
verified: true
- name: Hamming Score
type: hamming score
value: 38.72
verified: true
widget:
- text: "Quels sont les signes cliniques retrouvés dans l'intoxication par la digoxine ? : \n (A) Douleur oculaire (B) Troubles digestifs (C) BAV (D) Hallucinations (E) Hyperthermie\n Intoxication par les venins d'animaux"
---
# FrenchMedMCQA : Multiple-choice question answering on pharmacology exams using BioBERT V1.1, Wikipedia external knowledge and BM25 retriever
- Corpora: [FrenchMedMCQA](https://github.com/qanastek/FrenchMedMCQA)
- Model: [BioBERT V1.1](https://huggingface.co/dmis-lab/biobert-v1.1)
- Number of Epochs: 10
**People Involved**
* [Yanis LABRAK](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
* [Adrien BAZOGE](https://fr.linkedin.com/in/adrien-bazoge-6b511b145) (2)
* [Richard DUFOUR](https://cv.archives-ouvertes.fr/richard-dufour) (2)
* [Béatrice DAILLE](https://scholar.google.com/citations?user=-damXYEAAAAJ&hl=fr) (2)
* [Pierre-Antoine GOURRAUD](https://fr.linkedin.com/in/pierre-antoine-gourraud-35779b6) (3)
* [Emmanuel MORIN](https://scholar.google.fr/citations?user=tvTEtM0AAAAJ&hl=fr) (2)
* [Mickael ROUVIER](https://scholar.google.fr/citations?user=0fmu-VsAAAAJ&hl=fr) (1)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France.
3. [CHU Nantes](https://www.chu-nantes.fr/), Nantes University, Nantes, France.
## Demo: How to use in HuggingFace Transformers
Requires [Transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
path_model = "qanastek/FrenchMedMCQA-BioBERT-V1.1-Wikipedia-BM25"
tokenizer = AutoTokenizer.from_pretrained(path_model)
model = AutoModelForSequenceClassification.from_pretrained(path_model)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=False, device=0) # GPU
dataset = load_dataset("qanastek/FrenchMedMCQA")["test"]
for e in dataset:
prediction = pipeline(e["bert_text"], truncation=True, max_length=model.config.max_position_embeddings)
```
Output:

## Training data
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is 13k words, of which 3.8k are estimated to be medical domain-specific words (i.e., words related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17% of the words) and 2 in each answer (36% of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers.
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2171 | 312 | 622 | 3,105 |
## Evaluation results
The test corpora used for this evaluation is available on [Github](https://github.com/qanastek/FrenchMedMCQA).
| Architecture | Hamming | EMR | Hamming | EMR | Hamming | EMR | Hamming | EMR | Hamming | EMR |
|:----------------:|:-------:|:-----:|:-------:|:-----:|:-------:|:-----:|:-------:|:-----:|:-------:|:-----:|
| BioBERT V1.1 | 36.19 | 15.43 | **38.72** | 16.72 | 33.33 | 14.14 | 35.13 | 16.23 | 34.27 | 13.98 |
| PubMedBERT | 33.98 | 14.14 | 34.00 | 13.98 | 35.66 | 15.59 | 33.87 | 14.79 | 35.44 | 14.79 |
| CamemBERT-base | 36.24 | 16.55 | 34.19 | 14.46 | 34.78 | 15.43 | 34.66 | 14.79 | 34.61 | 14.95 |
| XLM-RoBERTa-base | 37.92 | 17.20 | 31.26 | 11.89 | 35.84 | 16.07 | 32.47 | 14.63 | 33.00 | 14.95 |
| BART-base | 31.93 | 15.91 | 34.98 | **18.64** | 33.80 | 17.68 | 29.65 | 12.86 | 34.65 | 18.32 |
## BibTeX Citations
Please cite the following paper when using this model.
FrenchMedMCQA corpus and linked tools:
```latex
@unpublished{labrak:hal-03824241,
TITLE = {{FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain}},
AUTHOR = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Daille, B{\'e}atrice and Gourraud, Pierre-Antoine and Morin, Emmanuel and Rouvier, Mickael},
URL = {https://hal.archives-ouvertes.fr/hal-03824241},
NOTE = {working paper or preprint},
YEAR = {2022},
MONTH = Oct,
PDF = {https://hal.archives-ouvertes.fr/hal-03824241/file/LOUHI_2022___QA-3.pdf},
HAL_ID = {hal-03824241},
HAL_VERSION = {v1},
}
```
HuggingFace's Transformers :
```latex
@misc{https://doi.org/10.48550/arxiv.1910.03771,
doi = {10.48550/ARXIV.1910.03771},
url = {https://arxiv.org/abs/1910.03771},
author = {Wolf, Thomas and Debut, Lysandre and Sanh, Victor and Chaumond, Julien and Delangue, Clement and Moi, Anthony and Cistac, Pierric and Rault, Tim and Louf, Rémi and Funtowicz, Morgan and Davison, Joe and Shleifer, Sam and von Platen, Patrick and Ma, Clara and Jernite, Yacine and Plu, Julien and Xu, Canwen and Scao, Teven Le and Gugger, Sylvain and Drame, Mariama and Lhoest, Quentin and Rush, Alexander M.},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HuggingFace's Transformers: State-of-the-art Natural Language Processing},
publisher = {arXiv},
year = {2019},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
## Acknowledgment
This work was financially supported by [Zenidoc](https://zenidoc.fr/), the [DIETS](https://anr-diets.univ-avignon.fr/) project financed by the Agence Nationale de la Recherche (ANR) under contract ANR-20-CE23-0005 and the ANR [AIBy4](https://aiby4.ls2n.fr/) (ANR-20-THIA-0011).
|
esc-bench/conformer-rnnt-chime4 | esc-bench | 2022-10-24T12:00:47Z | 4 | 0 | nemo | [
"nemo",
"esb",
"en",
"dataset:esb/datasets",
"dataset:ldc/chime-4",
"region:us"
]
| null | 2022-10-03T09:20:28Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/chime-4
---
To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esb/datasets" \
--dataset_config_name="chime4" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--output_dir="./" \
--run_name="conformer-rnnt-chime4" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-bench/conformer-rnnt-ami | esc-bench | 2022-10-24T11:57:41Z | 5 | 0 | nemo | [
"nemo",
"esb",
"en",
"dataset:esb/datasets",
"dataset:edinburghcstr/ami",
"region:us"
]
| null | 2022-10-03T09:37:54Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- edinburghcstr/ami
---
To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esb/datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="ami" \
--output_dir="./" \
--run_name="conformer-rnnt-ami" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-bench/conformer-rnnt-spgispeech | esc-bench | 2022-10-24T11:53:23Z | 4 | 0 | nemo | [
"nemo",
"esb",
"en",
"dataset:esb/datasets",
"dataset:kensho/spgispeech",
"region:us"
]
| null | 2022-10-03T08:54:36Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- kensho/spgispeech
---
To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esb/datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="spgispeech" \
--output_dir="./" \
--run_name="conformer-rnnt-spgispeech" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-bench/conformer-rnnt-voxpopuli | esc-bench | 2022-10-24T11:50:27Z | 4 | 0 | nemo | [
"nemo",
"esb",
"en",
"dataset:esb/datasets",
"dataset:facebook/voxpopuli",
"region:us"
]
| null | 2022-10-03T08:52:33Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- facebook/voxpopuli
---
To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esb/datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="voxpopuli" \
--output_dir="./" \
--run_name="conformer-rnnt-voxpopuli" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
Aunsiels/ChildGPT | Aunsiels | 2022-10-24T11:37:39Z | 21 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"children",
"infant",
"en",
"dataset:Aunsiels/InfantBooks",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-19T08:59:58Z | ---
language:
- en
tags:
- children
- infant
datasets:
- Aunsiels/InfantBooks
---
A GPT-2 model fine-tuned on children's books.
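As a usage sketch (not from the original card), the checkpoint loads like any GPT-2 model with the `transformers` text-generation pipeline; the prompt is an arbitrary example and `max_new_tokens` assumes a reasonably recent `transformers` release:
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub
generator = pipeline("text-generation", model="Aunsiels/ChildGPT")

# Generate a short, children's-book style continuation (example prompt)
output = generator("The little rabbit looked at the moon and", max_new_tokens=40)
print(output[0]["generated_text"])
```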
```
Romero, J., & Razniewski, S. (2022).
Do Children Texts Hold The Key To Commonsense Knowledge?
In Proceedings of the 2022 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
``` |
esc-bench/whisper-aed-chime4 | esc-bench | 2022-10-24T11:37:38Z | 0 | 0 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:ldc/chime-4",
"region:us"
]
| null | 2022-10-03T08:02:41Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/chime-4
---
To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/openai/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="chime4" \
--max_steps="2500" \
--output_dir="./" \
--run_name="whisper-chime4" \
--dropout_rate="0.1" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-bench/whisper-aed-switchboard | esc-bench | 2022-10-24T11:37:35Z | 0 | 0 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:ldc/switchboard",
"region:us"
]
| null | 2022-10-03T07:54:54Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/switchboard
---
To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/openai/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="switchboard" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-switchboard" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-bench/whisper-aed-earnings22 | esc-bench | 2022-10-24T11:37:30Z | 0 | 1 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:revdotcom/earnings22",
"region:us"
]
| null | 2022-10-03T08:00:03Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---
To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/openai/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="earnings22" \
--max_steps="2500" \
--output_dir="./" \
--run_name="whisper-earnings22" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-bench/whisper-aed-gigaspeech | esc-bench | 2022-10-24T11:37:24Z | 0 | 0 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:speechcolab/gigaspeech",
"region:us"
]
| null | 2022-10-03T07:57:50Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- speechcolab/gigaspeech
---
To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/openai/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="gigaspeech" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-gigaspeech" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-bench/whisper-aed-voxpopuli | esc-bench | 2022-10-24T11:37:21Z | 0 | 0 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:facebook/voxpopuli",
"region:us"
]
| null | 2022-10-03T09:41:59Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- facebook/voxpopuli
---
To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/openai/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="voxpopuli" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-voxpopuli" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-bench/whisper-aed-librispeech | esc-bench | 2022-10-24T11:37:12Z | 0 | 0 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:librispeech_asr",
"region:us"
]
| null | 2022-10-03T07:46:24Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- librispeech_asr
---
To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/openai/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="librispeech" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-librispeech" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
asparius/balanced-combined-bert | asparius | 2022-10-24T11:03:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T10:52:49Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: balanced-combined-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balanced-combined-bert
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4475
- Accuracy: 0.825
- F1: 0.8141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
coastalcph/danish-legal-longformer-eurlex-sd | coastalcph | 2022-10-24T10:54:06Z | 15 | 3 | transformers | [
"transformers",
"pytorch",
"longformer",
"text-classification",
"dataset:multi_eurlex",
"arxiv:2011.09468",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T10:42:46Z | ---
widget:
- text: "KOMMISSIONENS BESLUTNING\naf 6. marts 2006\nom klassificering af visse byggevarers ydeevne med hensyn til reaktion ved brand for så vidt angår trægulve samt vægpaneler og vægbeklædning i massivt træ\n(meddelt under nummer K(2006) 655"
datasets:
- multi_eurlex
metrics:
- f1
model-index:
- name: coastalcph/danish-legal-longformer-eurlex-sd
results:
- task:
type: text-classification
name: Danish EURLEX (Level 2)
dataset:
name: multi_eurlex
type: multi_eurlex
config: multi_eurlex
split: validation
metrics:
- name: Micro-F1
type: micro-f1
value: 0.76144
- name: Macro-F1
type: macro-f1
value: 0.52878
---
# Model description
This model is a fine-tuned version of [coastalcph/danish-legal-longformer-base](https://huggingface.co/coastalcph/danish-legal-longformer-base) on the Danish part of the [MultiEURLEX](https://huggingface.co/datasets/multi_eurlex) dataset, using an additional Spectral Decoupling penalty ([Pezeshki et al., 2020](https://arxiv.org/abs/2011.09468)).
## Training and evaluation data
The Danish part of [MultiEURLEX](https://huggingface.co/datasets/multi_eurlex) dataset.
## Use of Model
### As a text classifier:
```python
from transformers import pipeline
import numpy as np
# Init text classification pipeline
text_cls_pipe = pipeline(task="text-classification",
model="coastalcph/danish-legal-longformer-eurlex-sd",
use_auth_token='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')
# Encode and Classify document
predictions = text_cls_pipe("KOMMISSIONENS BESLUTNING\naf 6. marts 2006\nom klassificering af visse byggevarers "
"ydeevne med hensyn til reaktion ved brand for så vidt angår trægulve samt vægpaneler "
"og vægbeklædning i massivt træ\n(meddelt under nummer K(2006) 655")
# Print prediction
print(predictions)
# [{'label': 'building and public works', 'score': 0.9626012444496155}]
```
### As a feature extractor (document embedder):
```python
from transformers import pipeline
import numpy as np
# Init feature extraction pipeline
feature_extraction_pipe = pipeline(task="feature-extraction",
model="coastalcph/danish-legal-longformer-eurlex-sd",
use_auth_token='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')
# Encode document
predictions = feature_extraction_pipe("KOMMISSIONENS BESLUTNING\naf 6. marts 2006\nom klassificering af visse byggevarers "
"ydeevne med hensyn til reaktion ved brand for så vidt angår trægulve samt vægpaneler "
"og vægbeklædning i massivt træ\n(meddelt under nummer K(2006) 655")
# Use CLS token representation as document embedding
document_features = predictions[0][0]
print(document_features.shape)
# (768,)
```
## Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu113
- Datasets 2.0.0
- Tokenizers 0.12.1
|
esc-bench/wav2vec2-aed-earnings22 | esc-bench | 2022-10-24T10:48:40Z | 5 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:revdotcom/earnings22",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T14:39:19Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="earnings22" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-earnings22" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="25" \
--max_steps="50000" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--generation_length_penalty="1.2" \
--final_generation_max_length="200" \
--final_generation_num_beams="5" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-aed-gigaspeech | esc-bench | 2022-10-24T10:45:47Z | 5 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:speechcolab/gigaspeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T14:39:14Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- speechcolab/gigaspeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="gigaspeech" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-gigaspeech" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="200" \
--final_generation_num_beams="14" \
--generation_length_penalty="1.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-aed-tedlium | esc-bench | 2022-10-24T10:42:16Z | 7 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:LIUM/tedlium",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T14:39:08Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- LIUM/tedlium
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-tedlium" \
--dataset_config_name="tedlium" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-tedlium" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="250" \
--final_generation_num_beams="12" \
--generation_length_penalty="1.5" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-aed-common_voice | esc-bench | 2022-10-24T10:39:50Z | 5 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:mozilla-foundation/common_voice_9_0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T14:39:06Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- mozilla-foundation/common_voice_9_0
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="common_voice" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-common-voice" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="200" \
  --final_generation_num_beams="14" \
--generation_length_penalty="1.2" \
--max_eval_duration_in_seconds="20" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-aed-librispeech | esc-bench | 2022-10-24T10:37:46Z | 4 | 0 | transformers | [
"transformers",
"jax",
"speech-encoder-decoder",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T14:39:03Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- librispeech_asr
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="esb/datasets" \
--model_name_or_path="esb/wav2vec2-aed-pretrained" \
--dataset_config_name="librispeech" \
--output_dir="./" \
--wandb_name="wav2vec2-aed-librispeech" \
--wandb_project="wav2vec2-aed" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="2" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--logging_steps="25" \
--max_steps="50001" \
--eval_steps="10000" \
--save_steps="10000" \
--generation_max_length="40" \
--generation_num_beams="1" \
--final_generation_max_length="300" \
--final_generation_num_beams="12" \
--generation_length_penalty="1.6" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--predict_with_generate \
--do_eval \
--do_train \
--do_predict \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-ctc-switchboard | esc-bench | 2022-10-24T10:34:16Z | 5 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:ldc/switchboard",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T16:39:37Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- ldc/switchboard
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-switchboard-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="switchboard" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-switchboard" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-ctc-earnings22 | esc-bench | 2022-10-24T10:32:37Z | 3 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:revdotcom/earnings22",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T16:36:27Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- revdotcom/earnings22
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```python
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-earnings22-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="earnings22" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-earnings22" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-ctc-spgispeech | esc-bench | 2022-10-24T10:31:52Z | 5 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:kensho/spgispeech",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T16:34:49Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- kensho/spgispeech
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```python
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-spgispeech-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="spgispeech" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-spgispeech" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-ctc-voxpopuli | esc-bench | 2022-10-24T10:30:10Z | 3 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:facebook/voxpopuli",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T16:31:35Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- facebook/voxpopuli
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```python
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-voxpopuli-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="voxpopuli" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-voxpopuli" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--per_device_eval_batch_size="1" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-ctc-common_voice | esc-bench | 2022-10-24T10:28:36Z | 5 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:mozilla-foundation/common_voice_9_0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T16:29:51Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- mozilla-foundation/common_voice_9_0
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```python
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-common_voice-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="common_voice" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-common-voice" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--max_eval_duration_in_seconds="20" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
esc-bench/wav2vec2-ctc-librispeech | esc-bench | 2022-10-24T10:27:51Z | 3 | 0 | transformers | [
"transformers",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"esb",
"en",
"dataset:esb/datasets",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-30T16:48:31Z | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- librispeech_asr
---
To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system:
```python
#!/usr/bin/env bash
python run_flax_speech_recognition_ctc.py \
--model_name_or_path="esb/wav2vec2-ctc-pretrained" \
--tokenizer_name="wav2vec2-ctc-librispeech-tokenizer" \
--dataset_name="esb/datasets" \
--dataset_config_name="librispeech" \
--output_dir="./" \
--wandb_project="wav2vec2-ctc" \
--wandb_name="wav2vec2-ctc-librispeech" \
--max_steps="50000" \
--save_steps="10000" \
--eval_steps="10000" \
--learning_rate="3e-4" \
--logging_steps="25" \
--warmup_steps="5000" \
--preprocessing_num_workers="1" \
--hidden_dropout="0.2" \
--activation_dropout="0.2" \
--feat_proj_dropout="0.2" \
--do_train \
--do_eval \
--do_predict \
--overwrite_output_dir \
--gradient_checkpointing \
--freeze_feature_encoder \
--push_to_hub \
--use_auth_token
```
|
pcoloc/autotrain-dragino-7-7-max_300m-1861063640 | pcoloc | 2022-10-24T10:21:21Z | 8 | 0 | transformers | [
"transformers",
"joblib",
"autotrain",
"tabular",
"regression",
"tabular-regression",
"dataset:pcoloc/autotrain-data-dragino-7-7-max_300m",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| tabular-regression | 2022-10-24T10:20:56Z | ---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- pcoloc/autotrain-data-dragino-7-7-max_300m
co2_eq_emissions:
emissions: 0.12860686048945302
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1861063640
- CO2 Emissions (in grams): 0.1286
## Validation Metrics
- Loss: 50.918
- R2: 0.304
- MSE: 2592.667
- MAE: 39.693
- RMSLE: 0.429
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the rows to score; must contain the feature columns
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
``` |
pcoloc/autotrain-dragino-7-7-max_495m-1860863627 | pcoloc | 2022-10-24T10:14:36Z | 5 | 0 | transformers | [
"transformers",
"joblib",
"autotrain",
"tabular",
"regression",
"tabular-regression",
"dataset:pcoloc/autotrain-data-dragino-7-7-max_495m",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| tabular-regression | 2022-10-24T10:11:33Z | ---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- pcoloc/autotrain-data-dragino-7-7-max_495m
co2_eq_emissions:
emissions: 0.011242326266844769
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1860863627
- CO2 Emissions (in grams): 0.0112
## Validation Metrics
- Loss: 72.730
- R2: 0.386
- MSE: 5289.600
- MAE: 60.230
- RMSLE: 0.436
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the rows to score; must contain the feature columns
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
``` |
teacookies/autotrain-24102022-cert7-1860363608 | teacookies | 2022-10-24T10:14:31Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-24102022-cert7",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-24T10:03:38Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-24102022-cert7
co2_eq_emissions:
emissions: 0.0825722192587215
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1860363608
- CO2 Emissions (in grams): 0.0826
## Validation Metrics
- Loss: 0.002
- Accuracy: 0.999
- Precision: 0.972
- Recall: 0.983
- F1: 0.978
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert7-1860363608
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert7-1860363608", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert7-1860363608", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
edbeeching/mujoco_hopper_1111 | edbeeching | 2022-10-24T09:58:39Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T09:38:16Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_hopper
type: mujoco_hopper
metrics:
- type: mean_reward
value: 1482.86 +/- 541.04
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_hopper** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
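A hedged sketch of fetching this checkpoint with `huggingface_hub`; the exact Sample Factory evaluation entry point for the MuJoCo examples depends on the Sample Factory version, so it is only indicated in a comment and should be treated as an assumption:
```python
# Download the checkpoint files for this experiment from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="edbeeching/mujoco_hopper_1111")
print(local_dir)  # point Sample Factory's --train_dir / --experiment at this directory

# Evaluation itself is done with Sample Factory's CLI, for example (version-dependent, assumption):
#   python -m sf_examples.mujoco.enjoy_mujoco --env=mujoco_hopper --experiment=mujoco_hopper_1111 --train_dir=<train_dir>
```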
|
haoanh98/mbart_base | haoanh98 | 2022-10-24T09:55:16Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mbart",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-10-24T09:54:32Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mbart_base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mbart_base
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Tokenizers 0.13.1
|
edbeeching/mujoco_swimmer_1111 | edbeeching | 2022-10-24T09:40:39Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T09:40:24Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_swimmer
type: mujoco_swimmer
metrics:
- type: mean_reward
value: 95.68 +/- 3.27
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_swimmer** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/mujoco_doublependulum_1111 | edbeeching | 2022-10-24T09:39:25Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-24T09:39:10Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_doublependulum
type: mujoco_doublependulum
metrics:
- type: mean_reward
value: 9352.98 +/- 0.61
name: mean_reward
verified: false
---
An **APPO** model trained on the **mujoco_doublependulum** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
teacookies/autotrain-24102022-cert6-1859663573 | teacookies | 2022-10-24T09:23:05Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-24102022-cert6",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-24T09:11:22Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-24102022-cert6
co2_eq_emissions:
emissions: 19.238000251078862
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1859663573
- CO2 Emissions (in grams): 19.2380
## Validation Metrics
- Loss: 0.002
- Accuracy: 0.999
- Precision: 0.964
- Recall: 0.974
- F1: 0.969
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert6-1859663573
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert6-1859663573", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert6-1859663573", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-user-1860163585 | Ahmed-Abousetta | 2022-10-24T09:13:23Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-user",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:12:32Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-user
co2_eq_emissions:
emissions: 1.0008458491802985
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1860163585
- CO2 Emissions (in grams): 1.0008
## Validation Metrics
- Loss: 0.304
- Accuracy: 0.890
- Precision: 0.729
- Recall: 0.714
- AUC: 0.889
- F1: 0.722
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-user-1860163585
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163585", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163585", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-user-1860163583 | Ahmed-Abousetta | 2022-10-24T09:13:09Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-user",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:12:22Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-user
co2_eq_emissions:
emissions: 0.6436453501778651
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1860163583
- CO2 Emissions (in grams): 0.6436
## Validation Metrics
- Loss: 0.344
- Accuracy: 0.869
- Precision: 0.698
- Recall: 0.612
- AUC: 0.856
- F1: 0.652
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-user-1860163583
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163583", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163583", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567 | Ahmed-Abousetta | 2022-10-24T09:06:18Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-interaction",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:05:25Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-interaction
co2_eq_emissions:
emissions: 1.0555869183889894
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859963567
- CO2 Emissions (in grams): 1.0556
## Validation Metrics
- Loss: 0.263
- Accuracy: 0.910
- Precision: 0.945
- Recall: 0.923
- AUC: 0.945
- F1: 0.934
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565 | Ahmed-Abousetta | 2022-10-24T09:06:06Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-interaction",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:05:13Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-interaction
co2_eq_emissions:
emissions: 0.6502317465394943
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859963565
- CO2 Emissions (in grams): 0.6502
## Validation Metrics
- Loss: 0.241
- Accuracy: 0.922
- Precision: 0.936
- Recall: 0.953
- AUC: 0.951
- F1: 0.944
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963564 | Ahmed-Abousetta | 2022-10-24T09:05:56Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-interaction",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:05:08Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-interaction
co2_eq_emissions:
emissions: 0.8413403809338463
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859963564
- CO2 Emissions (in grams): 0.8413
## Validation Metrics
- Loss: 0.268
- Accuracy: 0.902
- Precision: 0.905
- Recall: 0.959
- AUC: 0.954
- F1: 0.931
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963564
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963564", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963564", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963563 | Ahmed-Abousetta | 2022-10-24T09:05:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-interaction",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:05:07Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-interaction
co2_eq_emissions:
emissions: 0.7644156643824811
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859963563
- CO2 Emissions (in grams): 0.7644
## Validation Metrics
- Loss: 0.244
- Accuracy: 0.910
- Precision: 0.935
- Recall: 0.935
- AUC: 0.954
- F1: 0.935
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963563
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963563", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963563", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-information-1859863561 | Ahmed-Abousetta | 2022-10-24T09:02:06Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-information",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:01:02Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-information
co2_eq_emissions:
emissions: 1.5884381963682959
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859863561
- CO2 Emissions (in grams): 1.5884
## Validation Metrics
- Loss: 0.338
- Accuracy: 0.869
- Precision: 0.836
- Recall: 0.868
- AUC: 0.932
- F1: 0.852
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-information-1859863561
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863561", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863561", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-information-1859863560 | Ahmed-Abousetta | 2022-10-24T09:01:53Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-information",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:00:57Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-information
co2_eq_emissions:
emissions: 1.8754846173690543
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859863560
- CO2 Emissions (in grams): 1.8755
## Validation Metrics
- Loss: 0.331
- Accuracy: 0.878
- Precision: 0.852
- Recall: 0.868
- AUC: 0.927
- F1: 0.860
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-information-1859863560
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863560", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863560", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-information-1859863559 | Ahmed-Abousetta | 2022-10-24T09:01:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-information",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T09:00:51Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-information
co2_eq_emissions:
emissions: 0.6822182565490778
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859863559
- CO2 Emissions (in grams): 0.6822
## Validation Metrics
- Loss: 0.353
- Accuracy: 0.853
- Precision: 0.857
- Recall: 0.792
- AUC: 0.931
- F1: 0.824
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-information-1859863559
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863559", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863559", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563553 | Ahmed-Abousetta | 2022-10-24T08:55:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition-auto",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T08:54:32Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-cognition-auto
co2_eq_emissions:
emissions: 1.7868012751172693
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859563553
- CO2 Emissions (in grams): 1.7868
## Validation Metrics
- Loss: 0.382
- Accuracy: 0.854
- Precision: 0.811
- Recall: 0.843
- AUC: 0.915
- F1: 0.827
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563553
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563553", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563553", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551 | Ahmed-Abousetta | 2022-10-24T08:46:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T08:45:09Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-cognition
co2_eq_emissions:
emissions: 1.7828199447393138
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859363551
- CO2 Emissions (in grams): 1.7828
## Validation Metrics
- Loss: 0.372
- Accuracy: 0.858
- Precision: 0.796
- Recall: 0.882
- AUC: 0.919
- F1: 0.837
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363550 | Ahmed-Abousetta | 2022-10-24T08:45:55Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T08:45:00Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-cognition
co2_eq_emissions:
emissions: 1.173820365058826
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859363550
- CO2 Emissions (in grams): 1.1738
## Validation Metrics
- Loss: 0.369
- Accuracy: 0.846
- Precision: 0.802
- Recall: 0.833
- AUC: 0.901
- F1: 0.817
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363550
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363550", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363550", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Nokia/nlgp-natural | Nokia | 2022-10-24T08:41:09Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"code completion",
"code generation",
"en",
"code",
"arxiv:2108.05198",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
language:
- en
- code
tags:
- code completion
- code generation
license: "apache-2.0"
---
# NLGP natural model
The NLGP natural model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below). This work was carried out by a research team in Nokia Bell Labs.
**Context**
```py
import matplotlib.pyplot as plt
values = [1, 2, 3, 4]
labels = ["a", "b", "c", "d"]
```
**Intent**
```py
# plot a bar chart
```
**Prediction**
```py
plt.bar(labels, values)
plt.show()
```
## Usage
```py
import re
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
# load the model
tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-natural")
model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-natural")
# preprocessing functions
num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18]
def preprocess(context, query):
    """
    Encodes context + query as a single string and
    replaces whitespace with special tokens <|2space|>, <|4space|>, ...
    """
    input_str = f"{context}\n{query} <|endofcomment|>\n"
    indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces}
    m = re.match("^[ ]+", input_str)
    if not m:
        return input_str
    leading_whitespace = m.group(0)
    N = len(leading_whitespace)
    for n in num_spaces:
        leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n])
    return leading_whitespace + input_str[N:]

detokenize_pattern = re.compile(r"<\|(\d+)space\|>")

def postprocess(output):
    output = output.split("<|cell|>")[0]
    def insert_space(m):
        num_spaces = int(m.group(1))
        return num_spaces * " "
    return detokenize_pattern.sub(insert_space, output)
# inference
code_context = """
import matplotlib.pyplot as plt
values = [1, 2, 3, 4]
labels = ["a", "b", "c", "d"]
"""
query = "# plot a bar chart"
input_str = preprocess(code_context, query)
input_ids = tok(input_str, return_tensors="pt").input_ids
max_length = 150  # don't generate output longer than this length
total_max_length = min(1024, input_ids.shape[-1] + max_length)  # total = input + output, capped at the model's context size
input_and_output = model.generate(
input_ids=input_ids,
max_length=total_max_length,
min_length=10,
do_sample=False,
num_beams=4,
early_stopping=True,
eos_token_id=tok.encode("<|cell|>")[0]
)
output = input_and_output[:, input_ids.shape[-1]:] # remove the tokens that correspond to the input_str
output_str = tok.decode(output[0])
postprocess(output_str)
```
## License and copyright
Copyright 2021 Nokia
Licensed under the Apache License 2.0
SPDX-License-Identifier: Apache-2.0 |
kimsiun/kaers-bert | kimsiun | 2022-10-24T08:18:46Z | 0 | 0 | null | [
"pytorch",
"license:mit",
"region:us"
]
| null | 2022-10-24T07:57:35Z | ---
license: mit
---
# KAERS-BERT - KoBERT + KAERS BERT Model
The Publicly Available KAERS BERT Embeddings paper introduces the KAERS-BERT model: a BERT model initialized from KoBERT (skt/kobert-base-v1) and pretrained on adverse event (ADE) narratives reported through KAERS (Korean Adverse Event Reporting System).
This model card describes the KAERS-BERT model.
## Pretraining Data
The KAERS-BERT model was trained on 1.2 million ADE narratives reported through KAERS between January 1, 2015 and December 31, 2019. The ADE narratives used for pretraining were mainly written in Korean.
## Model Pretraining
### Note Preprocessing
We only used ADE narratives reported as 'disease history in detail', 'adverse event in detail', and 'laboratory test in detail' for model pretraining, because ADE narratives of '(original) reporter's opinion' were highly redundant.
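## Usage
A minimal sketch of loading the checkpoint with 🤗 Transformers; this assumes the repository (or the base `skt/kobert-base-v1` repository) provides a standard BERT checkpoint and a compatible tokenizer, which is not documented in this card:
```python
# Hedged sketch: tokenizer/model availability under this repo id is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kimsiun/kaers-bert")
model = AutoModel.from_pretrained("kimsiun/kaers-bert")

inputs = tokenizer("두통과 어지러움이 보고되었다.", return_tensors="pt")  # a short ADE-style narrative
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # contextual embeddings for downstream ADE tasks
```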
|
Nicole1228/ddpm-butterflies-128 | Nicole1228 | 2022-10-24T08:11:27Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-10-24T06:53:59Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
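A minimal sketch using the standard `DDPMPipeline` API that this repository is tagged with (unconditional sampling):
```python
from diffusers import DDPMPipeline

# load the trained butterfly model and sample one 128x128 image
pipeline = DDPMPipeline.from_pretrained("Nicole1228/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```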
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Nicole1228/ddpm-butterflies-128/tensorboard?#scalars)
|
cyberagent/xlm-roberta-large-jnli-jsick | cyberagent | 2022-10-24T07:14:03Z | 180 | 6 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"nli",
"ja",
"dataset:jnli",
"dataset:jsick",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-10-24T07:08:04Z | ---
language: ja
license: cc-by-4.0
library_name: sentence-transformers
tags:
- xlm-roberta
- nli
datasets:
- jnli
- jsick
---
# Japanese Natural Language Inference Model
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class, [gradient accumulation PR](https://github.com/UKPLab/sentence-transformers/pull/1092), and the code from [CyberAgentAILab/japanese-nli-model](https://github.com/CyberAgentAILab/japanese-nli-model).
## Training Data
The model was trained on the [JGLUE-JNLI](https://github.com/yahoojapan/JGLUE) and [JSICK](https://github.com/verypluming/JSICK) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')
model = AutoModelForSequenceClassification.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')
features = tokenizer(["子供が走っている猫を見ている", "猫が走っている"], ["猫が走っている", "子供が走っている"], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
|
teacookies/autotrain-24102022-cert2-1856563478 | teacookies | 2022-10-24T04:33:47Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-24102022-cert2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-24T04:22:25Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-24102022-cert2
co2_eq_emissions:
emissions: 16.894326665784842
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1856563478
- CO2 Emissions (in grams): 16.8943
## Validation Metrics
- Loss: 0.004
- Accuracy: 0.999
- Precision: 0.961
- Recall: 0.974
- F1: 0.968
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert2-1856563478
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert2-1856563478", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert2-1856563478", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
0xrushi/TestPlaygroundSkops | 0xrushi | 2022-10-24T03:48:58Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-16T01:13:19Z | ---
license: mit
---
# Model description 1
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer',<br /> SimpleImputer(), ['loading']),<br /> ('numerical_missing_value_imputer',<br /> SimpleImputer(),<br /> ['loading', 'measurement_3', 'measurement_4',<br /> 'measurement_5', 'measurement_6',<br /> 'measurement_7', 'measurement_8',<br /> 'measurement_9', 'measurement_10',<br /> 'measurement_11', 'measurement_12',<br /> 'measurement_13', 'measurement_14',<br /> 'measurement_15', 'measurement_16',<br /> 'measurement_17']),<br /> ('attribute_0_encoder', OneHotEncoder(),<br /> ['attribute_0']),<br /> ('attribute_1_encoder', OneHotEncoder(),<br /> ['attribute_1']),<br /> ('product_code_encoder', OneHotEncoder(),<br /> ['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] |
| verbose | False |
| transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer',<br /> SimpleImputer(), ['loading']),<br /> ('numerical_missing_value_imputer',<br /> SimpleImputer(),<br /> ['loading', 'measurement_3', 'measurement_4',<br /> 'measurement_5', 'measurement_6',<br /> 'measurement_7', 'measurement_8',<br /> 'measurement_9', 'measurement_10',<br /> 'measurement_11', 'measurement_12',<br /> 'measurement_13', 'measurement_14',<br /> 'measurement_15', 'measurement_16',<br /> 'measurement_17']),<br /> ('attribute_0_encoder', OneHotEncoder(),<br /> ['attribute_0']),<br /> ('attribute_1_encoder', OneHotEncoder(),<br /> ['attribute_1']),<br /> ('product_code_encoder', OneHotEncoder(),<br /> ['product_code'])]) |
| model | DecisionTreeClassifier(max_depth=4) |
| transformation__n_jobs | |
| transformation__remainder | drop |
| transformation__sparse_threshold | 0.3 |
| transformation__transformer_weights | |
| transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])] |
| transformation__verbose | False |
| transformation__verbose_feature_names_out | True |
| transformation__loading_missing_value_imputer | SimpleImputer() |
| transformation__numerical_missing_value_imputer | SimpleImputer() |
| transformation__attribute_0_encoder | OneHotEncoder() |
| transformation__attribute_1_encoder | OneHotEncoder() |
| transformation__product_code_encoder | OneHotEncoder() |
| transformation__loading_missing_value_imputer__add_indicator | False |
| transformation__loading_missing_value_imputer__copy | True |
| transformation__loading_missing_value_imputer__fill_value | |
| transformation__loading_missing_value_imputer__missing_values | nan |
| transformation__loading_missing_value_imputer__strategy | mean |
| transformation__loading_missing_value_imputer__verbose | 0 |
| transformation__numerical_missing_value_imputer__add_indicator | False |
| transformation__numerical_missing_value_imputer__copy | True |
| transformation__numerical_missing_value_imputer__fill_value | |
| transformation__numerical_missing_value_imputer__missing_values | nan |
| transformation__numerical_missing_value_imputer__strategy | mean |
| transformation__numerical_missing_value_imputer__verbose | 0 |
| transformation__attribute_0_encoder__categories | auto |
| transformation__attribute_0_encoder__drop | |
| transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_0_encoder__handle_unknown | error |
| transformation__attribute_0_encoder__sparse | True |
| transformation__attribute_1_encoder__categories | auto |
| transformation__attribute_1_encoder__drop | |
| transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_1_encoder__handle_unknown | error |
| transformation__attribute_1_encoder__sparse | True |
| transformation__product_code_encoder__categories | auto |
| transformation__product_code_encoder__drop | |
| transformation__product_code_encoder__dtype | <class 'numpy.float64'> |
| transformation__product_code_encoder__handle_unknown | error |
| transformation__product_code_encoder__sparse | True |
| model__ccp_alpha | 0.0 |
| model__class_weight | |
| model__criterion | gini |
| model__max_depth | 4 |
| model__max_features | |
| model__max_leaf_nodes | |
| model__min_impurity_decrease | 0.0 |
| model__min_samples_leaf | 1 |
| model__min_samples_split | 2 |
| model__min_weight_fraction_leaf | 0.0 |
| model__random_state | |
| model__splitter | best |
</details>
### Model Plot
The model plot is below.
<style>#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 {color: black;background-color: white;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 pre{padding: 0;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-toggleable {background-color: white;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-estimator:hover {background-color: #d4ebff;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-item {z-index: 1;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-8bc9e9e7-93eb-4a71-9ad5-6d31c0b7f893 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 
```
Pipeline(steps=[('transformation',
                 ColumnTransformer(transformers=[('loading_missing_value_imputer',
                                                  SimpleImputer(), ['loading']),
                                                 ('numerical_missing_value_imputer',
                                                  SimpleImputer(),
                                                  ['loading', 'measurement_3', 'measurement_4',
                                                   'measurement_5', 'measurement_6', 'measurement_7',
                                                   'measurement_8', 'measurement_9', 'measurement_10',
                                                   'measurement_11', 'measurement_12', 'measurement_13',
                                                   'measurement_14', 'measurement_15', 'measurement_16',
                                                   'measurement_17']),
                                                 ('attribute_0_encoder', OneHotEncoder(),
                                                  ['attribute_0']),
                                                 ('attribute_1_encoder', OneHotEncoder(),
                                                  ['attribute_1']),
                                                 ('product_code_encoder', OneHotEncoder(),
                                                  ['product_code'])])),
                ('model', DecisionTreeClassifier(max_depth=4))])
```
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
```python
[More Information Needed]
```
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# Model 2 Description (Logistic)
---
license: mit
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-------------------|-----------|
| C | 1.0 |
| class_weight | |
| dual | False |
| fit_intercept | True |
| intercept_scaling | 1 |
| l1_ratio | |
| max_iter | 100 |
| multi_class | auto |
| n_jobs | |
| penalty | l2 |
| random_state | 0 |
| solver | liblinear |
| tol | 0.0001 |
| verbose | 0 |
| warm_start | False |
</details>
### Model Plot
The model plot is below.
```
LogisticRegression(random_state=0, solver='liblinear')
```
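For reference, the estimator described by the hyperparameter table and the model plot can be written out as below. This is a sketch rather than the author's original training code; unlisted arguments stay at their scikit-learn defaults:

```python
from sklearn.linear_model import LogisticRegression

# Hyperparameters copied from the table above.
model = LogisticRegression(
    C=1.0,
    penalty='l2',
    solver='liblinear',
    max_iter=100,
    tol=0.0001,
    fit_intercept=True,
    random_state=0,
)
# model.fit(X_train, y_train)  # placeholders; the card does not specify the training data
```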
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
| accuracy | 0.96 |
| f1 score | 0.96 |
# How to Get Started with the Model
Use the code below to get started with the model.
```python
[More Information Needed]
```
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# Additional Content
## confusion_matrix
 |
CVPR/FSPBT | CVPR | 2022-10-24T03:26:33Z | 0 | 1 | PyTorch Lightning | [
"PyTorch Lightning",
"Image Translation",
"license:mit",
"region:us"
]
| null | 2022-06-15T01:14:32Z | ---
license: mit
library_name: PyTorch Lightning
tags:
- Image Translation
---
## Model Details
This model is from [FSPBT-Image-Translation](https://github.com/rnwzd/FSPBT-Image-Translation)
## Citation Information
```bibtex
@Article{Texler20-SIG,
author = "Ond\v{r}ej Texler and David Futschik and Michal Ku\v{c}era and Ond\v{r}ej Jamri\v{s}ka and \v{S}\'{a}rka Sochorov\'{a} and Menglei Chai and Sergey Tulyakov and Daniel S\'{y}kora",
title = "Interactive Video Stylization Using Few-Shot Patch-Based Training",
journal = "ACM Transactions on Graphics",
volume = "39",
number = "4",
pages = "73",
year = "2020",
}
``` |
declare-lab/dialect | declare-lab | 2022-10-24T02:32:35Z | 8 | 6 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2210.02890",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-24T02:28:19Z | ---
license: mit
widget:
- text: "What is or could be the cause of target? <sep> target: Thanks. Will I be able to take a retest ? <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . "
example_title: "Cause 1"
- text: "What is or could be the cause of target? <sep> target: But she did and made me disappointed . <sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That's a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it "
example_title: "Cause 2"
- text: "What subsequent event happens or could happen following the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it "
example_title: "Subsequent Event 1"
- text: "What subsequent event happens or could happen following the target? <sep> target: Sure you can , in about two and a half weeks . <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . "
example_title: "Subsequent Event 2"
- text: "What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it "
example_title: "Emotional Reaction"
- text: "What is or could be the motivation of target? <sep> target: Sure you can , in about two and a half weeks . <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . "
example_title: "Motivation"
---
## DIALogue-level Commonsense Transformer (DIALeCT)
This is the pretrained checkpoint for the paper [Multiview Contextual Commonsense Inference: A New Dataset and Task](https://arxiv.org/abs/2210.02890).
The model is trained starting from the [T5-large](https://huggingface.co/t5-large) checkpoint.

## Datasets
The dataset used to pretrain the model can be obtained from the [CICERO repo](https://github.com/declare-lab/CICERO) by following the instructions there. The Contextualized Commonsense Inference in Dialogues v2 (CICEROv2) dataset consists of annotated commonsense inferences, including causes, emotional reactions, and more. The dialogues are drawn from multiple datasets.
| Dataset | #Dialogues| #Instances|
| -------- | ----- | --------- |
| DailyDialog| 1118| 3973|
| MuTual| 1011 | 3384|
| Dream| 250 | 994|
### Examples
Some examples of results generated by the pretrained model (in the zero-shot setting):
**Subsequent Event**
```
What is or could be the subsequent event of the target? <sep>
target: Oh . I just can't forget it .<sep>
context: A: David , why didn't you clean the room ?, <utt>
B: I'm not in the mood ., <utt>
A: Why are you feeling depressed ?, <utt>
B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt>
A: I don t think she will do such a thing ., <utt>
B: But she did and made me disappointed ., <utt>
A: Oh , cheer up . A girlfriend is not everything ., <utt>
B: But she means a lot to me ., <utt>
A: Then forgive her mistake ., <utt>
B: Oh . I just can't forget it
```
Predicted subsequent event:
```
David's girlfriend apologized to david for her mistake.
```
**Cause**
```
What is or could be the cause of target? <sep>
target: Thanks. Will I be able to take a retest ? <sep>
context: A: Did I do well on my test ?, <utt>
B: Do you want to know the honest answer ?, <utt>
A: Why wouldn't I want to know ?, <utt>
B: You had pretty bad scores ., <utt>
A: Exactly what do you mean by bad ?, <utt>
B: You failed ., <utt>
A: How'd I fail it ?, <utt>
B: There are a couple of reasons why you didn't pass ., <utt>
A: What did I do wrong ?, <utt>
B: To sum it all up , you really just don't know how to drive ., <utt>
A: Thanks. Will I be able to take a retest ?, <utt>
B: Sure you can , in about two and a half weeks .
```
Predicted cause:
```
The speaker has failed the driving test.
```
**Emotional Reaction**
```
What is the possible emotional reaction of the listener in response to target? <sep>
target: Oh . I just can't forget it .<sep>
context: A: David , why didn't you clean the room ?, <utt>
B: I'm not in the mood ., <utt>
A: Why are you feeling depressed ?, <utt>
B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt>
A: I don t think she will do such a thing ., <utt>
B: But she did and made me disappointed ., <utt>
A: Oh , cheer up . A girlfriend is not everything ., <utt>
B: But she means a lot to me ., <utt>
A: Then forgive her mistake ., <utt>
B: Oh . I just can't forget it
```
Predicted emotional reaction:
```
The listener is hopeful that david will forgive his girlfriend for her mistake.
```
## Inference:
The input text should be formatted as follows:
```
Question <sep> target: target_utt <sep> context: A: utterance 1 <utt> B: utterance 2 <utt> A: utterance 3 <utt> B: utterance 4
```
Question: the question against which we want to make the inference.
A and B are speaker identifiers.
The ```target_utt``` should be any one of ```utterance 1, utterance 2, utterance 3, or utterance 4```. Do not include the speaker identifier in the ```target_utt```.
Some samples are provided in the Hosted inference API box examples.
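A minimal sketch of running the model on an input formatted this way with the standard `transformers` seq2seq API is shown below. The context string is abbreviated from the example above, and the generation settings are assumptions rather than values from the paper:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("declare-lab/dialect")
model = AutoModelForSeq2SeqLM.from_pretrained("declare-lab/dialect")

question = "What is or could be the cause of target?"
target_utt = "Thanks. Will I be able to take a retest ?"
context = (
    "A: Did I do well on my test ?, <utt> B: You failed ., <utt> "
    "A: Thanks. Will I be able to take a retest ?, <utt> "
    "B: Sure you can , in about two and a half weeks ."
)

text = f"{question} <sep> target: {target_utt} <sep> context: {context}"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)  # max_length is an assumed setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```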
## BibTeX entry and citation info
If you use the model, you can cite:
```bibtex
@article{Shen2022MultiviewCC,
title={Multiview Contextual Commonsense Inference: A New Dataset and Task},
author={Siqi Shen and Deepanway Ghosal and Navonil Majumder and Henry Lim and Rada Mihalcea and Soujanya Poria},
journal={ArXiv},
year={2022},
volume={abs/2210.02890}
}
``` |
TTian/bert-finetuned-feedback-classifier | TTian | 2022-10-24T02:19:29Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T02:19:20Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-feedback-classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-feedback-classifier
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8251
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.8251 | 0 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
theojolliffe/bart-large-cnn-finetuned-roundup | theojolliffe | 2022-10-23T23:51:01Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-23T15:16:53Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8956
- Rouge1: 58.1914
- Rouge2: 45.822
- Rougel: 49.4407
- Rougelsum: 56.6379
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.2575 | 1.0 | 795 | 0.9154 | 53.8792 | 34.3203 | 35.8768 | 51.1789 | 142.0 |
| 0.7053 | 2.0 | 1590 | 0.7921 | 54.3918 | 35.3346 | 37.7539 | 51.6989 | 142.0 |
| 0.5379 | 3.0 | 2385 | 0.7566 | 52.1651 | 32.5699 | 36.3105 | 49.3327 | 141.5185 |
| 0.3496 | 4.0 | 3180 | 0.7584 | 54.3258 | 36.403 | 39.6938 | 52.0186 | 142.0 |
| 0.2688 | 5.0 | 3975 | 0.7343 | 55.9101 | 39.0709 | 42.4138 | 53.572 | 141.8333 |
| 0.1815 | 6.0 | 4770 | 0.7924 | 53.9272 | 36.8138 | 40.0614 | 51.7496 | 142.0 |
| 0.1388 | 7.0 | 5565 | 0.7674 | 55.0347 | 38.7978 | 42.0081 | 53.0297 | 142.0 |
| 0.1048 | 8.0 | 6360 | 0.7700 | 55.2993 | 39.4075 | 42.6837 | 53.5179 | 141.9815 |
| 0.0808 | 9.0 | 7155 | 0.7796 | 56.1508 | 40.0863 | 43.2178 | 53.7908 | 142.0 |
| 0.0719 | 10.0 | 7950 | 0.8057 | 56.2302 | 41.3004 | 44.7921 | 54.4304 | 142.0 |
| 0.0503 | 11.0 | 8745 | 0.8259 | 55.7603 | 41.0643 | 44.5518 | 54.2305 | 142.0 |
| 0.0362 | 12.0 | 9540 | 0.8604 | 55.8612 | 41.5984 | 44.444 | 54.2493 | 142.0 |
| 0.0307 | 13.0 | 10335 | 0.8516 | 57.7259 | 44.542 | 47.6724 | 56.0166 | 142.0 |
| 0.0241 | 14.0 | 11130 | 0.8826 | 56.7943 | 43.7139 | 47.2866 | 55.1824 | 142.0 |
| 0.0193 | 15.0 | 11925 | 0.8856 | 57.4135 | 44.3147 | 47.9136 | 55.8843 | 142.0 |
| 0.0154 | 16.0 | 12720 | 0.8956 | 58.1914 | 45.822 | 49.4407 | 56.6379 | 142.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/16pxl | huggingtweets | 2022-10-23T23:23:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-23T23:21:33Z | ---
language: en
thumbnail: http://www.huggingtweets.com/16pxl/1666567427101/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1358468632255156224/JtUkil_x_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jubilee ❣️ 2023 CALENDARS OUT NOW</div>
<div style="text-align: center; font-size: 14px;">@16pxl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jubilee ❣️ 2023 CALENDARS OUT NOW.
| Data | Jubilee ❣️ 2023 CALENDARS OUT NOW |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 288 |
| Short tweets | 228 |
| Tweets kept | 2713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r6vcjy6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @16pxl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wix5go1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wix5go1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/16pxl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cj7s1/DialoGPT-large-BMO | cj7s1 | 2022-10-23T22:48:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-23T22:45:03Z | DialoGPT-large trained on all publicly available Adventure Time transcripts, made to be a conversational model based on the character BMO. |
huggingtweets/civickey | huggingtweets | 2022-10-23T21:52:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-23T21:49:23Z | ---
language: en
thumbnail: http://www.huggingtweets.com/civickey/1666561923663/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1558147004332589057/N-sz3RQY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Civic | Breakpoint & Hacker House Lisbon, Nov. 1-7</div>
<div style="text-align: center; font-size: 14px;">@civickey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Civic | Breakpoint & Hacker House Lisbon, Nov. 1-7.
| Data | Civic \| Breakpoint & Hacker House Lisbon, Nov. 1-7 |
| --- | --- |
| Tweets downloaded | 2355 |
| Retweets | 672 |
| Short tweets | 98 |
| Tweets kept | 1585 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kwud4scb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @civickey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uif1uqj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uif1uqj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/civickey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Manu271/ppo-LunarLander-v2 | Manu271 | 2022-10-23T19:51:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-23T19:51:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.76 +/- 21.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
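As a rough sketch (not part of the original card), the checkpoint could be loaded with `huggingface_sb3` along the lines below; the filename is an assumption and should be checked against the files in this repository:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is a guess; verify it in the repository's file list.
checkpoint = load_from_hub(repo_id="Manu271/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```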
|
pepa/roberta-base-fever | pepa | 2022-10-23T18:34:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:copenlu/fever_gold_evidence",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-23T18:33:10Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-fever
results: []
datasets:
- copenlu/fever_gold_evidence
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fever
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6929
- eval_p: 0.8856
- eval_r: 0.8851
- eval_f1: 0.8848
- eval_runtime: 44.4077
- eval_samples_per_second: 423.462
- eval_steps_per_second: 52.941
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
NikitaBaramiia/q-Taxi-v3 | NikitaBaramiia | 2022-10-23T18:07:23Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-23T17:48:05Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="NikitaBaramiia/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
NikitaBaramiia/q-FrozenLake-v1-4x4-noSlippery | NikitaBaramiia | 2022-10-23T18:04:10Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-23T17:45:53Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="NikitaBaramiia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
artificialguybr/WikiHowSDModel | artificialguybr | 2022-10-23T17:56:39Z | 0 | 8 | null | [
"license:openrail",
"region:us"
]
| null | 2022-10-23T06:44:30Z | ---
license: openrail
---
This model card is a copy-paste from https://www.reddit.com/r/StableDiffusion/comments/ybavif/wikihow_db_model_entirely_free_model_trained_with/
The model is not 100% accurate and sometimes creates erroneous images, but it is incomparable to the natural quality of SD.
The images used for training were all CC-licensed images from WikiHow. The model is available on Hugging Face.
The trigger word for traditional Embeddings is the filename.
The Traditional Embeddings were split into two rar files: One with 0.005 training and the other with 0.00005 training. All with 20 images and 2000 Steps. The two rar files, plus the Embedding file still have the images for you to evaluate which one you want to use.
There is also the WinRAR file Embedding Aesthetics, which is what the name says.
To activate the DreamBooth model you must write ''in WKHW1 Beautiful Art Style'' in the prompt.
Test which combination works for you: model + aesthetics, model without aesthetics, model with embedding, model without embedding.
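As a rough illustration of the trigger phrase (assuming the checkpoint is available in a `diffusers`-compatible format under this repository id; both are assumptions, and the original distribution may be a plain .ckpt file instead):

```python
import torch
from diffusers import StableDiffusionPipeline

# Repository id and format are assumptions; adjust to however the checkpoint is actually distributed.
pipe = StableDiffusionPipeline.from_pretrained(
    "artificialguybr/WikiHowSDModel", torch_dtype=torch.float16
).to("cuda")

prompt = "a person tying their shoes, in WKHW1 Beautiful Art Style"
image = pipe(prompt).images[0]
image.save("wikihow_style.png")
```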
All my templates are 100% free. All my models are 100% free. You can check in my profile my Coloring Book model posted 12 hours ago.
You can contribute on Patreon and Buy Me a Coffee. ALL money raised will go towards GPU rental hours and paying for Colab to bring in better models.
I plan to bring DreamBooth, TI, and Hypernetwork models. However, my Hypernetwork setup is still defective and I am trying to fix it.
If you want any specific models you can contact me here, send me pictures, and tell me where I can find the datasets. |
pepa/roberta-small-fever | pepa | 2022-10-23T17:53:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:copenlu/fever_gold_evidence",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-23T17:27:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-small-fever
results: []
datasets:
- copenlu/fever_gold_evidence
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-small-fever
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6096
- eval_p: 0.8179
- eval_r: 0.8110
- eval_f1: 0.8104
- eval_runtime: 36.258
- eval_samples_per_second: 518.644
- eval_steps_per_second: 64.841
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
sd-concepts-library/xioboma | sd-concepts-library | 2022-10-23T17:51:13Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-23T17:51:03Z | ---
license: mit
---
### xioboma on Stable Diffusion
This is the `<xi-obama>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
patrickvonplaten/carol_model | patrickvonplaten | 2022-10-23T17:49:06Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-23T17:56:14Z | ---
license: mit
---
### Carol on Stable Diffusion
This is the `<carol>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`.
|
valhalla/SwinIR-real-sr-L-x4-PSNR | valhalla | 2022-10-23T17:44:46Z | 2 | 0 | transformers | [
"transformers",
"jax",
"swin-ir",
"region:us"
]
| null | 2022-10-23T15:43:58Z | ---
tags:
- swin-ir
inference: false
--- |
valhalla/SwinIR-real-sr-M-x4-PSNR | valhalla | 2022-10-23T17:44:14Z | 1 | 0 | transformers | [
"transformers",
"jax",
"swin-ir",
"region:us"
]
| null | 2022-10-23T15:44:44Z | ---
tags:
- swin-ir
inference: false
--- |
ddebnath/layoutlmv3-finetuned-invoice | ddebnath | 2022-10-23T17:42:31Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:generated",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-23T17:11:50Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- generated
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: generated
type: generated
config: sroie
split: train
args: sroie
metrics:
- name: Precision
type: precision
value: 0.9959514170040485
- name: Recall
type: recall
value: 0.9979716024340771
- name: F1
type: f1
value: 0.9969604863221885
- name: Accuracy
type: accuracy
value: 0.9995786812723826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the generated dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0028
- Precision: 0.9960
- Recall: 0.9980
- F1: 0.9970
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.0502 | 0.97 | 0.9838 | 0.9768 | 0.9968 |
| No log | 4.0 | 200 | 0.0194 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 6.0 | 300 | 0.0160 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 8.0 | 400 | 0.0123 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.053 | 10.0 | 500 | 0.0089 | 0.9757 | 0.9757 | 0.9757 | 0.9966 |
| 0.053 | 12.0 | 600 | 0.0058 | 0.9959 | 0.9919 | 0.9939 | 0.9992 |
| 0.053 | 14.0 | 700 | 0.0046 | 0.9939 | 0.9919 | 0.9929 | 0.9989 |
| 0.053 | 16.0 | 800 | 0.0037 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.053 | 18.0 | 900 | 0.0068 | 0.9959 | 0.9878 | 0.9919 | 0.9987 |
| 0.0057 | 20.0 | 1000 | 0.0054 | 0.9919 | 0.9959 | 0.9939 | 0.9992 |
| 0.0057 | 22.0 | 1100 | 0.0057 | 0.9919 | 0.9959 | 0.9939 | 0.9992 |
| 0.0057 | 24.0 | 1200 | 0.0049 | 0.9919 | 0.9959 | 0.9939 | 0.9992 |
| 0.0057 | 26.0 | 1300 | 0.0052 | 0.9919 | 0.9959 | 0.9939 | 0.9992 |
| 0.0057 | 28.0 | 1400 | 0.0030 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0022 | 30.0 | 1500 | 0.0028 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0022 | 32.0 | 1600 | 0.0030 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0022 | 34.0 | 1700 | 0.0030 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0022 | 36.0 | 1800 | 0.0037 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0022 | 38.0 | 1900 | 0.0037 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0017 | 40.0 | 2000 | 0.0037 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ivanwong/AnyML | ivanwong | 2022-10-23T17:38:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-10-23T17:28:45Z | The AnyML model is a customized YOLOv5 model. It is based on the Ultralytics YOLOv5. The base model was pre-trained on COCO with 80 classes.
|
srSergio/bakerzduzen-artstyle | srSergio | 2022-10-23T17:33:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2022-10-23T17:33:03Z | ---
license: creativeml-openrail-m
---
|
pepa/bigbird-roberta-base-snli | pepa | 2022-10-23T17:11:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"big_bird",
"text-classification",
"generated_from_trainer",
"dataset:snli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-23T17:11:06Z | ---
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: bigbird-roberta-base-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-roberta-base-snli
This model was trained from scratch on the snli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2738
- eval_p: 0.9034
- eval_r: 0.9033
- eval_f1: 0.9033
- eval_runtime: 10.9262
- eval_samples_per_second: 899.126
- eval_steps_per_second: 56.195
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
thothai/turkce-kufur-tespiti | thothai | 2022-10-23T16:55:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-23T16:45:09Z | ---
license: afl-3.0
---
# Thoth Ai was created to detect Turkish insults and profanity. It may be used in academic projects provided the source is cited.
## Validation Metrics
- Loss: 0.230
- Accuracy: 0.936
- Macro F1: 0.927
- Micro F1: 0.936
- Weighted F1: 0.936
- Macro Precision: 0.929
- Micro Precision: 0.936
- Weighted Precision: 0.936
- Macro Recall: 0.925
- Micro Recall: 0.936
- Weighted Recall: 0.936
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("thothai/turkce-kufur-tespiti", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("thothai/turkce-kufur-tespiti", use_auth_token=True)
inputs = tokenizer("Merhaba", return_tensors="pt")
outputs = model(**inputs)
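
# The forward pass above returns raw logits. As a sketch (the label names come from
# the repository's config, which is not shown in this card), the predicted class can
# be read off like this:
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])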
``` |
crumb/dalle-paint | crumb | 2022-10-23T16:22:50Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-23T16:22:37Z | ---
license: mit
---
### dalle-paint on Stable Diffusion
This is the `<dalle-paint>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
|
ddebnath/layoutlmv3-finetuned-cord_100 | ddebnath | 2022-10-23T15:37:39Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-10-23T14:42:28Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9485842026825634
- name: Recall
type: recall
value: 0.9528443113772455
- name: F1
type: f1
value: 0.9507094846900671
- name: Accuracy
type: accuracy
value: 0.9592529711375212
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1978
- Precision: 0.9486
- Recall: 0.9528
- F1: 0.9507
- Accuracy: 0.9593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 0.9543 | 0.7832 | 0.8166 | 0.7996 | 0.8226 |
| 1.3644 | 3.12 | 500 | 0.5338 | 0.8369 | 0.8683 | 0.8523 | 0.8824 |
| 1.3644 | 4.69 | 750 | 0.3658 | 0.8840 | 0.9072 | 0.8955 | 0.9232 |
| 0.3802 | 6.25 | 1000 | 0.3019 | 0.9156 | 0.9251 | 0.9203 | 0.9334 |
| 0.3802 | 7.81 | 1250 | 0.2833 | 0.9094 | 0.9237 | 0.9165 | 0.9346 |
| 0.2061 | 9.38 | 1500 | 0.2241 | 0.9377 | 0.9469 | 0.9423 | 0.9525 |
| 0.2061 | 10.94 | 1750 | 0.2282 | 0.9304 | 0.9409 | 0.9356 | 0.9474 |
| 0.1416 | 12.5 | 2000 | 0.2017 | 0.9509 | 0.9566 | 0.9537 | 0.9610 |
| 0.1416 | 14.06 | 2250 | 0.2006 | 0.9472 | 0.9536 | 0.9504 | 0.9614 |
| 0.1056 | 15.62 | 2500 | 0.1978 | 0.9486 | 0.9528 | 0.9507 | 0.9593 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|