Schema of the records below (column, type, and observed range or cardinality):

| Column | Type | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-14 06:27:53 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 519 values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-14 06:27:45 |
| card | string | length 11 to 1.01M |
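The records that follow carry per-model Hub metadata in exactly these columns. As a minimal, hedged sketch (not part of the dump), comparable metadata can be pulled with the `huggingface_hub` client; attribute names may vary slightly across library releases, and the `card` body is loaded separately via `ModelCard`.

```python
# Hedged sketch: list model metadata comparable to the columns above.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(author="esb", limit=5):
    # id, pipeline_tag, downloads and likes mirror the dump's columns
    print(m.id, m.pipeline_tag, m.downloads, m.likes)
```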
esb/conformer-rnnt-voxpopuli
esb
2022-10-24T15:13:22Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:facebook/voxpopuli", "region:us" ]
null
2022-10-24T15:13:07Z
--- language: - en tags: - esb datasets: - esb/datasets - facebook/voxpopuli --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="voxpopuli" \ --output_dir="./" \ --run_name="conformer-rnnt-voxpopuli" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
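The card above only covers training. Below is a minimal inference sketch, not taken from the card, that loads the base NeMo checkpoint named in the command (`stt_en_conformer_transducer_xlarge`); the audio path is a placeholder.

```python
# Hedged sketch: load the base Conformer-Transducer checkpoint with NeMo and
# transcribe a local file. "sample.wav" is a placeholder path.
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_transducer_xlarge")
print(model.transcribe(["sample.wav"]))
```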
esb/conformer-rnnt-tedlium
esb
2022-10-24T15:11:08Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:LIUM/tedlium", "region:us" ]
null
2022-10-24T15:10:54Z
--- language: - en tags: - esb datasets: - esb/datasets - LIUM/tedlium --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="tedlium" \ --output_dir="./" \ --run_name="rnnt-tedlium-baseline" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
esb/conformer-rnnt-librispeech
esb
2022-10-24T15:05:56Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:librispeech_asr", "region:us" ]
null
2022-10-24T15:05:41Z
--- language: - en tags: - esb datasets: - esb/datasets - librispeech_asr --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="librispeech" \ --output_dir="./" \ --run_name="conformer-rnnt-librispeech" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
esb/whisper-aed-earnings22
esb
2022-10-24T14:55:59Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "region:us" ]
null
2022-10-24T14:55:42Z
--- language: - en tags: - esb datasets: - esb/datasets - revdotcom/earnings22 --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/openai/whisper.git ``` Then execute the command: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="earnings22" \ --max_steps="2500" \ --output_dir="./" \ --run_name="whisper-earnings22" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="500" \ --save_strategy="steps" \ --save_steps="500" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
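For reference, a minimal transcription sketch with the `whisper` Python package named in the install step (not part of the original card); the audio path is a placeholder.

```python
# Hedged sketch: transcribe a file with the openai-whisper API used by the
# medium.en checkpoint referenced above.
import whisper

model = whisper.load_model("medium.en")
result = model.transcribe("audio.wav")  # placeholder path
print(result["text"])
```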
esb/whisper-aed-gigaspeech
esb
2022-10-24T14:50:45Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:speechcolab/gigaspeech", "region:us" ]
null
2022-10-24T14:50:28Z
--- language: - en tags: - esb datasets: - esb/datasets - speechcolab/gigaspeech --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/openai/whisper.git ``` Then execute the command: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="gigaspeech" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-gigaspeech" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
esb/whisper-aed-common_voice
esb
2022-10-24T14:43:15Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:mozilla-foundation/common_voice_9_0", "region:us" ]
null
2022-10-24T14:42:58Z
--- language: - en tags: - esb datasets: - esb/datasets - mozilla-foundation/common_voice_9_0 --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/openai/whisper.git ``` Then execute the command: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="common_voice" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-common-voice" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --max_eval_duration_in_seconds="20" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
esb/wav2vec2-aed-switchboard
esb
2022-10-24T14:35:43Z
3
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:35:29Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/switchboard --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="switchboard" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-switchboard" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="260" \ --final_generation_num_beams="5" \ --generation_length_penalty="0.8" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
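The wav2vec2-AED checkpoints are Flax `speech-encoder-decoder` models. A hedged inference sketch follows (not part of the card); it assumes the repository also hosts a feature extractor and tokenizer loadable via `AutoProcessor`, and that the Flax weights convert on the fly with `from_flax=True`.

```python
# Hedged sketch: decode a dummy 16 kHz signal with the fine-tuned AED model.
import numpy as np
import torch
from transformers import AutoProcessor, SpeechEncoderDecoderModel

repo = "esb/wav2vec2-aed-switchboard"
processor = AutoProcessor.from_pretrained(repo)
model = SpeechEncoderDecoderModel.from_pretrained(repo, from_flax=True)

audio = np.zeros(16_000, dtype=np.float32)  # placeholder for one second of audio
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(inputs.input_values, max_length=40)
print(processor.batch_decode(ids, skip_special_tokens=True))
```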
esb/wav2vec2-aed-ami
esb
2022-10-24T14:33:44Z
4
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:edinburghcstr/ami", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:33:31Z
--- language: - en tags: - esb datasets: - esb/datasets - edinburghcstr/ami --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="ami" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-ami" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="225" \ --final_generation_num_beams="5" \ --generation_length_penalty="1.4" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-aed-earnings22
esb
2022-10-24T14:31:37Z
5
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:31:23Z
--- language: - en tags: - esb datasets: - esb/datasets - revdotcom/earnings22 --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-earnings22" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="25" \ --max_steps="50000" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --generation_length_penalty="1.2" \ --final_generation_max_length="200" \ --final_generation_num_beams="5" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-aed-gigaspeech
esb
2022-10-24T14:25:48Z
8
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:speechcolab/gigaspeech", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:25:35Z
--- language: - en tags: - esb datasets: - esb/datasets - speechcolab/gigaspeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="gigaspeech" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-gigaspeech" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="200" \ --final_generation_num_beams="14" \ --generation_length_penalty="1.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-aed-voxpopuli
esb
2022-10-24T14:22:56Z
4
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:facebook/voxpopuli", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:22:42Z
--- language: - en tags: - esb datasets: - esb/datasets - facebook/voxpopuli --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="voxpopuli" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-voxpopuli" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="1" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="10001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="225" \ --final_generation_num_beams="5" \ --generation_length_penalty="0.8" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-aed-common_voice
esb
2022-10-24T14:19:00Z
3
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:mozilla-foundation/common_voice_9_0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:18:46Z
--- language: - en tags: - esb datasets: - esb/datasets - mozilla-foundation/common_voice_9_0 --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="common_voice" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-common-voice" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="200" \ --final_generation_num_beams="14" \ --generation_length_penalty="1.2" \ --max_eval_duration_in_seconds="20" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-ctc-switchboard
esb
2022-10-24T14:12:06Z
4
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:11:58Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/switchboard --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-switchboard-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="switchboard" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-switchboard" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
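The CTC checkpoints pair a wav2vec2 encoder with the dataset-specific tokenizer trained by `get_ctc_tokenizer.py`. A hedged decoding sketch follows (not part of the card); it assumes the repository's processor loads with `Wav2Vec2Processor` and that the Flax weights convert via `from_flax=True`.

```python
# Hedged sketch: greedy CTC decoding of a dummy 16 kHz signal.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "esb/wav2vec2-ctc-switchboard"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo, from_flax=True)

audio = np.zeros(16_000, dtype=np.float32)  # placeholder audio
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```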
esb/wav2vec2-ctc-earnings22
esb
2022-10-24T14:09:53Z
4
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T14:09:46Z
--- language: - en tags: - esb datasets: - esb/datasets - revdotcom/earnings22 --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-earnings22-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-earnings22" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-ctc-tedlium
esb
2022-10-24T13:59:30Z
3
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:LIUM/tedlium", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T13:59:22Z
--- language: - en tags: - esb datasets: - esb/datasets - LIUM/tedlium --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-tedlium-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="tedlium" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-tedlium" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
esb/wav2vec2-ctc-librispeech
esb
2022-10-24T13:56:59Z
3
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T13:56:52Z
--- language: - en tags: - esb datasets: - esb/datasets - librispeech_asr --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-librispeech-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="librispeech" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-librispeech" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
edbeeching/doom_battle2_3333
edbeeching
2022-10-24T13:19:21Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:18:51Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_battle2 type: doom_battle2 metrics: - type: mean_reward value: 47.23 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_battle2** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
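These Sample Factory cards list only the environment and mean reward. As a hedged sketch (not part of the card), the checkpoint files can be fetched locally with `huggingface_hub`; evaluation and rendering are then run with Sample Factory's own CLI, documented in its repository.

```python
# Hedged sketch: download the trained APPO checkpoint for local evaluation.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="edbeeching/doom_battle2_3333")
print("checkpoint downloaded to:", local_dir)
```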
edbeeching/doom_health_gathering_supreme_3333
edbeeching
2022-10-24T13:17:54Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:17:29Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 66.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_health_gathering_3333
edbeeching
2022-10-24T13:17:16Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:16:50Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering type: doom_health_gathering metrics: - type: mean_reward value: 66.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_defend_the_line_3333
edbeeching
2022-10-24T13:16:37Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:16:09Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_line type: doom_defend_the_line metrics: - type: mean_reward value: 37.00 +/- 3.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_line** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_defend_the_center_3333
edbeeching
2022-10-24T13:15:57Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:15:33Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_center type: doom_defend_the_center metrics: - type: mean_reward value: 24.00 +/- 1.41 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_center** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_my_way_home_3333
edbeeching
2022-10-24T13:14:44Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:14:21Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_my_way_home type: doom_my_way_home metrics: - type: mean_reward value: 0.98 +/- 0.01 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_my_way_home** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_two_colors_easy_3333
edbeeching
2022-10-24T13:12:47Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:11:50Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_two_colors_easy type: doom_two_colors_easy metrics: - type: mean_reward value: 59.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_two_colors_easy** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_basic_3333
edbeeching
2022-10-24T13:11:37Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:11:12Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_basic type: doom_basic metrics: - type: mean_reward value: 0.77 +/- 0.12 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_basic** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_battle2_2222
edbeeching
2022-10-24T13:09:30Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:09:03Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_battle2 type: doom_battle2 metrics: - type: mean_reward value: 30.93 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_battle2** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_defend_the_center_2222
edbeeching
2022-10-24T13:06:16Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:05:53Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_center type: doom_defend_the_center metrics: - type: mean_reward value: 25.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_center** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_deadly_corridor_2222
edbeeching
2022-10-24T13:05:41Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:05:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_deadly_corridor type: doom_deadly_corridor metrics: - type: mean_reward value: 19.24 +/- 7.44 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_deadly_corridor** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_my_way_home_2222
edbeeching
2022-10-24T13:05:07Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:04:42Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_my_way_home type: doom_my_way_home metrics: - type: mean_reward value: 0.98 +/- 0.01 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_my_way_home** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_defend_the_center_flat_actions_2222
edbeeching
2022-10-24T13:04:29Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:04:03Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_center_flat_actions type: doom_defend_the_center_flat_actions metrics: - type: mean_reward value: 24.67 +/- 0.47 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_center_flat_actions** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_basic_2222
edbeeching
2022-10-24T13:02:36Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:02:10Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_basic type: doom_basic metrics: - type: mean_reward value: 0.76 +/- 0.11 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_basic** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_deathmatch_bots_1111
edbeeching
2022-10-24T13:01:50Z
1
1
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:01:26Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_deathmatch_bots type: doom_deathmatch_bots metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_deathmatch_bots** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_duel_bots_1111
edbeeching
2022-10-24T13:01:10Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T13:00:43Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_duel_bots type: doom_duel_bots metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_duel_bots** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_battle_1111
edbeeching
2022-10-24T12:59:47Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T12:59:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_battle type: doom_battle metrics: - type: mean_reward value: 60.15 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_battle** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_defend_the_center_1111
edbeeching
2022-10-24T12:58:26Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T12:58:04Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_center type: doom_defend_the_center metrics: - type: mean_reward value: 24.67 +/- 0.47 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_center** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/doom_basic_1111
edbeeching
2022-10-24T12:54:44Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T12:54:23Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_basic type: doom_basic metrics: - type: mean_reward value: 0.75 +/- 0.10 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_basic** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
sohamtiwari3120/bert-finetuned-ner
sohamtiwari3120
2022-10-24T12:41:38Z
10
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-23T06:05:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0589 - Overall Precision: 0.9362 - Overall Recall: 0.9500 - Overall F1: 0.9430 - Overall Accuracy: 0.9873 - Loc F1: 0.9616 - Misc F1: 0.8783 - Org F1: 0.9121 - Per F1: 0.9797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Misc F1 | Org F1 | Per F1 | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:-------:|:------:|:------:| | 0.0745 | 1.0 | 1756 | 0.0556 | 0.9183 | 0.9345 | 0.9263 | 0.9848 | 0.9501 | 0.8499 | 0.8775 | 0.9765 | | 0.0321 | 2.0 | 3512 | 0.0542 | 0.9346 | 0.9475 | 0.9410 | 0.9872 | 0.9618 | 0.8761 | 0.9073 | 0.9773 | | 0.0172 | 3.0 | 5268 | 0.0589 | 0.9362 | 0.9500 | 0.9430 | 0.9873 | 0.9616 | 0.8783 | 0.9121 | 0.9797 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu102 - Datasets 2.6.1 - Tokenizers 0.13.1
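A minimal usage sketch for this token-classification checkpoint (not part of the auto-generated card), assuming it loads through the standard `pipeline` API.

```python
# Hedged sketch: run named-entity recognition with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sohamtiwari3120/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```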
qanastek/FrenchMedMCQA-BioBERT-V1.1-Wikipedia-BM25
qanastek
2022-10-24T12:38:40Z
9
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "fr", "dataset:FrenchMedMCQA", "arxiv:1910.03771", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-21T11:30:34Z
--- language: fr datasets: - FrenchMedMCQA license: apache-2.0 model-index: - name: qanastek/FrenchMedMCQA-BioBERT-V1.1-Wikipedia-BM25 results: - task: type: question-answering name: Question Answering dataset: name: FrenchMedMCQA type: FrenchMedMCQA config: FrenchMedMCQA split: validation metrics: - name: Exact Match type: exact_match value: 16.72 verified: true - name: Hamming Score type: hamming score value: 38.72 verified: true widget: - text: "Quels sont les signes cliniques retrouvés dans l'intoxication par la digoxine ? : \n (A) Douleur oculaire (B) Troubles digestifs (C) BAV (D) Hallucinations (E) Hyperthermie\n Intoxication par les venins d'animaux" --- # FrenchMedMCQA : Multiple-choice question answering on pharmacology exams using BioBERT V1.1, Wikipedia external knowledge and BM25 retriever - Corpora: [FrenchMedMCQA](https://github.com/qanastek/FrenchMedMCQA) - Model: [BioBERT V1.1](https://huggingface.co/dmis-lab/biobert-v1.1) - Number of Epochs: 10 **People Involved** * [Yanis LABRAK](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1) * [Adrien BAZOGE](https://fr.linkedin.com/in/adrien-bazoge-6b511b145) (2) * [Richard DUFOUR](https://cv.archives-ouvertes.fr/richard-dufour) (2) * [Béatrice DAILLE](https://scholar.google.com/citations?user=-damXYEAAAAJ&hl=fr) (2) * [Pierre-Antoine GOURRAUD](https://fr.linkedin.com/in/pierre-antoine-gourraud-35779b6) (3) * [Emmanuel MORIN](https://scholar.google.fr/citations?user=tvTEtM0AAAAJ&hl=fr) (2) * [Mickael ROUVIER](https://scholar.google.fr/citations?user=0fmu-VsAAAAJ&hl=fr) (1) **Affiliations** 1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France. 2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France. 3. [CHU Nantes](https://www.chu-nantes.fr/), Nantes University, Nantes, France. ## Demo: How to use in HuggingFace Transformers Requires [Transformers](https://pypi.org/project/transformers/): ```pip install transformers``` ```python from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline path_model = "qanastek/FrenchMedMCQA-BioBERT-V1.1-Wikipedia-BM25" tokenizer = AutoTokenizer.from_pretrained(path_model) model = AutoModelForSequenceClassification.from_pretrained(path_model) pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=False, device=0) # GPU dataset = load_dataset("qanastek/FrenchMedMCQA")["test"] for e in dataset: prediction = pipeline(e["bert_text"], truncation=True, max_length=model.config.max_position_embeddings) ``` Output: ![Preview Output](preview.PNG) ## Training data The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is of 13k words, of which 3.8k are estimated medical domain-specific words (i.e. a word related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17 % of the words) and 2 in each answer (36 % of the words). 
On average, a medical domain-specific word is present in 2 questions and in 8 answers. | # Answers | Training | Validation | Test | Total | |:---------:|:--------:|:----------:|:----:|:-----:| | 1 | 595 | 164 | 321 | 1,080 | | 2 | 528 | 45 | 97 | 670 | | 3 | 718 | 71 | 141 | 930 | | 4 | 296 | 30 | 56 | 382 | | 5 | 34 | 2 | 7 | 43 | | Total | 2171 | 312 | 622 | 3,105 | ## Evaluation results The test corpora used for this evaluation is available on [Github](https://github.com/qanastek/FrenchMedMCQA). | Architecture | Hamming | EMR | Hamming | EMR | Hamming | EMR | Hamming | EMR | Hamming | EMR | |:----------------:|:-------:|:-----:|:-------:|:-----:|:-------:|:-----:|:-------:|:-----:|:-------:|:-----:| | BioBERT V1.1 | 36.19 | 15.43 | **38.72** | 16.72 | 33.33 | 14.14 | 35.13 | 16.23 | 34.27 | 13.98 | | PubMedBERT | 33.98 | 14.14 | 34.00 | 13.98 | 35.66 | 15.59 | 33.87 | 14.79 | 35.44 | 14.79 | | CamemBERT-base | 36.24 | 16.55 | 34.19 | 14.46 | 34.78 | 15.43 | 34.66 | 14.79 | 34.61 | 14.95 | | XLM-RoBERTa-base | 37.92 | 17.20 | 31.26 | 11.89 | 35.84 | 16.07 | 32.47 | 14.63 | 33.00 | 14.95 | | BART-base | 31.93 | 15.91 | 34.98 | **18.64** | 33.80 | 17.68 | 29.65 | 12.86 | 34.65 | 18.32 | ## BibTeX Citations Please cite the following paper when using this model. FrenchMedMCQA corpus and linked tools: ```latex @unpublished{labrak:hal-03824241, TITLE = {{FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain}}, AUTHOR = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Daille, B{\'e}atrice and Gourraud, Pierre-Antoine and Morin, Emmanuel and Rouvier, Mickael}, URL = {https://hal.archives-ouvertes.fr/hal-03824241}, NOTE = {working paper or preprint}, YEAR = {2022}, MONTH = Oct, PDF = {https://hal.archives-ouvertes.fr/hal-03824241/file/LOUHI_2022___QA-3.pdf}, HAL_ID = {hal-03824241}, HAL_VERSION = {v1}, } ``` HuggingFace's Transformers : ```latex @misc{https://doi.org/10.48550/arxiv.1910.03771, doi = {10.48550/ARXIV.1910.03771}, url = {https://arxiv.org/abs/1910.03771}, author = {Wolf, Thomas and Debut, Lysandre and Sanh, Victor and Chaumond, Julien and Delangue, Clement and Moi, Anthony and Cistac, Pierric and Rault, Tim and Louf, Rémi and Funtowicz, Morgan and Davison, Joe and Shleifer, Sam and von Platen, Patrick and Ma, Clara and Jernite, Yacine and Plu, Julien and Xu, Canwen and Scao, Teven Le and Gugger, Sylvain and Drame, Mariama and Lhoest, Quentin and Rush, Alexander M.}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {HuggingFace's Transformers: State-of-the-art Natural Language Processing}, publisher = {arXiv}, year = {2019}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ## Acknowledgment This work was financially supported by [Zenidoc](https://zenidoc.fr/), the [DIETS](https://anr-diets.univ-avignon.fr/) project financed by the Agence Nationale de la Recherche (ANR) under contract ANR-20-CE23-0005 and the ANR [AIBy4](https://aiby4.ls2n.fr/) (ANR-20-THIA-0011).
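The card reports Exact Match and Hamming score for multi-answer questions. A hedged sketch of both metrics follows (not from the card), assuming Hamming score means the per-question intersection-over-union of predicted and reference answer sets averaged over questions; the paper's exact definition may differ.

```python
# Hedged sketch of the two reported multi-label metrics under the stated assumption.
def exact_match_ratio(refs, preds):
    # fraction of questions where the predicted answer set matches exactly
    return sum(set(r) == set(p) for r, p in zip(refs, preds)) / len(refs)

def hamming_score(refs, preds):
    # mean per-question |intersection| / |union| of answer labels
    scores = []
    for r, p in zip(refs, preds):
        r, p = set(r), set(p)
        scores.append(len(r & p) / len(r | p) if (r | p) else 1.0)
    return sum(scores) / len(scores)

refs = [["A", "C"], ["B"], ["A", "B", "E"]]
preds = [["A"], ["B"], ["A", "B", "E"]]
print(exact_match_ratio(refs, preds), hamming_score(refs, preds))
```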
esc-bench/conformer-rnnt-switchboard
esc-bench
2022-10-24T11:59:22Z
6
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "region:us" ]
null
2022-10-03T09:18:39Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/switchboard --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="switchboard" \ --output_dir="./" \ --run_name="conformer-rnnt-switchboard" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
Lwhieldon/distilbert-base-uncased-finetuned-emotion
Lwhieldon
2022-10-24T11:58:12Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-11T18:49:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.928 - name: F1 type: f1 value: 0.9280714609088352 --- # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2185 - Accuracy: 0.928 - F1: 0.9281 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8374 | 1.0 | 250 | 0.3188 | 0.9045 | 0.9012 | | 0.254 | 2.0 | 500 | 0.2185 | 0.928 | 0.9281 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cpu - Datasets 2.4.0 - Tokenizers 0.12.1
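A minimal usage sketch for this emotion classifier (not part of the auto-generated card), assuming it loads through the standard `pipeline` API.

```python
# Hedged sketch: classify the emotion of a sentence with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Lwhieldon/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
```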
esc-bench/conformer-rnnt-earnings22
esc-bench
2022-10-24T11:55:33Z
8
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "region:us" ]
null
2022-10-03T09:19:24Z
--- language: - en tags: - esb datasets: - esb/datasets - revdotcom/earnings22 --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --run_name="conformer-rnnt-earnings22" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
esc-bench/conformer-rnnt-tedlium
esc-bench
2022-10-24T11:49:03Z
4
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:LIUM/tedlium", "region:us" ]
null
2022-10-03T08:54:27Z
--- language: - en tags: - esb datasets: - esb/datasets - LIUM/tedlium --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="tedlium" \ --output_dir="./" \ --run_name="rnnt-tedlium-baseline" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
esc-bench/conformer-rnnt-common_voice
esc-bench
2022-10-24T11:47:37Z
9
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:mozilla-foundation/common_voice_9_0", "region:us" ]
null
2022-10-03T08:52:28Z
--- language: - en tags: - esb datasets: - esb/datasets - mozilla-foundation/common_voice_9_0 --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="common_voice" \ --output_dir="./" \ --run_name="conformer-rnnt-common-voice" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --max_eval_duration_in_seconds="20" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
esc-bench/conformer-rnnt-librispeech
esc-bench
2022-10-24T11:45:55Z
6
0
nemo
[ "nemo", "esb", "en", "dataset:esb/datasets", "dataset:librispeech_asr", "region:us" ]
null
2022-10-03T08:49:07Z
--- language: - en tags: - esb datasets: - esb/datasets - librispeech_asr --- To reproduce this run, first install NVIDIA NeMo according to the [official instructions](https://github.com/NVIDIA/NeMo#installation), then execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esb/datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="librispeech" \ --output_dir="./" \ --run_name="conformer-rnnt-librispeech" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
Aunsiels/ChildGPT
Aunsiels
2022-10-24T11:37:39Z
21
3
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "children", "infant", "en", "dataset:Aunsiels/InfantBooks", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-19T08:59:58Z
--- language: - en tags: - children - infant datasets: - Aunsiels/InfantBooks --- A GPT2-model finetuned on children's books. ``` Romero, J., & Razniewski, S. (2022). Do Children Texts Hold The Key To Commonsense Knowledge? In Proceedings of the 2022 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. ```
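A minimal generation sketch for this GPT-2 checkpoint (not part of the original card); the prompt and sampling settings are illustrative, not taken from the training setup.

```python
# Hedged sketch: sample a continuation from the children's-book GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="Aunsiels/ChildGPT")
print(generator("The little fox looked at the moon and", max_new_tokens=40, do_sample=True))
```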
esc-bench/whisper-aed-switchboard
esc-bench
2022-10-24T11:37:35Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "region:us" ]
null
2022-10-03T07:54:54Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/switchboard --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/openai/whisper.git ``` Then execute the command: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="switchboard" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-switchboard" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
esc-bench/whisper-aed-tedlium
esc-bench
2022-10-24T11:37:18Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:LIUM/tedlium", "region:us" ]
null
2022-10-03T07:56:33Z
--- language: - en tags: - esb datasets: - esb/datasets - LIUM/tedlium --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/openai/whisper.git ``` Then execute the command: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="tedlium" \ --max_steps="2500" \ --output_dir="./" \ --run_name="whisper-tedlium" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="500" \ --save_strategy="steps" \ --save_steps="500" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
esc-bench/whisper-aed-common_voice
esc-bench
2022-10-24T11:37:15Z
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:mozilla-foundation/common_voice_9_0", "region:us" ]
null
2022-10-03T07:57:09Z
--- language: - en tags: - esb datasets: - esb/datasets - mozilla-foundation/common_voice_9_0 --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/openai/whisper.git ``` Then execute the command: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="common_voice" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-common-voice" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --max_eval_duration_in_seconds="20" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
esc-bench/wav2vec2-aed-switchboard
esc-bench
2022-10-24T10:51:50Z
3
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T15:36:32Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/switchboard --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="switchboard" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-switchboard" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="260" \ --final_generation_num_beams="5" \ --generation_length_penalty="0.8" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-aed-earnings22
esc-bench
2022-10-24T10:48:40Z
5
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T14:39:19Z
--- language: - en tags: - esb datasets: - esb/datasets - revdotcom/earnings22 --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-earnings22" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="25" \ --max_steps="50000" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --generation_length_penalty="1.2" \ --final_generation_max_length="200" \ --final_generation_num_beams="5" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-aed-gigaspeech
esc-bench
2022-10-24T10:45:47Z
5
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:speechcolab/gigaspeech", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T14:39:14Z
--- language: - en tags: - esb datasets: - esb/datasets - speechcolab/gigaspeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="gigaspeech" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-gigaspeech" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="200" \ --final_generation_num_beams="14" \ --generation_length_penalty="1.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-aed-tedlium
esc-bench
2022-10-24T10:42:16Z
7
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:LIUM/tedlium", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T14:39:08Z
--- language: - en tags: - esb datasets: - esb/datasets - LIUM/tedlium --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="tedlium" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-tedlium" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="250" \ --final_generation_num_beams="12" \ --generation_length_penalty="1.5" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-aed-common_voice
esc-bench
2022-10-24T10:39:50Z
5
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:mozilla-foundation/common_voice_9_0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T14:39:06Z
--- language: - en tags: - esb datasets: - esb/datasets - mozilla-foundation/common_voice_9_0 --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="common_voice" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-common-voice" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="200" \ --final_generation_num_beams="14" \ --generation_length_penalty="1.2" \ --max_eval_duration_in_seconds="20" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-aed-librispeech
esc-bench
2022-10-24T10:37:46Z
4
0
transformers
[ "transformers", "jax", "speech-encoder-decoder", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T14:39:03Z
--- language: - en tags: - esb datasets: - esb/datasets - librispeech_asr --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esb/datasets" \ --model_name_or_path="esb/wav2vec2-aed-pretrained" \ --dataset_config_name="librispeech" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-librispeech" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="300" \ --final_generation_num_beams="12" \ --generation_length_penalty="1.6" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-ctc-chime4
esc-bench
2022-10-24T10:35:20Z
4
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:ldc/chime-4", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T16:41:23Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/chime-4 --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-chime4-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="chime4" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-chime4" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-ctc-switchboard
esc-bench
2022-10-24T10:34:16Z
5
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:ldc/switchboard", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T16:39:37Z
--- language: - en tags: - esb datasets: - esb/datasets - ldc/switchboard --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-switchboard-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="switchboard" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-switchboard" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
lotrtt/layoutlmv3-finetuned-cord_100
lotrtt
2022-10-24T10:33:41Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:cord-layoutlmv3", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T09:38:49Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - cord-layoutlmv3 metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-cord_100 results: - task: name: Token Classification type: token-classification dataset: name: cord-layoutlmv3 type: cord-layoutlmv3 config: cord split: train args: cord metrics: - name: Precision type: precision value: 0.9387001477104875 - name: Recall type: recall value: 0.9513473053892215 - name: F1 type: f1 value: 0.9449814126394053 - name: Accuracy type: accuracy value: 0.9567062818336163 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-cord_100 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 0.2137 - Precision: 0.9387 - Recall: 0.9513 - F1: 0.9450 - Accuracy: 0.9567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 250 | 1.0609 | 0.6596 | 0.7440 | 0.6993 | 0.7687 | | 1.4193 | 3.12 | 500 | 0.5989 | 0.8403 | 0.8623 | 0.8511 | 0.8663 | | 1.4193 | 4.69 | 750 | 0.4037 | 0.8795 | 0.9012 | 0.8902 | 0.9087 | | 0.4182 | 6.25 | 1000 | 0.3264 | 0.8980 | 0.9162 | 0.9070 | 0.9257 | | 0.4182 | 7.81 | 1250 | 0.2705 | 0.9190 | 0.9341 | 0.9265 | 0.9410 | | 0.2258 | 9.38 | 1500 | 0.2450 | 0.9311 | 0.9401 | 0.9356 | 0.9461 | | 0.2258 | 10.94 | 1750 | 0.2350 | 0.9341 | 0.9439 | 0.9389 | 0.9491 | | 0.1576 | 12.5 | 2000 | 0.2219 | 0.9350 | 0.9476 | 0.9413 | 0.9508 | | 0.1576 | 14.06 | 2250 | 0.2122 | 0.9373 | 0.9506 | 0.9439 | 0.9559 | | 0.1207 | 15.62 | 2500 | 0.2137 | 0.9387 | 0.9513 | 0.9450 | 0.9567 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
esc-bench/wav2vec2-ctc-ami
esc-bench
2022-10-24T10:33:28Z
4
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:edinburghcstr/ami", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T16:38:02Z
--- language: - en tags: - esb datasets: - esb/datasets - edinburghcstr/ami --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-ami-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="ami" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-ami" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-ctc-earnings22
esc-bench
2022-10-24T10:32:37Z
3
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:revdotcom/earnings22", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T16:36:27Z
--- language: - en tags: - esb datasets: - esb/datasets - revdotcom/earnings22 --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-earnings22-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-earnings22" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-ctc-spgispeech
esc-bench
2022-10-24T10:31:52Z
5
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:kensho/spgispeech", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T16:34:49Z
--- language: - en tags: - esb datasets: - esb/datasets - kensho/spgispeech --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-spgispeech-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="spgispeech" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-spgispeech" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
esc-bench/wav2vec2-ctc-librispeech
esc-bench
2022-10-24T10:27:51Z
3
0
transformers
[ "transformers", "jax", "wav2vec2", "automatic-speech-recognition", "esb", "en", "dataset:esb/datasets", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-30T16:48:31Z
--- language: - en tags: - esb datasets: - esb/datasets - librispeech_asr --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esb/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-librispeech-tokenizer" \ --dataset_name="esb/datasets" \ --dataset_config_name="librispeech" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-librispeech" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
pcoloc/autotrain-mikrotik-7-7-1860563590
pcoloc
2022-10-24T10:15:30Z
4
0
transformers
[ "transformers", "joblib", "autotrain", "tabular", "regression", "tabular-regression", "dataset:pcoloc/autotrain-data-mikrotik-7-7", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
tabular-regression
2022-10-24T09:56:43Z
--- tags: - autotrain - tabular - regression - tabular-regression datasets: - pcoloc/autotrain-data-mikrotik-7-7 co2_eq_emissions: emissions: 7.1011693391153115 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 1860563590 - CO2 Emissions (in grams): 7.1012 ## Validation Metrics - Loss: 52.881 - R2: 0.584 - MSE: 2796.357 - MAE: 37.116 - RMSLE: 0.518 ## Usage ```python import json import joblib import pandas as pd model = joblib.load('model.joblib') config = json.load(open('config.json')) features = config['features'] data = pd.read_csv("data.csv") data = data[features] data.columns = ["feat_" + str(col) for col in data.columns] predictions = model.predict(data) ```
teacookies/autotrain-24102022-cert7-1860363608
teacookies
2022-10-24T10:14:31Z
15
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-24102022-cert7", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T10:03:38Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-24102022-cert7 co2_eq_emissions: emissions: 0.0825722192587215 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1860363608 - CO2 Emissions (in grams): 0.0826 ## Validation Metrics - Loss: 0.002 - Accuracy: 0.999 - Precision: 0.972 - Recall: 0.983 - F1: 0.978 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert7-1860363608 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert7-1860363608", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert7-1860363608", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
edbeeching/mujoco_swimmer_1111
edbeeching
2022-10-24T09:40:39Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T09:40:24Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: mujoco_swimmer type: mujoco_swimmer metrics: - type: mean_reward value: 95.68 +/- 3.27 name: mean_reward verified: false --- An **APPO** model trained on the **mujoco_swimmer** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
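The card gives no usage instructions; as a rough sketch (assuming the standard Sample Factory 2.0 Hub utilities and the MuJoCo example scripts are available — exact entry points may differ between versions), the checkpoint could be downloaded and evaluated like this:
```
# download the trained checkpoint from the Hugging Face Hub (assumes sample-factory>=2.0 is installed)
python -m sample_factory.huggingface.load_from_hub -r edbeeching/mujoco_swimmer_1111 -d ./train_dir

# evaluate/visualise the APPO policy on the mujoco_swimmer environment
python -m sf_examples.mujoco.enjoy_mujoco --algo=APPO --env=mujoco_swimmer --train_dir=./train_dir --experiment=mujoco_swimmer_1111
```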
edbeeching/mujoco_humanoid_1111
edbeeching
2022-10-24T09:39:01Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-24T09:38:44Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: mujoco_humanoid type: mujoco_humanoid metrics: - type: mean_reward value: 7279.89 +/- 39.97 name: mean_reward verified: false --- An **APPO** model trained on the **mujoco_humanoid** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
Ahmed-Abousetta/autotrain-abunawaf-user-1860163586
Ahmed-Abousetta
2022-10-24T09:13:41Z
2
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-user", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:12:38Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-user co2_eq_emissions: emissions: 1.2062625201613788 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860163586 - CO2 Emissions (in grams): 1.2063 ## Validation Metrics - Loss: 0.312 - Accuracy: 0.890 - Precision: 0.720 - Recall: 0.735 - AUC: 0.883 - F1: 0.727 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-user-1860163586 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163586", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163586", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-user-1860163587
Ahmed-Abousetta
2022-10-24T09:13:41Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-user", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:12:45Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-user co2_eq_emissions: emissions: 1.58506390711431 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860163587 - CO2 Emissions (in grams): 1.5851 ## Validation Metrics - Loss: 0.339 - Accuracy: 0.878 - Precision: 0.702 - Recall: 0.673 - AUC: 0.852 - F1: 0.688 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-user-1860163587 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163587", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163587", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-user-1860163585
Ahmed-Abousetta
2022-10-24T09:13:23Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-user", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:12:32Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-user co2_eq_emissions: emissions: 1.0008458491802985 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860163585 - CO2 Emissions (in grams): 1.0008 ## Validation Metrics - Loss: 0.304 - Accuracy: 0.890 - Precision: 0.729 - Recall: 0.714 - AUC: 0.889 - F1: 0.722 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-user-1860163585 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163585", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163585", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-user-1860163583
Ahmed-Abousetta
2022-10-24T09:13:09Z
2
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-user", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:12:22Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-user co2_eq_emissions: emissions: 0.6436453501778651 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860163583 - CO2 Emissions (in grams): 0.6436 ## Validation Metrics - Loss: 0.344 - Accuracy: 0.869 - Precision: 0.698 - Recall: 0.612 - AUC: 0.856 - F1: 0.652 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-user-1860163583 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163583", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-user-1860163583", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-performance-1860063570
Ahmed-Abousetta
2022-10-24T09:09:57Z
2
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-performance", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:09:08Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-performance co2_eq_emissions: emissions: 0.9744207053095343 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860063570 - CO2 Emissions (in grams): 0.9744 ## Validation Metrics - Loss: 0.435 - Accuracy: 0.824 - Precision: 0.853 - Recall: 0.775 - AUC: 0.885 - F1: 0.812 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-performance-1860063570 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-performance-1860063570", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-performance-1860063570", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-performance-1860063569
Ahmed-Abousetta
2022-10-24T09:09:47Z
4
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-performance", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:09:03Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-performance co2_eq_emissions: emissions: 0.6232110285492835 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860063569 - CO2 Emissions (in grams): 0.6232 ## Validation Metrics - Loss: 0.430 - Accuracy: 0.841 - Precision: 0.846 - Recall: 0.825 - AUC: 0.873 - F1: 0.835 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-performance-1860063569 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-performance-1860063569", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-performance-1860063569", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-performance-1860063568
Ahmed-Abousetta
2022-10-24T09:09:44Z
2
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-performance", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:09:00Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-performance co2_eq_emissions: emissions: 1.0292657249217085 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1860063568 - CO2 Emissions (in grams): 1.0293 ## Validation Metrics - Loss: 0.453 - Accuracy: 0.812 - Precision: 0.836 - Recall: 0.767 - AUC: 0.860 - F1: 0.800 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-performance-1860063568 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-performance-1860063568", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-performance-1860063568", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567
Ahmed-Abousetta
2022-10-24T09:06:18Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-interaction", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:05:25Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-interaction co2_eq_emissions: emissions: 1.0555869183889894 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859963567 - CO2 Emissions (in grams): 1.0556 ## Validation Metrics - Loss: 0.263 - Accuracy: 0.910 - Precision: 0.945 - Recall: 0.923 - AUC: 0.945 - F1: 0.934 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963567", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565
Ahmed-Abousetta
2022-10-24T09:06:06Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-interaction", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:05:13Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-interaction co2_eq_emissions: emissions: 0.6502317465394943 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859963565 - CO2 Emissions (in grams): 0.6502 ## Validation Metrics - Loss: 0.241 - Accuracy: 0.922 - Precision: 0.936 - Recall: 0.953 - AUC: 0.951 - F1: 0.944 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-interaction-1859963565", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-information-1859863561
Ahmed-Abousetta
2022-10-24T09:02:06Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-information", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:01:02Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-information co2_eq_emissions: emissions: 1.5884381963682959 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859863561 - CO2 Emissions (in grams): 1.5884 ## Validation Metrics - Loss: 0.338 - Accuracy: 0.869 - Precision: 0.836 - Recall: 0.868 - AUC: 0.932 - F1: 0.852 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-information-1859863561 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863561", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863561", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-information-1859863560
Ahmed-Abousetta
2022-10-24T09:01:53Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-information", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T09:00:57Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-information co2_eq_emissions: emissions: 1.8754846173690543 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859863560 - CO2 Emissions (in grams): 1.8755 ## Validation Metrics - Loss: 0.331 - Accuracy: 0.878 - Precision: 0.852 - Recall: 0.868 - AUC: 0.927 - F1: 0.860 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-information-1859863560 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863560", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-information-1859863560", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563554
Ahmed-Abousetta
2022-10-24T08:55:55Z
2
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition-auto", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T08:54:38Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-cognition-auto co2_eq_emissions: emissions: 1.1747519267416993 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859563554 - CO2 Emissions (in grams): 1.1748 ## Validation Metrics - Loss: 0.455 - Accuracy: 0.813 - Precision: 0.722 - Recall: 0.892 - AUC: 0.872 - F1: 0.798 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563554 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563554", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-auto-1859563554", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551
Ahmed-Abousetta
2022-10-24T08:46:21Z
4
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T08:45:09Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-cognition co2_eq_emissions: emissions: 1.7828199447393138 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859363551 - CO2 Emissions (in grams): 1.7828 ## Validation Metrics - Loss: 0.372 - Accuracy: 0.858 - Precision: 0.796 - Recall: 0.882 - AUC: 0.919 - F1: 0.837 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363551", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363552
Ahmed-Abousetta
2022-10-24T08:46:12Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T08:45:15Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-cognition co2_eq_emissions: emissions: 1.1831906042914635 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859363552 - CO2 Emissions (in grams): 1.1832 ## Validation Metrics - Loss: 0.369 - Accuracy: 0.854 - Precision: 0.811 - Recall: 0.843 - AUC: 0.912 - F1: 0.827 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363552 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363552", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363552", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363549
Ahmed-Abousetta
2022-10-24T08:45:44Z
1
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "unk", "dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T08:44:55Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ahmed-Abousetta/autotrain-data-abunawaf-cognition co2_eq_emissions: emissions: 1.0566666951225436 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1859363549 - CO2 Emissions (in grams): 1.0567 ## Validation Metrics - Loss: 0.385 - Accuracy: 0.854 - Precision: 0.795 - Recall: 0.873 - AUC: 0.900 - F1: 0.832 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363549 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363549", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363549", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
kimsiun/kaers-bert
kimsiun
2022-10-24T08:18:46Z
0
0
null
[ "pytorch", "license:mit", "region:us" ]
null
2022-10-24T07:57:35Z
--- license: mit --- # KAERS-BERT - KoBERT + KAERS BERT Model The Publicly Available KAERS BERT Embeddings paper presents the KAERS-BERT model: initialized from KoBERT (skt/kobert-base-v1) and further pretrained on adverse event (ADE) narratives reported through KAERS (Korean Adverse Event Reporting System). This model card describes the KAERS-BERT model. ## Pretraining Data The KAERS-BERT model was trained on 1.2 million ADE narratives reported through KAERS between January 1, 2015 and December 31, 2019. The ADE narratives used for pretraining were mainly written in Korean. ## Model Pretraining ### Note on Preprocessing We only used ADE narratives reported as 'disease history in detail', 'adverse event in detail', and 'laboratory test in detail' for model pretraining, because ADE narratives of '(original) reporter's opinion' were highly redundant.
teacookies/autotrain-24102022-cert5-1858763528
teacookies
2022-10-24T08:02:36Z
13
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-24102022-cert5", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T07:53:17Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-24102022-cert5 co2_eq_emissions: emissions: 15.97111881210848 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1858763528 - CO2 Emissions (in grams): 15.9711 ## Validation Metrics - Loss: 0.003 - Accuracy: 0.999 - Precision: 0.961 - Recall: 0.970 - F1: 0.966 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert5-1858763528 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert5-1858763528", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert5-1858763528", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
haoanh98/phoGPT_base
haoanh98
2022-10-24T06:54:16Z
3
0
transformers
[ "transformers", "tf", "gpt2", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2022-10-24T06:51:43Z
--- tags: - generated_from_keras_callback model-index: - name: phoGPT_base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # phoGPT_base This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Tokenizers 0.13.1
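The card does not yet document usage; as a rough sketch (assuming the repository ships TensorFlow GPT-2 weights and a matching tokenizer — neither is confirmed above), the checkpoint could be loaded for feature extraction like this:
```python
from transformers import AutoTokenizer, TFGPT2Model

# both calls assume the repo contains a tokenizer and TF weights (hypothetical setup)
tokenizer = AutoTokenizer.from_pretrained("haoanh98/phoGPT_base")
model = TFGPT2Model.from_pretrained("haoanh98/phoGPT_base")

inputs = tokenizer("Xin chào", return_tensors="tf")  # illustrative prompt
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```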
haoanh98/mGPT_base
haoanh98
2022-10-24T06:35:40Z
3
0
transformers
[ "transformers", "tf", "gpt2", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2022-10-24T06:01:37Z
--- tags: - generated_from_keras_callback model-index: - name: mGPT_base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mGPT_base This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Tokenizers 0.13.1
thisisHJLee/wav2vec2-large-xls-r-300m-korean-ws1
thisisHJLee
2022-10-24T06:17:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-24T01:36:26Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-korean-ws1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-korean-ws1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0431 - Cer: 0.0047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.8176 | 1.0 | 4451 | 0.7022 | 0.2494 | | 0.3505 | 2.0 | 8902 | 0.1369 | 0.0303 | | 0.1696 | 3.0 | 13353 | 0.0431 | 0.0047 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
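The card does not include an inference example; a minimal sketch using the standard 🤗 Transformers ASR pipeline (the audio file name is illustrative; recordings should be 16 kHz mono Korean speech):
```python
from transformers import pipeline

# load the fine-tuned checkpoint into the automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="thisisHJLee/wav2vec2-large-xls-r-300m-korean-ws1")

# transcribe a local audio file (illustrative path)
print(asr("sample_korean.wav")["text"])
```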
teacookies/autotrain-24102022-cert4-1858363508
teacookies
2022-10-24T06:11:13Z
17
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "unk", "dataset:teacookies/autotrain-data-24102022-cert4", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-24T05:59:47Z
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - teacookies/autotrain-data-24102022-cert4 co2_eq_emissions: emissions: 19.82493725454133 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1858363508 - CO2 Emissions (in grams): 19.8249 ## Validation Metrics - Loss: 0.003 - Accuracy: 0.999 - Precision: 0.963 - Recall: 0.971 - F1: 0.967 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert4-1858363508 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert4-1858363508", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert4-1858363508", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
kem000123/autotrain-cat_vs_dogs-1858163503
kem000123
2022-10-24T05:44:23Z
37
2
transformers
[ "transformers", "pytorch", "autotrain", "vision", "image-classification", "dataset:kem000123/autotrain-data-cat_vs_dogs", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
image-classification
2022-10-24T05:43:29Z
--- tags: - autotrain - vision - image-classification datasets: - kem000123/autotrain-data-cat_vs_dogs widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.7950743476524714 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1858163503 - CO2 Emissions (in grams): 0.7951 ## Validation Metrics - Loss: 0.007 - Accuracy: 1.000 - Precision: 1.000 - Recall: 1.000 - AUC: 1.000 - F1: 1.000
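The card lists validation metrics but no usage snippet; a minimal sketch with the 🤗 Transformers image-classification pipeline (the image path is illustrative):
```python
from transformers import pipeline

# load the fine-tuned binary cat-vs-dog classifier
classifier = pipeline("image-classification", model="kem000123/autotrain-cat_vs_dogs-1858163503")

# classify a local image or a URL (illustrative path)
print(classifier("my_pet.jpg"))
```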
weicap/Comentarios_AgresivosNoAgresivos
weicap
2022-10-24T04:00:19Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T03:18:02Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: Comentarios_AgresivosNoAgresivos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Comentarios_AgresivosNoAgresivos This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4584 - Accuracy: 0.8162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6215 | 1.0 | 154 | 0.5717 | 0.7299 | | 0.5075 | 2.0 | 308 | 0.4193 | 0.8248 | | 0.2436 | 3.0 | 462 | 0.4037 | 0.8540 | | 0.0571 | 4.0 | 616 | 0.6594 | 0.8467 | | 0.0242 | 5.0 | 770 | 1.0059 | 0.8029 | | 0.0497 | 6.0 | 924 | 0.8195 | 0.8394 | | 0.0005 | 7.0 | 1078 | 0.9234 | 0.8394 | | 0.0528 | 8.0 | 1232 | 0.8894 | 0.8394 | | 0.0003 | 9.0 | 1386 | 0.9285 | 0.8321 | | 0.0003 | 10.0 | 1540 | 0.9749 | 0.8321 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
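The card does not show how to run inference; a minimal sketch with the text-classification pipeline (assuming the repository also contains its tokenizer; the example comment is illustrative):
```python
from transformers import pipeline

# load the fine-tuned aggressive/non-aggressive comment classifier
clf = pipeline("text-classification", model="weicap/Comentarios_AgresivosNoAgresivos")

# score an illustrative Spanish comment
print(clf("Este comentario es un ejemplo."))
```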
0xrushi/TestPlaygroundSkops
0xrushi
2022-10-24T03:48:58Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-16T01:13:19Z
--- license: mit --- # Model description 1 [More Information Needed] ## Intended uses & limitations [More Information Needed] ## Training Procedure ### Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameter | Value | |-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | memory | | | steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer',<br /> SimpleImputer(), ['loading']),<br /> ('numerical_missing_value_imputer',<br /> SimpleImputer(),<br /> ['loading', 'measurement_3', 'measurement_4',<br /> 'measurement_5', 'measurement_6',<br /> 'measurement_7', 'measurement_8',<br /> 'measurement_9', 'measurement_10',<br /> 'measurement_11', 'measurement_12',<br /> 'measurement_13', 'measurement_14',<br /> 'measurement_15', 'measurement_16',<br /> 'measurement_17']),<br /> ('attribute_0_encoder', OneHotEncoder(),<br /> ['attribute_0']),<br /> ('attribute_1_encoder', OneHotEncoder(),<br /> ['attribute_1']),<br /> ('product_code_encoder', OneHotEncoder(),<br /> ['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] | | verbose | False | | transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer',<br /> SimpleImputer(), ['loading']),<br /> ('numerical_missing_value_imputer',<br /> SimpleImputer(),<br /> ['loading', 'measurement_3', 'measurement_4',<br /> 'measurement_5', 'measurement_6',<br /> 'measurement_7', 'measurement_8',<br /> 'measurement_9', 'measurement_10',<br /> 'measurement_11', 'measurement_12',<br /> 'measurement_13', 'measurement_14',<br /> 'measurement_15', 'measurement_16',<br /> 'measurement_17']),<br /> ('attribute_0_encoder', OneHotEncoder(),<br /> ['attribute_0']),<br /> ('attribute_1_encoder', OneHotEncoder(),<br /> ['attribute_1']),<br /> ('product_code_encoder', OneHotEncoder(),<br /> ['product_code'])]) | | model | DecisionTreeClassifier(max_depth=4) | | transformation__n_jobs | | | transformation__remainder | drop | | transformation__sparse_threshold | 0.3 | | transformation__transformer_weights | | | transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 
'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])] | | transformation__verbose | False | | transformation__verbose_feature_names_out | True | | transformation__loading_missing_value_imputer | SimpleImputer() | | transformation__numerical_missing_value_imputer | SimpleImputer() | | transformation__attribute_0_encoder | OneHotEncoder() | | transformation__attribute_1_encoder | OneHotEncoder() | | transformation__product_code_encoder | OneHotEncoder() | | transformation__loading_missing_value_imputer__add_indicator | False | | transformation__loading_missing_value_imputer__copy | True | | transformation__loading_missing_value_imputer__fill_value | | | transformation__loading_missing_value_imputer__missing_values | nan | | transformation__loading_missing_value_imputer__strategy | mean | | transformation__loading_missing_value_imputer__verbose | 0 | | transformation__numerical_missing_value_imputer__add_indicator | False | | transformation__numerical_missing_value_imputer__copy | True | | transformation__numerical_missing_value_imputer__fill_value | | | transformation__numerical_missing_value_imputer__missing_values | nan | | transformation__numerical_missing_value_imputer__strategy | mean | | transformation__numerical_missing_value_imputer__verbose | 0 | | transformation__attribute_0_encoder__categories | auto | | transformation__attribute_0_encoder__drop | | | transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> | | transformation__attribute_0_encoder__handle_unknown | error | | transformation__attribute_0_encoder__sparse | True | | transformation__attribute_1_encoder__categories | auto | | transformation__attribute_1_encoder__drop | | | transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> | | transformation__attribute_1_encoder__handle_unknown | error | | transformation__attribute_1_encoder__sparse | True | | transformation__product_code_encoder__categories | auto | | transformation__product_code_encoder__drop | | | transformation__product_code_encoder__dtype | <class 'numpy.float64'> | | transformation__product_code_encoder__handle_unknown | error | | transformation__product_code_encoder__sparse | True | | model__ccp_alpha | 0.0 | | model__class_weight | | | model__criterion | gini | | model__max_depth | 4 | | model__max_features | | | model__max_leaf_nodes | | | model__min_impurity_decrease | 0.0 | | model__min_samples_leaf | 1 | | model__min_samples_split | 2 | | model__min_weight_fraction_leaf | 0.0 | | model__random_state | | | model__splitter | best | </details> ### Model Plot The model plot is below. 
(Interactive scikit-learn HTML diagram omitted; it rendered the pipeline described in the hyperparameter table above: a ColumnTransformer with SimpleImputer and OneHotEncoder steps followed by DecisionTreeClassifier(max_depth=4).)

## Evaluation Results

You can find the details about evaluation process and the evaluation results.

| Metric   | Value   |
|----------|---------|

# How to Get Started with the Model

Use the code below to get started with the model.

```python
[More Information Needed]
```

# Model Card Authors

This model card is written by following authors:

[More Information Needed]

# Model Card Contact

You can contact the model card authors through following channels:

[More Information Needed]

# Citation

Below you can find information related to citation.

**BibTeX:**

```
# h1 tjos osmda
```

# Model 2 Description (Logistic)

--- license: mit ---

# Model description

[More Information Needed]

## Intended uses & limitations

[More Information Needed]

## Training Procedure

### Hyperparameters

The model is trained with below hyperparameters.

<details>
<summary> Click to expand </summary>

| Hyperparameter    | Value     |
|-------------------|-----------|
| C                 | 1.0       |
| class_weight      |           |
| dual              | False     |
| fit_intercept     | True      |
| intercept_scaling | 1         |
| l1_ratio          |           |
| max_iter          | 100       |
| multi_class       | auto      |
| n_jobs            |           |
| penalty           | l2        |
| random_state      | 0         |
| solver            | liblinear |
| tol               | 0.0001    |
| verbose           | 0         |
| warm_start        | False     |

</details>

### Model Plot

The model plot is below.

(Interactive scikit-learn HTML diagram omitted; it rendered LogisticRegression(random_state=0, solver='liblinear').)

## Evaluation Results

You can find the details about evaluation process and the evaluation results.
| Metric | Value | |----------|---------| | accuracy | 0.96 | | f1 score | 0.96 | # How to Get Started with the Model Use the code below to get started with the model. ```python [More Information Needed] ``` # Model Card Authors This model card is written by following authors: [More Information Needed] # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ``` # Additional Content ## confusion_matrix ![confusion_matrix](confusion_matrix.png)
salascorp/distilroberta-base-mrpc-glue-oscar-salas7
salascorp
2022-10-24T02:49:36Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T01:55:00Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer metrics: - accuracy model-index: - name: distilroberta-base-mrpc-glue-oscar-salas7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glue-oscar-salas7 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 1.7444 - Accuracy: 0.2143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cpu - Datasets 2.6.1 - Tokenizers 0.13.1
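The card lists hyperparameters but no usage example; a minimal sketch, assuming the checkpoint exposes a standard sequence-classification head (the input sentence is illustrative, not from the original card):

```python
# Minimal sketch (assumed usage): load the fine-tuned DistilRoBERTa classifier
# and score a single input text.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "salascorp/distilroberta-base-mrpc-glue-oscar-salas7"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("This is an example sentence.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```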
declare-lab/dialect
declare-lab
2022-10-24T02:32:35Z
8
6
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2210.02890", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-24T02:28:19Z
--- license: mit widget: - text: "What is or could be the cause of target? <sep> target: Thanks. Will I be able to take a retest ? <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . " example_title: "Cause 1" - text: "What is or could be the cause of target? <sep> target: But she did and made me disappointed . <sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That's a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it " example_title: "Cause 2" - text: "What subsequent event happens or could happen following the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it " example_title: "Subsequent Event 1" - text: "What subsequent event happens or could happen following the target? <sep> target: Sure you can , in about two and a half weeks . <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . " example_title: "Subsequent Event 2" - text: "What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it " example_title: "Emotional Reaction" - text: "What is or could be the motivation of target? <sep> target: Sure you can , in about two and a half weeks . 
<sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . " example_title: "Motivation" --- ## DIALogue-level Commonsense Transformer (DIALeCT) The pretrained checkpoint for the paper [Multiview Contextual Commonsense Inference: A New Dataset and Task](https://arxiv.org/abs/2210.02890). The model is trained based on the [T5-large](https://huggingface.co/t5-large) checkpoint. ![model image](https://drive.google.com/uc?export=download&id=14RIbxgXhREdu5xZiKn5D-UUzaQLDNLqf) ## Datasets The dataset used to pretrain the model can be obtained from the [CICERO repo](https://github.com/declare-lab/CICERO) following instructions. The Contextualized Commonsense Inference in Dialogues v2 (CICEROv2) consists of annotated commonsense inferences including cause and emotional reaction, etc. The dialogues are from multiple datasets. | Dataset | #Dialogues| #Instances| | -------- | ----- | --------- | | DailyDialog| 1118| 3973| | MuTual| 1011 | 3384| | Dream| 250 | 994| ### Examples Some examples of generated results from the pretrained model (the zero-shot setting). **Subsequent Event** ``` What is or could be the subsequent event of the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it ``` Predicted subsequent event: ``` David's girlfriend apologized to david for her mistake. ``` **Cause** ``` What is or could be the cause of target? <sep> target: Thanks. Will I be able to take a retest ? <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . ``` Predicted cause: ``` The speaker has failed the driving test. ``` **Emotional Reaction** ``` What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . 
A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it ``` Predicted emotional reaction: ``` The listener is hopeful that david will forgive his girlfriend for her mistake. ``` ## Inference: The input text should be formatted as follows: ``` Question <sep> target: target_utt <sep> context: A: utterance 1 <utt> B: utterance 2 <utt> A: utterance 3 <utt> B: utterance 4 ``` Question: The question against which we want to make the inference. A and B are speaker identifiers. The ```target_utt``` should be any one of ```utterance 1, utterance 2, utterance 3, or utterance 4```. Do not use the speaker identifier in the ```target_utt```. Some samples are provided in the Hosted inference API box examples. ## BibTeX entry and citation info If you use the model, you can cite: ```bibtex @article{Shen2022MultiviewCC, title={Multiview Contextual Commonsense Inference: A New Dataset and Task}, author={Siqi Shen and Deepanway Ghosal and Navonil Majumder and Henry Lim and Rada Mihalcea and Soujanya Poria}, journal={ArXiv}, year={2022}, volume={abs/2210.02890} } ```
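A minimal generation sketch following the input format described above (the decoding settings and the shortened context string are assumptions, not from the original card):

```python
# Minimal sketch (assumed usage): run the DIALeCT checkpoint as a standard
# T5 text2text model on a formatted question/target/context string.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "declare-lab/dialect"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = (
    "What is or could be the cause of target? <sep> "
    "target: Thanks. Will I be able to take a retest ? <sep> "
    "context: A: Did I do well on my test ?, <utt> B: You failed ., "
    "<utt> A: Thanks. Will I be able to take a retest ?"
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```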
BigDL/FSPBT
BigDL
2022-10-24T02:30:51Z
0
0
PyTorch Lightning
[ "PyTorch Lightning", "Image Translation", "license:mit", "region:us" ]
null
2022-06-24T04:05:06Z
--- license: mit library_name: PyTorch Lightning tags: - Image Translation --- ## Model Details This model is from [FSPBT-Image-Translation](https://github.com/rnwzd/FSPBT-Image-Translation) ## Citation Information ```bibtex @Article{Texler20-SIG, author = "Ond\v{r}ej Texler and David Futschik and Michal Ku\v{c}era and Ond\v{r}ej Jamri\v{s}ka and \v{S}\'{a}rka Sochorov\'{a} and Menglei Chai and Sergey Tulyakov and Daniel S\'{y}kora", title = "Interactive Video Stylization Using Few-Shot Patch-Based Training", journal = "ACM Transactions on Graphics", volume = "39", number = "4", pages = "73", year = "2020", } ```
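The card links to the original repository but gives no loading code; a minimal sketch, assuming you first fetch the checkpoint files from the Hub and then load them with the FSPBT-Image-Translation scripts linked above:

```python
# Minimal sketch (assumption): download the checkpoint files, then load them
# with the code from the FSPBT-Image-Translation repository linked above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="BigDL/FSPBT")
print("Checkpoint files downloaded to:", local_dir)
```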
TTian/bert-finetuned-feedback-classifier
TTian
2022-10-24T02:19:29Z
3
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-24T02:19:20Z
--- tags: - generated_from_keras_callback model-index: - name: bert-finetuned-feedback-classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-feedback-classifier This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8251 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.8251 | 0 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
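The card omits a usage example; a minimal sketch, assuming the TensorFlow checkpoint exposes a standard sequence-classification head (the example sentence is illustrative, not from the original card):

```python
# Minimal sketch (assumed usage): score one piece of feedback text with the
# fine-tuned TensorFlow BERT classifier.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "TTian/bert-finetuned-feedback-classifier"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The lecture was clear and well paced.", return_tensors="tf")
print(tf.nn.softmax(model(**inputs).logits, axis=-1))
```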
nickmuchi/setfit-finetuned-financial-text-classification
nickmuchi
2022-10-24T00:16:02Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-23T18:35:23Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # setfit-finetuned-financial-text-classification This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('nickmuchi/setfit-finetuned-financial-text-classification') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 188 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 5.610085660083046e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 188, "warmup_steps": 19, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
theojolliffe/bart-large-cnn-finetuned-roundup
theojolliffe
2022-10-23T23:51:01Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-23T15:16:53Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8956 - Rouge1: 58.1914 - Rouge2: 45.822 - Rougel: 49.4407 - Rougelsum: 56.6379 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.2575 | 1.0 | 795 | 0.9154 | 53.8792 | 34.3203 | 35.8768 | 51.1789 | 142.0 | | 0.7053 | 2.0 | 1590 | 0.7921 | 54.3918 | 35.3346 | 37.7539 | 51.6989 | 142.0 | | 0.5379 | 3.0 | 2385 | 0.7566 | 52.1651 | 32.5699 | 36.3105 | 49.3327 | 141.5185 | | 0.3496 | 4.0 | 3180 | 0.7584 | 54.3258 | 36.403 | 39.6938 | 52.0186 | 142.0 | | 0.2688 | 5.0 | 3975 | 0.7343 | 55.9101 | 39.0709 | 42.4138 | 53.572 | 141.8333 | | 0.1815 | 6.0 | 4770 | 0.7924 | 53.9272 | 36.8138 | 40.0614 | 51.7496 | 142.0 | | 0.1388 | 7.0 | 5565 | 0.7674 | 55.0347 | 38.7978 | 42.0081 | 53.0297 | 142.0 | | 0.1048 | 8.0 | 6360 | 0.7700 | 55.2993 | 39.4075 | 42.6837 | 53.5179 | 141.9815 | | 0.0808 | 9.0 | 7155 | 0.7796 | 56.1508 | 40.0863 | 43.2178 | 53.7908 | 142.0 | | 0.0719 | 10.0 | 7950 | 0.8057 | 56.2302 | 41.3004 | 44.7921 | 54.4304 | 142.0 | | 0.0503 | 11.0 | 8745 | 0.8259 | 55.7603 | 41.0643 | 44.5518 | 54.2305 | 142.0 | | 0.0362 | 12.0 | 9540 | 0.8604 | 55.8612 | 41.5984 | 44.444 | 54.2493 | 142.0 | | 0.0307 | 13.0 | 10335 | 0.8516 | 57.7259 | 44.542 | 47.6724 | 56.0166 | 142.0 | | 0.0241 | 14.0 | 11130 | 0.8826 | 56.7943 | 43.7139 | 47.2866 | 55.1824 | 142.0 | | 0.0193 | 15.0 | 11925 | 0.8856 | 57.4135 | 44.3147 | 47.9136 | 55.8843 | 142.0 | | 0.0154 | 16.0 | 12720 | 0.8956 | 58.1914 | 45.822 | 49.4407 | 56.6379 | 142.0 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
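No usage snippet accompanies the metrics; a minimal sketch, assuming the fine-tuned BART checkpoint works with the standard summarization pipeline (the length settings and placeholder text are assumptions, not from the original card):

```python
# Minimal sketch (assumed usage): summarise a round-up article with the
# fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-finetuned-roundup",
)

article = "Replace this with the round-up text to be summarised."
print(summarizer(article, max_length=142, min_length=30, do_sample=False))
```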
huggingtweets/16pxl
huggingtweets
2022-10-23T23:23:51Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-23T23:21:33Z
--- language: en thumbnail: http://www.huggingtweets.com/16pxl/1666567427101/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1358468632255156224/JtUkil_x_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jubilee ❣️ 2023 CALENDARS OUT NOW</div> <div style="text-align: center; font-size: 14px;">@16pxl</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jubilee ❣️ 2023 CALENDARS OUT NOW. | Data | Jubilee ❣️ 2023 CALENDARS OUT NOW | | --- | --- | | Tweets downloaded | 3229 | | Retweets | 288 | | Short tweets | 228 | | Tweets kept | 2713 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r6vcjy6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @16pxl's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wix5go1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wix5go1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/16pxl') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
vumichien/trillsson3-ft-keyword-spotting-12
vumichien
2022-10-23T23:09:08Z
21
1
transformers
[ "transformers", "pytorch", "trillsson_efficient", "text-classification", "audio-classification", "generated_from_trainer", "dataset:superb", "autotrain_compatible", "endpoints_compatible", "region:us" ]
audio-classification
2022-10-23T07:15:01Z
--- tags: - audio-classification - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: trillsson3-ft-keyword-spotting-12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trillsson3-ft-keyword-spotting-12 This model is a fine-tuned version of [vumichien/nonsemantic-speech-trillsson3](https://huggingface.co/vumichien/nonsemantic-speech-trillsson3) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.3015 - Accuracy: 0.9150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 64 - seed: 0 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2824 | 1.0 | 1597 | 0.7818 | 0.6892 | | 0.8003 | 2.0 | 3194 | 0.4443 | 0.8735 | | 0.7232 | 3.0 | 4791 | 0.3728 | 0.8833 | | 0.73 | 4.0 | 6388 | 0.3465 | 0.8973 | | 0.7015 | 5.0 | 7985 | 0.3211 | 0.9109 | | 0.6981 | 6.0 | 9582 | 0.3200 | 0.9081 | | 0.6807 | 7.0 | 11179 | 0.3209 | 0.9059 | | 0.6873 | 8.0 | 12776 | 0.3206 | 0.9022 | | 0.6416 | 9.0 | 14373 | 0.3124 | 0.9057 | | 0.6698 | 10.0 | 15970 | 0.3288 | 0.8950 | | 0.716 | 11.0 | 17567 | 0.3147 | 0.8998 | | 0.6514 | 12.0 | 19164 | 0.3034 | 0.9112 | | 0.6513 | 13.0 | 20761 | 0.3091 | 0.9092 | | 0.652 | 14.0 | 22358 | 0.3056 | 0.9100 | | 0.7105 | 15.0 | 23955 | 0.3015 | 0.9150 | | 0.6337 | 16.0 | 25552 | 0.3070 | 0.9091 | | 0.63 | 17.0 | 27149 | 0.3018 | 0.9135 | | 0.6672 | 18.0 | 28746 | 0.3084 | 0.9088 | | 0.6479 | 19.0 | 30343 | 0.3060 | 0.9101 | | 0.6658 | 20.0 | 31940 | 0.3072 | 0.9089 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
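The card reports keyword-spotting accuracy but no inference code; a minimal sketch, assuming the custom trillsson_efficient architecture loads through the audio-classification pipeline (the trust_remote_code flag and the audio file path are assumptions, not from the original card):

```python
# Minimal sketch (assumed usage): classify a short speech clip into one of the
# keyword classes.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="vumichien/trillsson3-ft-keyword-spotting-12",
    trust_remote_code=True,  # assumption: the repo may ship custom model code
)
print(classifier("path/to/keyword_clip.wav"))
```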
salascorp/distilroberta-base-mrpc-glue-oscar-salas3
salascorp
2022-10-23T22:20:24Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-23T22:08:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-mrpc-glue-oscar-salas3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glue-oscar-salas3 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cpu - Datasets 2.6.1 - Tokenizers 0.13.1
rufimelo/Legal-BERTimbau-large
rufimelo
2022-10-23T22:05:10Z
61
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "pt", "dataset:rufimelo/PortugueseLegalSentences-v0", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-24T22:29:50Z
--- language: - pt thumbnail: "Portugues BERT for the Legal Domain" tags: - bert - pytorch datasets: - rufimelo/PortugueseLegalSentences-v0 license: "mit" widget: - text: "O advogado apresentou [MASK] ao juíz." --- # Legal_BERTimbau ## Introduction Legal_BERTimbau Large is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large. "BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)." The performance of Language Models can change drastically when there is a domain shift between training and test data. In order to create a Portuguese language model adapted to the legal domain, the original BERTimbau model was submitted to a fine-tuning stage in which one "pre-training" epoch (lr: 1e-5) was performed over 30,000 Portuguese legal documents available online. ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | |`rufimelo/Legal-BERTimbau-base` |BERT-Base |12 |110M| | `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large") model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large") ``` ### Masked language modeling prediction example ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large") model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large") pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer) pipe('O advogado apresentou [MASK] para o juíz') # [{'score': 0.5034703612327576, #'token': 8190, #'token_str': 'recurso', #'sequence': 'O advogado apresentou recurso para o juíz'}, #{'score': 0.07347951829433441, #'token': 21973, #'token_str': 'petição', #'sequence': 'O advogado apresentou petição para o juíz'}, #{'score': 0.05165359005331993, #'token': 4299, #'token_str': 'resposta', #'sequence': 'O advogado apresentou resposta para o juíz'}, #{'score': 0.04611917585134506, #'token': 5265, #'token_str': 'exposição', #'sequence': 'O advogado apresentou exposição para o juíz'}, #{'score': 0.04068068787455559, #'token': 19737, 'token_str': #'alegações', #'sequence': 'O advogado apresentou alegações para o juíz'}] ``` ### For BERT embeddings ```python import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-large') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-large') input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt') with torch.no_grad(): outs = model(input_ids) encoded = outs[0][0, 1:-1] #tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157], #[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310], #[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050], #..., #[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264], #[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509], #[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]]) ``` ## Citation If you use this work, please cite BERTimbau's work: ```bibtex
## Citation

If you use this work, please cite BERTimbau's work:

```bibtex
@inproceedings{souza2020bertimbau,
  author    = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo},
  title     = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
  booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
  year      = {2020}
}
```
ViktorDo/SciBERT-POWO_Life_Form_Finetuned
ViktorDo
2022-10-23T20:18:38Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-23T19:38:17Z
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Life_Form_Finetuned
  results: []
---

<!-- This model card has been generated automatically according to the
information the Trainer had access to. You should probably proofread and
complete it, then remove this comment. -->

# SciBERT-POWO_Life_Form_Finetuned

This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4135

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4591        | 1.0   | 1004 | 0.4599          |
| 0.375         | 2.0   | 2008 | 0.4093          |
| 0.3167        | 3.0   | 3012 | 0.4135          |

### Framework versions

- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
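### Training setup sketch

For reference, the hyperparameters listed above correspond roughly to the `TrainingArguments` sketched below. This is an illustrative reconstruction, not the original training script: the POWO life-form dataset, its label set and the preprocessing are not documented in this card, so the data loading is omitted and `num_labels` is a placeholder.

```python
# Sketch only: mirrors the reported hyperparameters; the dataset and labels are undocumented.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased",
    num_labels=2,  # assumption: the life-form label set is not given in the card
)

args = TrainingArguments(
    output_dir="SciBERT-POWO_Life_Form_Finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    fp16=True,                    # "Native AMP" mixed precision
    lr_scheduler_type="linear",   # Adam with the defaults listed above
    evaluation_strategy="epoch",  # assumption: the card reports per-epoch validation loss
)

# train_ds / eval_ds would be tokenized, labelled plant descriptions (not shown here)
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```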
ViktorDo/SciBERT-POWO_Growth_Form_Finetuned
ViktorDo
2022-10-23T19:23:01Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-23T17:45:10Z
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Growth_Form_Finetuned
  results: []
---

<!-- This model card has been generated automatically according to the
information the Trainer had access to. You should probably proofread and
complete it, then remove this comment. -->

# SciBERT-POWO_Growth_Form_Finetuned

This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2566

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2707        | 1.0   | 2160 | 0.2636          |
| 0.2385        | 2.0   | 4320 | 0.2418          |
| 0.2086        | 3.0   | 6480 | 0.2566          |

### Framework versions

- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
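### Inference sketch

Since the card does not include usage code, a minimal inference sketch is shown below. The mapping from label ids to growth-form classes is not documented here, so predictions will appear as generic `LABEL_*` ids; the example sentence is only illustrative.

```python
# Sketch only: the label ids correspond to growth-form classes, but the mapping is undocumented.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ViktorDo/SciBERT-POWO_Growth_Form_Finetuned",
)

print(classifier("Erect perennial herb up to 1 m tall, with a woody rootstock."))
# e.g. [{'label': 'LABEL_0', 'score': ...}]  # actual class names depend on the undocumented mapping
```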