modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
izumi-lab/electra-small-paper-japanese-fin-discriminator | a3fd6c9d28052f5b414fee9e92bae7810fdabcf5 | 2022-03-19T09:40:17.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:wikipedia",
"dataset:securities reports",
"dataset:summaries of financial results",
"arxiv:2003.10555",
"transformers",
"finance",
"license:cc-by-sa-4.0"
] | null | false | izumi-lab | null | izumi-lab/electra-small-paper-japanese-fin-discriminator | 1 | null | transformers | 29,700 | ---
language: ja
license: cc-by-sa-4.0
tags:
- finance
datasets:
- wikipedia
- securities reports
- summaries of financial results
widget:
- text: 流動[MASK]は1億円となりました。
---
# ELECTRA small Japanese finance discriminator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on Japanese-language texts.
The code for pretraining is available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555): 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia and a Japanese financial corpus.
The Wikipedia corpus is generated from the Wikipedia dump file as of June 1, 2021; the corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of two corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
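As a hedged illustration (not part of the original card), the checkpoint is expected to load with the standard `transformers` auto classes; the snippet below only shows loading the discriminator and tokenizing one sentence, and assumes the hosted tokenizer bundles the MeCab/WordPiece configuration described above (MeCab-based Japanese tokenizers typically also need the `fugashi` and `ipadic` packages installed).
```python
from transformers import AutoTokenizer, AutoModelForPreTraining

model_id = "izumi-lab/electra-small-paper-japanese-fin-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForPreTraining.from_pretrained(model_id)

# ELECTRA discriminator: one replaced-token-detection logit per input token.
inputs = tokenizer("流動資産は1億円となりました。", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch_size, sequence_length)
```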
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555): 128 tokens per instance, 128 instances per batch, and 1M training steps.
## Citation
**Another paper dedicated to this pretrained model is forthcoming. Be sure to check back here before citing.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
  % booktitle={Proceedings of JSAI Special Interest Group on Financial Informatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
jack-oh/KoGPT2_finetuned_wellness | fc482badba8f6ebbd85284d9aa3d53c2962e42a1 | 2021-07-05T02:45:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jack-oh | null | jack-oh/KoGPT2_finetuned_wellness | 1 | null | transformers | 29,701 | A model obtained by fine-tuning skt/kogpt2-base-v2 on wellness and everyday chatbot data. |
jackky46/DialoGPT-medium-got | b70bb379e77365af02291e662c88cb9c362f0253 | 2022-02-15T06:18:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jackky46 | null | jackky46/DialoGPT-medium-got | 1 | null | transformers | 29,702 | ---
tags:
- conversational
---
# Jon Snow DialoGPT Model |
jacksee/biochem-model-first | 7892e69f326b2fc3bff7ac0a2123bd2f6af0c47e | 2021-10-29T06:21:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jacksee | null | jacksee/biochem-model-first | 1 | null | transformers | 29,703 | Entry not found |
jacksee/biochem-model-firstv2 | ebb7f5775f7c47861afa7add85089df06273af72 | 2021-10-29T06:50:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jacksee | null | jacksee/biochem-model-firstv2 | 1 | null | transformers | 29,704 | Entry not found |
jaeyoung/klue-roberta-large-wiki-mlm | c08cfcf2153c5afb22bc1297a4570afe0fea25ed | 2021-10-29T06:39:39.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jaeyoung | null | jaeyoung/klue-roberta-large-wiki-mlm | 1 | 2 | transformers | 29,705 | Entry not found |
jaeyoung/klue_mln_train | 87eb5f2ac895195ae64bbade64e36a4e0204b240 | 2021-10-05T15:36:14.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jaeyoung | null | jaeyoung/klue_mln_train | 1 | null | transformers | 29,706 | Entry not found |
jaeyoung/notuse | 5e5b94d7c72acb3f03c988967a56862a660ac974 | 2021-10-28T11:55:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jaeyoung | null | jaeyoung/notuse | 1 | null | transformers | 29,707 | Entry not found |
jakobcassiman/mbart-large-cc25-cnn-dailymail-xsum-nl-test | f9d91591af8c7d997d5c74b918e130c93f49c823 | 2022-02-10T15:23:10.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jakobcassiman | null | jakobcassiman/mbart-large-cc25-cnn-dailymail-xsum-nl-test | 1 | null | transformers | 29,708 | Entry not found |
jaywhypark/test | 510622271db6b8f60683db33fe2add509a813fa5 | 2021-10-11T06:22:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jaywhypark | null | jaywhypark/test | 1 | 1 | transformers | 29,709 | Entry not found |
jcmc/wav2vec-1b-cv8-ir-n | 081fa8741ae003ace9e0c047e1b795f21e1eff22 | 2022-01-30T07:16:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jcmc | null | jcmc/wav2vec-1b-cv8-ir-n | 1 | null | transformers | 29,710 | ---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-1b-cv8-ir-n
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9810
- Wer: 0.4761
## Model description
More information needed
## Intended uses & limitations
More information needed
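As a hedged sketch only (the card itself provides no usage details), the checkpoint is expected to work with the standard `transformers` ASR pipeline; the audio path below is a hypothetical 16 kHz Irish (ga-IE) recording.
```python
from transformers import pipeline

# Minimal inference sketch; "sample_ga.wav" is a placeholder for a 16 kHz mono recording.
asr = pipeline("automatic-speech-recognition", model="jcmc/wav2vec-1b-cv8-ir-n")
print(asr("sample_ga.wav")["text"])
```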
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2427 | 15.15 | 500 | 1.4632 | 0.9481 |
| 1.3128 | 30.3 | 1000 | 0.8662 | 0.6195 |
| 0.9403 | 45.45 | 1500 | 0.8163 | 0.5169 |
| 0.6868 | 60.61 | 2000 | 0.8661 | 0.4858 |
| 0.563 | 75.76 | 2500 | 0.9447 | 0.4867 |
| 0.4887 | 90.91 | 3000 | 0.9650 | 0.4823 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jcmc/wav2vec-1b-cv8-ir | 88947697f310bceb285222fee66f6d239bfd27a4 | 2022-03-24T11:55:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jcmc | null | jcmc/wav2vec-1b-cv8-ir | 1 | null | transformers | 29,711 | ---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ga-IE
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec-1b-cv8-ir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 43.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-1b-cv8-ir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8445
- Wer: 0.5585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7135 | 31.24 | 500 | 0.9609 | 0.6926 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
jcmc/wav2vec-cv7-1b-ir | 2d9e27d9fc25b9f8e63f42586017ccc28f3d96af | 2022-03-24T11:55:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jcmc | null | jcmc/wav2vec-cv7-1b-ir | 1 | null | transformers | 29,712 | ---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ga-IE
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec-cv7-1b-ir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 39.1
- name: Test CER
type: cer
value: 16.4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-cv7-1b-ir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9562
- Wer: 0.4801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3731 | 15.62 | 500 | 1.5517 | 0.9499 |
| 1.3312 | 31.25 | 1000 | 0.8717 | 0.6189 |
| 0.9135 | 46.86 | 1500 | 0.8299 | 0.5310 |
| 0.6719 | 62.49 | 2000 | 0.8842 | 0.5044 |
| 0.5583 | 78.12 | 2500 | 0.9093 | 0.4801 |
| 0.4728 | 93.74 | 3000 | 0.9488 | 0.4813 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jcmc/wav2vec2-large-xlsr-53-ir | 6c389dc461fa5f7638f5506b9945955ed72d567f | 2022-01-26T10:35:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jcmc | null | jcmc/wav2vec2-large-xlsr-53-ir | 1 | null | transformers | 29,713 | ---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-ir
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0835
- Wer: 0.7490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
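For orientation, a minimal sketch of how the values above map onto `transformers` `TrainingArguments`; the output directory is a placeholder, and the full fine-tuning script (feature extractor, CTC tokenizer, and data collator) is omitted.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-53-ir",  # placeholder
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 x 4 = total train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=50.0,
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas/epsilon match the transformers defaults (0.9, 0.999, 1e-8).
)
```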
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1483 | 15.62 | 500 | 3.0498 | 1.0 |
| 2.8449 | 31.25 | 1000 | 2.7790 | 0.9493 |
| 1.8683 | 46.86 | 1500 | 1.2339 | 0.8161 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
jcmc/wav2vec2-xls-r-1b-ir | 7f3926ed13300c6f667d8f1837a4dbf389a9adb2 | 2022-01-27T13:09:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ga-IE",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jcmc | null | jcmc/wav2vec2-xls-r-1b-ir | 1 | null | transformers | 29,714 | ---
language:
- ga-IE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6569
- Wer: 0.8623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1851 | 15.62 | 500 | 1.8067 | 0.9256 |
| 2.1586 | 31.25 | 1000 | 1.7883 | 0.9180 |
| 2.0302 | 46.86 | 1500 | 1.7571 | 0.9192 |
| 1.8706 | 62.49 | 2000 | 1.6314 | 0.8858 |
| 1.7008 | 78.12 | 2500 | 1.6131 | 0.8679 |
| 1.4982 | 93.74 | 3000 | 1.6540 | 0.8650 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
jean-paul/KinyaBERT-large | 99a2ea562f8e3bf0eda774ff7125b12f23972ab7 | 2021-08-29T10:22:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jean-paul | null | jean-paul/KinyaBERT-large | 1 | null | transformers | 29,715 | # Model description
A model pretrained on a Kinyarwanda-language dataset using a masked language modeling (MLM) objective. The BERT model was first introduced in [this paper](https://arxiv.org/abs/1810.04805). This KinyaBERT model was pretrained on uncased tokens, which means that there is no difference between, for example, ikinyarwanda and Ikinyarwanda.
# Training parameters
#### Dataset
The dataset combines news articles from Rwanda extracted from different news web pages, dumped Wikipedia files, and books in Kinyarwanda. The sources amount to 72 thousand news articles, three thousand dumped Wikipedia articles, and six books with more than a thousand pages.
#### Hyperparameters
The model was trained with the default configuration of BERT and the Trainer from Hugging Face. However, due to limited compute resources, we kept the number of transformer layers at 12.
# How to use:
1) The model can be used directly with the pipeline for masked language modeling as follows:
```
from transformers import pipeline
the_mask_pipe = pipeline(
"fill-mask",
model='jean-paul/KinyaBERT-large',
tokenizer='jean-paul/KinyaBERT-large',
)
the_mask_pipe("Ejo ndikwiga nagize [MASK] baje kunsura.")
[{'sequence': 'ejo ndikwiga nagize amahirwe baje kunsura.', 'score': 0.3704017996788025, 'token': 1501, 'token_str': 'amahirwe'},
{'sequence': 'ejo ndikwiga nagize ngo baje kunsura.', 'score': 0.30745452642440796, 'token': 196, 'token_str': 'ngo'},
{'sequence': 'ejo ndikwiga nagize agahinda baje kunsura.', 'score': 0.0638100653886795, 'token': 3917, 'token_str': 'agahinda'},
{'sequence': 'ejo ndikwiga nagize ubwoba baje kunsura.', 'score': 0.04934622719883919, 'token': 2387, 'token_str': 'ubwoba'},
{'sequence': 'ejo ndikwiga nagizengo baje kunsura.', 'score': 0.02243402972817421, 'token': 455, 'token_str': '##ngo'}]
```
2) Direct use from the transformers library to get features using AutoModel
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("jean-paul/KinyaBERT-large")
model = AutoModelForMaskedLM.from_pretrained("jean-paul/KinyaBERT-large")
input_text = "Ejo ndikwiga nagize abashyitsi baje kunsura."
encoded_input = tokenizer(input_text, return_tensors='pt')
output = model(**encoded_input)
```
__Note__: We used the Hugging Face implementations for pretraining BERT from scratch, both the BERT model and the classes needed to do it. |
jeew/xlm-roberta-ckpt-95000 | c20338c1f309f9722d3b043dbab3dd4ca8ff06f8 | 2020-07-14T06:50:56.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | jeew | null | jeew/xlm-roberta-ckpt-95000 | 1 | null | transformers | 29,716 | Entry not found |
jei360/wav2vec2-large-xls-r-300m-TIMIT-test-jay | cf6a942b9aa4c0799943e623ca61819a03223025 | 2022-02-07T13:31:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | jei360 | null | jei360/wav2vec2-large-xls-r-300m-TIMIT-test-jay | 1 | null | transformers | 29,717 | Entry not found |
jenspt/byt5_extra_layer_1024_ft_all_clean_data_SAFETY | b739b88bc44ec4f8c819d1b57aab36b53438ee8a | 2021-12-05T18:50:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/byt5_extra_layer_1024_ft_all_clean_data_SAFETY | 1 | null | transformers | 29,718 | Entry not found |
jenspt/byt5_extra_layer_1024_ft_all_clean_data_SAFETY_v2 | 3cc3b877a1e704ce3ef5cd3727fce5a93e42ab3e | 2021-12-04T19:04:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/byt5_extra_layer_1024_ft_all_clean_data_SAFETY_v2 | 1 | null | transformers | 29,719 | Entry not found |
jenspt/byt5_ft_all_clean_data | edab272773107d3ba842ce52fa23b1aac2c59be6 | 2021-12-03T13:32:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/byt5_ft_all_clean_data | 1 | null | transformers | 29,720 | training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
#per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler (used to be 500)
weight_decay=0.01, # strength of weight decay
#learning_rate=0.1e-3, # default = 5e-5=0.5e-4
logging_dir='./logs', # directory for storing logs
logging_steps=50,
#eval_steps = 100,
overwrite_output_dir = True,
save_strategy = 'epoch',
#logging_strategy = 'epoch',
) |
jenspt/byt5_ft_all_clean_data_lr_1e4 | e5e9200ca48c4e71adc3b140449ac82b550aa348 | 2021-12-03T18:11:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/byt5_ft_all_clean_data_lr_1e4 | 1 | null | transformers | 29,721 | training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
#per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler (used to be 500)
weight_decay=0.01, # strength of weight decay
learning_rate=0.1e-3, # default = 5e-5=0.5e-4
logging_dir='./logs', # directory for storing logs
logging_steps=50,
#eval_steps = 100,
overwrite_output_dir = True,
save_strategy = 'epoch',
#logging_strategy = 'epoch',
) |
jenspt/byt5_ft_all_clean_data_ws3000 | fbac5e203a5fda76967964fbe96779e78abb3ec3 | 2021-12-03T13:32:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/byt5_ft_all_clean_data_ws3000 | 1 | null | transformers | 29,722 | training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
#per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler (used to be 500)
weight_decay=0.01, # strength of weight decay
#learning_rate=0.1e-3, # default = 5e-5=0.5e-4
logging_dir='./logs', # directory for storing logs
logging_steps=50,
#eval_steps = 100,
overwrite_output_dir = True,
save_strategy = 'epoch',
#logging_strategy = 'epoch',
) |
jessiejohnson/wav2vec2-base-timit-demo-colab | 82cda36492cf326ec56534067d62fbf9f33d9d88 | 2021-11-19T02:51:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | jessiejohnson | null | jessiejohnson/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 29,723 | Entry not found |
jfarray/Model_all-distilroberta-v1_10_Epochs | 800faafba98078a9cebd45afdab8aec97dee4ae1 | 2022-02-13T19:47:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_all-distilroberta-v1_10_Epochs | 1 | null | sentence-transformers | 29,724 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method (see the training-call sketch after this block):
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
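For reference only, a minimal sketch of how the parameters above would be passed to a `sentence-transformers` training run; the base checkpoint, the scored sentence pairs, and the output path are assumptions for illustration, not the data actually used.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, evaluation

# Assumption: the base checkpoint is sentence-transformers/all-distilroberta-v1.
model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")

# Hypothetical scored sentence pairs standing in for the unpublished training data.
examples = [InputExample(texts=["primer texto", "segundo texto"], label=0.8)]
train_dataloader = DataLoader(examples, shuffle=True, batch_size=15)
train_loss = losses.CosineSimilarityLoss(model)
evaluator = evaluation.EmbeddingSimilarityEvaluator.from_input_examples(examples)

# Mirrors the fit() parameters listed above; optimizer, scheduler, and max_grad_norm
# are the sentence-transformers defaults (AdamW, WarmupLinear, 1).
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=10,
    evaluation_steps=1,
    warmup_steps=11,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    output_path="output/Model_all-distilroberta-v1_10_Epochs",  # placeholder
)
```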
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_all-distilroberta-v1_1_Epochs | 8a0fefba5e3246814e13cedbb6af56e498197c54 | 2022-02-13T19:34:14.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_all-distilroberta-v1_1_Epochs | 1 | null | sentence-transformers | 29,725 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_bert-base-multilingual-uncased_100_Epochs | 67dc12c104990bd5a73fd244b7c76d88093dca0b | 2022-02-14T20:23:54.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_bert-base-multilingual-uncased_100_Epochs | 1 | null | sentence-transformers | 29,726 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_bert-base-multilingual-uncased_1_Epochs | 2b8adb950c7be9e789f9256f843e3e3f11f6d1bd | 2022-02-13T22:49:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_bert-base-multilingual-uncased_1_Epochs | 1 | null | sentence-transformers | 29,727 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_bert-base-multilingual-uncased_30_Epochs | 7552524611a97eee527f0712063c2cc29cc11703 | 2022-02-13T23:54:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_bert-base-multilingual-uncased_30_Epochs | 1 | null | sentence-transformers | 29,728 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_bert-base-multilingual-uncased_5_Epochs | 3d7f9a904afcc8b0a9d1371ca9b9ce4e5230e826 | 2022-02-13T23:03:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_bert-base-multilingual-uncased_5_Epochs | 1 | null | sentence-transformers | 29,729 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs | 5e0ee9f64983665eb6e5757829ba748505102780 | 2022-02-14T21:06:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs | 1 | null | sentence-transformers | 29,730 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_1_Epochs | 17623903614278282b7136187b1a175af351b700 | 2022-02-14T20:36:45.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_1_Epochs | 1 | null | sentence-transformers | 29,731 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_100_Epochs | 01fb85464d60ee9ff20e55362ffa449720da5153 | 2022-02-12T19:45:48.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_distiluse-base-multilingual-cased-v1_100_Epochs | 1 | null | sentence-transformers | 29,732 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_1_Epochs | 4971f327b1b12da6ae996410444ea304fd97a072 | 2022-04-25T15:29:40.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_distiluse-base-multilingual-cased-v1_1_Epochs | 1 | null | sentence-transformers | 29,733 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_50_Epochs | 73f148cafe8355f92920cb3d31628e1e70027dad | 2022-02-12T14:26:35.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_distiluse-base-multilingual-cased-v1_50_Epochs | 1 | null | sentence-transformers | 29,734 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_1_Epochs | 74a82a304a77d1ab2799869ce958e1e3e6606de2 | 2022-02-12T21:48:20.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-mpnet-base-v2_1_Epochs | 1 | null | sentence-transformers | 29,735 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_5_Epochs | 3f307c9f44e01253fa6666f83afcc14b7fc2e599 | 2022-02-12T22:09:20.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-mpnet-base-v2_5_Epochs | 1 | null | sentence-transformers | 29,736 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jgammack/MTL-bert-base-uncased-ww-squad | 4bbd1ff506614947d7e6a324353beba88da1e14b | 2022-02-08T22:16:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | jgammack | null | jgammack/MTL-bert-base-uncased-ww-squad | 1 | null | transformers | 29,737 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: MTL-bert-base-uncased-ww-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww-squad
This model is a fine-tuned version of [jgammack/MTL-bert-base-uncased-ww](https://huggingface.co/jgammack/MTL-bert-base-uncased-ww) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
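As a quick starting point, the model can be tried through the Hugging Face `pipeline` API for question answering. The sketch below is illustrative only; the question/context pair is made up for demonstration.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/MTL-bert-base-uncased-ww-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="MTL-bert-base-uncased-ww-squad is a BERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```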
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/MTL-bert-base-uncased | a6e1fde8f355be3d26c0bccdf07d9108176ab281 | 2022-02-07T23:09:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jgammack | null | jgammack/MTL-bert-base-uncased | 1 | null | transformers | 29,738 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
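Expressed with the `transformers` `TrainingArguments` API, these settings correspond roughly to the sketch below; anything not listed above (such as `output_dir`) is a placeholder assumption, and the Adam betas/epsilon are the Trainer defaults.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="MTL-bert-base-uncased",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=7,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision
)
```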
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4409 | 1.0 | 99 | 2.1982 |
| 2.2905 | 2.0 | 198 | 2.1643 |
| 2.1974 | 3.0 | 297 | 2.1168 |
| 2.15 | 4.0 | 396 | 2.0023 |
| 2.0823 | 5.0 | 495 | 2.0199 |
| 2.0752 | 6.0 | 594 | 1.9061 |
| 2.0408 | 7.0 | 693 | 1.9770 |
| 1.9984 | 8.0 | 792 | 1.9322 |
| 1.9933 | 9.0 | 891 | 1.9167 |
| 1.9806 | 10.0 | 990 | 1.9652 |
| 1.9436 | 11.0 | 1089 | 1.9308 |
| 1.9491 | 12.0 | 1188 | 1.9064 |
| 1.929 | 13.0 | 1287 | 1.8831 |
| 1.9096 | 14.0 | 1386 | 1.8927 |
| 1.9032 | 15.0 | 1485 | 1.9117 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/MTL-distilbert-base-uncased | 8133100397c21d7a51e3763a9ff6e800e7f8f336 | 2022-02-07T23:23:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jgammack | null | jgammack/MTL-distilbert-base-uncased | 1 | null | transformers | 29,739 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5593 | 1.0 | 99 | 2.3163 |
| 2.4346 | 2.0 | 198 | 2.2918 |
| 2.3377 | 3.0 | 297 | 2.2345 |
| 2.2953 | 4.0 | 396 | 2.1463 |
| 2.2296 | 5.0 | 495 | 2.1761 |
| 2.2235 | 6.0 | 594 | 2.0721 |
| 2.1878 | 7.0 | 693 | 2.1460 |
| 2.1569 | 8.0 | 792 | 2.0856 |
| 2.1455 | 9.0 | 891 | 2.1039 |
| 2.1391 | 10.0 | 990 | 2.1112 |
| 2.1056 | 11.0 | 1089 | 2.0694 |
| 2.1076 | 12.0 | 1188 | 2.0501 |
| 2.0919 | 13.0 | 1287 | 2.0484 |
| 2.0669 | 14.0 | 1386 | 2.0342 |
| 2.0595 | 15.0 | 1485 | 2.0802 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-bert-base-uncased | 8736441c1b4dbacc0344e3c07dfa22cd5ce56f83 | 2022-02-09T15:33:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jgammack | null | jgammack/SAE-bert-base-uncased | 1 | null | transformers | 29,740 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SAE-bert-base-uncased
results: []
widget:
- text: "Wind [MASK] was detected coming from the car door closure system."
example_title: "Closure system"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1256
## Model description
More information needed
## Intended uses & limitations
More information needed
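As a quick illustration (not part of the original training setup), the widget prompt above can be reproduced with the fill-mask pipeline:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jgammack/SAE-bert-base-uncased")
# Same prompt as the widget example
for pred in fill("Wind [MASK] was detected coming from the car door closure system."):
    print(pred["token_str"], round(pred["score"], 3))
```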
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5967 | 1.0 | 80 | 2.3409 |
| 2.4881 | 2.0 | 160 | 2.2707 |
| 2.3567 | 3.0 | 240 | 2.3134 |
| 2.3413 | 4.0 | 320 | 2.2592 |
| 2.3006 | 5.0 | 400 | 2.2351 |
| 2.2568 | 6.0 | 480 | 2.2556 |
| 2.2303 | 7.0 | 560 | 2.2546 |
| 2.1892 | 8.0 | 640 | 2.1868 |
| 2.1851 | 9.0 | 720 | 2.2073 |
| 2.1738 | 10.0 | 800 | 2.1344 |
| 2.1673 | 11.0 | 880 | 2.1927 |
| 2.1518 | 12.0 | 960 | 2.1844 |
| 2.1142 | 13.0 | 1040 | 2.1466 |
| 2.1343 | 14.0 | 1120 | 2.2024 |
| 2.1332 | 15.0 | 1200 | 2.1035 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-distilbert-base-uncased-squad | 643c65ecb514e10399981e69e6a35511d575ab89 | 2022-02-08T04:03:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | jgammack | null | jgammack/SAE-distilbert-base-uncased-squad | 1 | null | transformers | 29,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: SAE-distilbert-base-uncased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-distilbert-base-uncased-squad
This model is a fine-tuned version of [jgammack/SAE-distilbert-base-uncased](https://huggingface.co/jgammack/SAE-distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-roberta-base | 1385e83c0b2cec98aa016c0d408787f7ca482ef0 | 2022-02-07T22:14:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jgammack | null | jgammack/SAE-roberta-base | 1 | null | transformers | 29,742 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SAE-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9847 | 1.0 | 79 | 1.8238 |
| 1.9142 | 2.0 | 158 | 1.8299 |
| 1.8613 | 3.0 | 237 | 1.7636 |
| 1.8384 | 4.0 | 316 | 1.8048 |
| 1.8193 | 5.0 | 395 | 1.7734 |
| 1.7985 | 6.0 | 474 | 1.7271 |
| 1.7758 | 7.0 | 553 | 1.8525 |
| 1.7611 | 8.0 | 632 | 1.7716 |
| 1.7599 | 9.0 | 711 | 1.7913 |
| 1.7118 | 10.0 | 790 | 1.7578 |
| 1.7003 | 11.0 | 869 | 1.7598 |
| 1.7072 | 12.0 | 948 | 1.6942 |
| 1.6511 | 13.0 | 1027 | 1.6955 |
| 1.6802 | 14.0 | 1106 | 1.7837 |
| 1.7048 | 15.0 | 1185 | 1.7377 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/multi-qa-MTL-distilbert-base-uncased-40k | a96a9275663fa03149c0347fbfcb7462766fb316 | 2022-02-12T14:14:47.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jgammack | null | jgammack/multi-qa-MTL-distilbert-base-uncased-40k | 1 | null | sentence-transformers | 29,743 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-MTL-distilbert-base-uncased-40k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased-40k)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jgammack/multi-qa-MTL-distilbert-base-uncased | 2978bbe5d05ecf9e49b5d37720688b220412325e | 2022-02-12T03:52:06.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jgammack | null | jgammack/multi-qa-MTL-distilbert-base-uncased | 1 | null | sentence-transformers | 29,744 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-MTL-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jgammack/multi-qa-SAE-distilbert-base-uncased | 7c015b17d711fefd4e7a8a1f6e1bdb20beb5f7a0 | 2022-02-11T19:45:37.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jgammack | null | jgammack/multi-qa-SAE-distilbert-base-uncased | 1 | null | sentence-transformers | 29,745 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-SAE-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-SAE-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-SAE-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-SAE-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-SAE-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jgammack/multi-qa-distilbert-base-uncased | 47ec10d48cedfc6558fb7ccb39f974cfdd23ad16 | 2022-02-11T23:40:41.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jgammack | null | jgammack/multi-qa-distilbert-base-uncased | 1 | null | sentence-transformers | 29,746 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ji-xin/bert_base-MNLI-two_stage | 80f9b3325dbe9be4452ad6960b93190737773cb0 | 2020-07-08T14:51:18.000Z | [
"pytorch",
"transformers"
] | null | false | ji-xin | null | ji-xin/bert_base-MNLI-two_stage | 1 | null | transformers | 29,747 | Entry not found |
ji-xin/bert_base-QQP-two_stage | 64a6abd410d5437de2cefa192bf0cf0083c2cf90 | 2020-07-08T14:53:42.000Z | [
"pytorch",
"transformers"
] | null | false | ji-xin | null | ji-xin/bert_base-QQP-two_stage | 1 | null | transformers | 29,748 | Entry not found |
ji-xin/bert_large-SST2-two_stage | 73824ae884c7830b4e197075c93eceb4ebb19ded | 2020-07-08T15:00:26.000Z | [
"pytorch",
"transformers"
] | null | false | ji-xin | null | ji-xin/bert_large-SST2-two_stage | 1 | null | transformers | 29,749 | Entry not found |
jial/Trove-BERT-AR | 27c56719144ac10a7dd7cacb71cd200bea458a31 | 2021-07-22T19:20:49.000Z | [
"pytorch",
"tensorboard",
"bart",
"text-generation",
"transformers"
] | text-generation | false | jial | null | jial/Trove-BERT-AR | 1 | null | transformers | 29,750 | Entry not found |
jieun/tempBERT | 50e3dd413154a246734d72e73f642231d2c7bda3 | 2021-03-15T09:53:28.000Z | [
"pytorch",
"tf"
] | null | false | jieun | null | jieun/tempBERT | 1 | null | null | 29,751 | Entry not found |
jimregan/BERTreach-finetuned-ner | 9d389ba9d4ff6b7b4084e693c796d6c18c42625d | 2021-12-01T20:37:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"ga",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"irish",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | jimregan | null | jimregan/BERTreach-finetuned-ner | 1 | null | transformers | 29,752 | ---
license: apache-2.0
language: ga
tags:
- generated_from_trainer
- irish
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERTreach-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: ga
metrics:
- name: Precision
type: precision
value: 0.5200517464424321
- name: Recall
type: recall
value: 0.5667293233082706
- name: F1
type: f1
value: 0.5423881268270744
- name: Accuracy
type: accuracy
value: 0.8365605828220859
widget:
- text: "Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTreach-finetuned-ner
This model is a fine-tuned version of [jimregan/BERTreach](https://huggingface.co/jimregan/BERTreach) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4944
- Precision: 0.5201
- Recall: 0.5667
- F1: 0.5424
- Accuracy: 0.8366
## Model description
More information needed
## Intended uses & limitations
More information needed
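For a quick check, the widget sentence can be tagged with the token-classification pipeline; the sketch below is illustrative, and `aggregation_strategy="simple"` is just one convenient choice for grouping word pieces into entity spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jimregan/BERTreach-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."))
```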
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 0.7249 | 0.3645 | 0.3905 | 0.3770 | 0.7584 |
| No log | 2.0 | 126 | 0.5850 | 0.4529 | 0.4948 | 0.4729 | 0.8072 |
| No log | 3.0 | 189 | 0.5192 | 0.4949 | 0.5456 | 0.5190 | 0.8288 |
| No log | 4.0 | 252 | 0.5042 | 0.5208 | 0.5592 | 0.5393 | 0.8348 |
| No log | 5.0 | 315 | 0.4944 | 0.5201 | 0.5667 | 0.5424 | 0.8366 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jimregan/wav2vec2-large-xlsr-latvian-cv | 245a7f95e014cc14dd1fecd882c2cb871cf3a994 | 2021-09-22T08:52:58.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jimregan | null | jimregan/wav2vec2-large-xlsr-latvian-cv | 1 | null | transformers | 29,753 | ---
language: lv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: jimregan/wav2vec2-large-xlsr-latvian-cv
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lv
type: common_voice
args: lv
metrics:
- name: Test WER
type: wer
value: 29.95
---
# Wav2Vec2-Large-XLSR-Latvian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Latvian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Latvian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model.to("cuda")
chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*\…\—\–\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.95 %
|
jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed | 5a77d25a432050db48a015a2aec0463b2d231b39 | 2021-07-06T06:59:07.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"hsb",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jimregan | null | jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed | 1 | null | transformers | 29,754 | ---
language: hsb
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Upper Sorbian mixed by Jim O'Regan
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hsb
type: common_voice
args: hsb
metrics:
- name: Test WER
type: wer
value: 43.48
---
# Wav2Vec2-Large-XLSR-Upper-Sorbian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Upper Sorbian Common Voice dataset](https://huggingface.co/datasets/common_voice), with an
extra 28 minutes of audio from an online [Sorbian course](https://sprachkurs.sorbischlernen.de/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Upper Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�„«»–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = remove_special_characters(batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.2 %
## Training
The Common Voice `train` and `validation` datasets were used for training, with the vocabulary from the English A1 lesson from an online [Sorbian course](https://sprachkurs.sorbischlernen.de/)
The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/upper_sorbian/fine-tune-xlsr-wav2vec2-on-upper-sorbian-asr-with-transformers.ipynb)
The script used for cleaning the transcripts of the vocabulary data is [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/upper_sorbian/sprachkurs.ipynb) |
jinbbong/kbert_base_esg_e5 | a4828f05eaabc4d6a330e299c5dc143a7658e61d | 2021-11-03T05:19:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kbert_base_esg_e5 | 1 | null | transformers | 29,755 | Entry not found |
jinbbong/kobart-esg-e3-b32-v2 | 63591cc0202068cac0fdaf0c23e56c7fdd7d2908 | 2021-11-02T09:56:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jinbbong | null | jinbbong/kobart-esg-e3-b32-v2 | 1 | null | transformers | 29,756 | Entry not found |
jinbbong/kobert-esg-e3-v2 | 6b15605be01d6f70b881f97e2c4bbdc8cd8b645f | 2021-09-22T01:18:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kobert-esg-e3-v2 | 1 | null | transformers | 29,757 | Entry not found |
jinbbong/kobert-esg-e3-v3 | c9c6e9924aa8ecbfffd5f5915047031416169c5a | 2021-09-23T12:06:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kobert-esg-e3-v3 | 1 | null | transformers | 29,758 | Entry not found |
jinbbong/kobert-esg-e3 | d6a67dcde8210867dde5eee6df03fcac45b2bf3f | 2021-09-05T12:30:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kobert-esg-e3 | 1 | null | transformers | 29,759 | Entry not found |
jinbbong/kobert-esg-e5-v2 | 55c09a4f516bb249d0d40d7d1fe96ea56cb852cd | 2021-09-26T02:44:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kobert-esg-e5-v2 | 1 | null | transformers | 29,760 | Entry not found |
jinlmsft/t5-base-domain-detect | 319452e54d377647c31c34174dbde434ba554d10 | 2022-01-30T07:22:26.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | jinlmsft | null | jinlmsft/t5-base-domain-detect | 1 | null | transformers | 29,761 | Entry not found |
jinmang2/kobart | cf367f354b74fd8b007b7268e477d3fd556473bc | 2021-06-30T01:09:45.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | jinmang2 | null | jinmang2/kobart | 1 | 1 | transformers | 29,762 | Entry not found |
jinmang2/kobart_sum | 34ad55b36023cec4f458f5ce6db610ed950d46a2 | 2021-11-30T03:56:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jinmang2 | null | jinmang2/kobart_sum | 1 | null | transformers | 29,763 | Entry not found |
jinmang2/roberta-large-klue-re-tapt-vocab50004 | f4e01b5163d33f60b9f9738bb889282573298c1e | 2021-10-06T21:41:28.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinmang2 | null | jinmang2/roberta-large-klue-re-tapt-vocab50004 | 1 | null | transformers | 29,764 | Entry not found |
jinzhan/jzmodel01 | e885265d1fab35be24dfb3ccf7a7c4123d71b078 | 2021-11-02T04:28:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinzhan | null | jinzhan/jzmodel01 | 1 | null | transformers | 29,765 | Entry not found |
jiobiala24/wav2vec2-base-checkpoint-10 | a2a43b299e4a7e1666a91229528b7cd36e3aff18 | 2022-01-30T16:10:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-10 | 1 | null | transformers | 29,766 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-10
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-9](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-9) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9567
- Wer: 0.3292
## Model description
More information needed
## Intended uses & limitations
More information needed
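For transcription, a minimal sketch using the automatic-speech-recognition pipeline is shown below; the audio path is a placeholder, the input is expected to be 16 kHz speech, and decoding audio files also requires ffmpeg on the system.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiobiala24/wav2vec2-base-checkpoint-10")
# "sample.wav" is a placeholder for any 16 kHz recording
print(asr("sample.wav")["text"])
```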
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2892 | 1.62 | 1000 | 0.5745 | 0.3467 |
| 0.235 | 3.23 | 2000 | 0.6156 | 0.3423 |
| 0.1782 | 4.85 | 3000 | 0.6299 | 0.3484 |
| 0.1504 | 6.46 | 4000 | 0.6475 | 0.3446 |
| 0.133 | 8.08 | 5000 | 0.6753 | 0.3381 |
| 0.115 | 9.69 | 6000 | 0.7834 | 0.3529 |
| 0.101 | 11.31 | 7000 | 0.7924 | 0.3426 |
| 0.0926 | 12.92 | 8000 | 0.7887 | 0.3465 |
| 0.0863 | 14.54 | 9000 | 0.7674 | 0.3439 |
| 0.0788 | 16.16 | 10000 | 0.8648 | 0.3435 |
| 0.0728 | 17.77 | 11000 | 0.8460 | 0.3395 |
| 0.0693 | 19.39 | 12000 | 0.8941 | 0.3451 |
| 0.0637 | 21.0 | 13000 | 0.9079 | 0.3356 |
| 0.0584 | 22.62 | 14000 | 0.8851 | 0.3336 |
| 0.055 | 24.23 | 15000 | 0.9400 | 0.3338 |
| 0.0536 | 25.85 | 16000 | 0.9387 | 0.3335 |
| 0.0481 | 27.46 | 17000 | 0.9664 | 0.3337 |
| 0.0485 | 29.08 | 18000 | 0.9567 | 0.3292 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-11.1 | 3baa8d32f45a5057e552828a9eb024c4f13d9a19 | 2022-02-07T19:33:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-11.1 | 1 | null | transformers | 29,767 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-11.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-11.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-10](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-10) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0173
- Wer: 0.3350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2788 | 1.52 | 1000 | 0.5776 | 0.3410 |
| 0.2277 | 3.04 | 2000 | 0.6148 | 0.3465 |
| 0.1772 | 4.56 | 3000 | 0.6497 | 0.3497 |
| 0.1528 | 6.08 | 4000 | 0.6786 | 0.3430 |
| 0.1285 | 7.6 | 5000 | 0.6779 | 0.3489 |
| 0.1104 | 9.12 | 6000 | 0.7417 | 0.3528 |
| 0.0965 | 10.64 | 7000 | 0.7956 | 0.3477 |
| 0.0914 | 12.16 | 8000 | 0.7994 | 0.3570 |
| 0.082 | 13.68 | 9000 | 0.8690 | 0.3510 |
| 0.0788 | 15.2 | 10000 | 0.8569 | 0.3526 |
| 0.0727 | 16.72 | 11000 | 0.8885 | 0.3440 |
| 0.0656 | 18.24 | 12000 | 0.9586 | 0.3476 |
| 0.0608 | 19.76 | 13000 | 0.9317 | 0.3495 |
| 0.0588 | 21.28 | 14000 | 0.9809 | 0.3449 |
| 0.0547 | 22.8 | 15000 | 0.9552 | 0.3421 |
| 0.0519 | 24.32 | 16000 | 0.9782 | 0.3380 |
| 0.0474 | 25.84 | 17000 | 0.9923 | 0.3386 |
| 0.046 | 27.36 | 18000 | 0.9984 | 0.3347 |
| 0.045 | 28.88 | 19000 | 1.0173 | 0.3350 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-2 | e236f84ddbe98d30a4b03ef8dcea8410fb7db08e | 2022-01-07T06:08:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-2 | 1 | null | transformers | 29,768 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-TPU-cv-fine-tune-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-TPU-cv-fine-tune-2
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-TPU-cv-fine-tune](https://huggingface.co/jiobiala24/wav2vec2-base-TPU-cv-fine-tune) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6051
- Wer: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.522 | 6.45 | 400 | 1.2550 | 0.5649 |
| 0.2874 | 12.9 | 800 | 1.4235 | 0.6054 |
| 0.152 | 19.35 | 1200 | 1.5743 | 0.5806 |
| 0.0857 | 25.8 | 1600 | 1.6051 | 0.5484 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-3 | fcbe45421065e54a7f07e8f4bca943eb90dffd28 | 2022-01-14T02:59:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-3 | 1 | null | transformers | 29,769 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-3
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-2](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-2) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7007
- Wer: 0.5514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.358 | 14.8 | 400 | 1.4841 | 0.5338 |
| 0.1296 | 29.62 | 800 | 1.7007 | 0.5514 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-4 | 63922dd19cb5efc27e3833aca4a0536f838fd521 | 2022-01-15T12:59:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-4 | 1 | null | transformers | 29,770 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-4
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-3](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-3) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-8 | c159437219ee5cda74a3fa7b10e2f5be76a65b78 | 2022-01-24T01:26:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-8 | 1 | null | transformers | 29,771 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-8
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-7.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-7.1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9561
- Wer: 0.3271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3117 | 1.59 | 1000 | 0.5514 | 0.3451 |
| 0.2509 | 3.19 | 2000 | 0.5912 | 0.3328 |
| 0.1918 | 4.78 | 3000 | 0.6103 | 0.3346 |
| 0.1612 | 6.38 | 4000 | 0.6469 | 0.3377 |
| 0.1388 | 7.97 | 5000 | 0.6597 | 0.3391 |
| 0.121 | 9.57 | 6000 | 0.6911 | 0.3472 |
| 0.1096 | 11.16 | 7000 | 0.7300 | 0.3457 |
| 0.0959 | 12.76 | 8000 | 0.7660 | 0.3400 |
| 0.0882 | 14.35 | 9000 | 0.8316 | 0.3394 |
| 0.0816 | 15.95 | 10000 | 0.8042 | 0.3357 |
| 0.0739 | 17.54 | 11000 | 0.8087 | 0.3346 |
| 0.0717 | 19.14 | 12000 | 0.8590 | 0.3353 |
| 0.066 | 20.73 | 13000 | 0.8750 | 0.3336 |
| 0.0629 | 22.33 | 14000 | 0.8759 | 0.3333 |
| 0.0568 | 23.92 | 15000 | 0.8963 | 0.3321 |
| 0.0535 | 25.52 | 16000 | 0.9391 | 0.3323 |
| 0.0509 | 27.11 | 17000 | 0.9279 | 0.3296 |
| 0.0498 | 28.71 | 18000 | 0.9561 | 0.3271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
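### Usage (sketch)
Since the card does not include a usage example, the following is a minimal, hedged sketch using the `transformers` ASR pipeline; `sample.wav` is a placeholder path, and wav2vec2 models expect 16 kHz mono audio.
```python
from transformers import pipeline

# Minimal usage sketch; "sample.wav" is a placeholder audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="jiobiala24/wav2vec2-base-checkpoint-8",
)
print(asr("sample.wav")["text"])
```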
|
jldnunna369/t5-small-finetuned-xsum | 41b9a517431d939968d415ed7bb530e0bd442de8 | 2022-01-07T07:26:12.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jldnunna369 | null | jldnunna369/t5-small-finetuned-xsum | 1 | null | transformers | 29,772 | Entry not found |
joaomiguel26/xlm-roberta-11-final | 09fbb7f7420cb58de65f2d6ecb4059b5dbb61f28 | 2021-12-06T16:29:34.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | joaomiguel26 | null | joaomiguel26/xlm-roberta-11-final | 1 | null | transformers | 29,773 | Entry not found |
joaomiguel26/xlm-roberta-9-final | 33e325363950c604c13493157167d90d12a05542 | 2021-12-06T16:24:37.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | joaomiguel26 | null | joaomiguel26/xlm-roberta-9-final | 1 | null | transformers | 29,774 | Entry not found |
joe8zhang/dummy-model | 5357ca46ab87a98c64e0ba5540e67e9d3b9aa204 | 2021-06-24T01:01:44.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | joe8zhang | null | joe8zhang/dummy-model | 1 | null | transformers | 29,775 | Entry not found |
jogonba2/bart-JES-xsum | d7800faca3f05b7b02530cd582ffd6c7a09f3ae2 | 2021-10-13T09:15:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jogonba2 | null | jogonba2/bart-JES-xsum | 1 | null | transformers | 29,776 | Entry not found |
jogonba2/barthez-deft-archeologie | 4b00adaec2ef98884b2b8f668619bb99348f7d27 | 2022-04-14T14:04:35.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jogonba2 | null | jogonba2/barthez-deft-archeologie | 1 | null | transformers | 29,777 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: barthez-deft-archeologie
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 37.1845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-archeologie
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)
It achieves the following results on the evaluation set:
- Loss: 2.0733
- Rouge1: 37.1845
- Rouge2: 16.9534
- Rougel: 28.8416
- Rougelsum: 29.077
- Gen Len: 34.4028
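For reference, ROUGE figures of this kind are conventionally computed as in the short sketch below, using the `rouge` metric from the `datasets` library; the prediction and reference strings are invented placeholders, not items from the evaluation data.
```python
from datasets import load_metric

# Placeholder strings only, for illustration of the metric call.
rouge = load_metric("rouge")
predictions = ["the excavation uncovered several medieval structures"]
references = ["the excavation uncovered a group of medieval structures"]
scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"].mid.fmeasure)
```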
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.4832 | 1.0 | 108 | 2.4237 | 22.6662 | 10.009 | 19.8729 | 19.8814 | 15.8333 |
| 2.557 | 2.0 | 216 | 2.2328 | 24.8102 | 11.9911 | 20.4773 | 20.696 | 19.0139 |
| 2.2702 | 3.0 | 324 | 2.2002 | 25.6482 | 11.6191 | 21.8383 | 21.9341 | 18.1944 |
| 2.1119 | 4.0 | 432 | 2.1266 | 25.5806 | 11.9765 | 21.3973 | 21.3503 | 19.4306 |
| 1.9582 | 5.0 | 540 | 2.1072 | 25.6578 | 12.2709 | 22.182 | 22.0548 | 19.1528 |
| 1.8137 | 6.0 | 648 | 2.1008 | 26.5272 | 11.4033 | 22.359 | 22.3259 | 19.4722 |
| 1.7725 | 7.0 | 756 | 2.1074 | 25.0405 | 11.1773 | 21.1369 | 21.1847 | 19.1806 |
| 1.6772 | 8.0 | 864 | 2.0959 | 26.5237 | 11.6028 | 22.5018 | 22.3931 | 19.3333 |
| 1.5798 | 9.0 | 972 | 2.0976 | 27.7443 | 11.9898 | 22.4052 | 22.2954 | 19.7222 |
| 1.4753 | 10.0 | 1080 | 2.0733 | 28.3502 | 12.9162 | 22.6352 | 22.6015 | 19.8194 |
| 1.4646 | 11.0 | 1188 | 2.1091 | 27.9198 | 12.8591 | 23.0718 | 23.0779 | 19.6111 |
| 1.4082 | 12.0 | 1296 | 2.1036 | 28.8509 | 13.0987 | 23.4189 | 23.5044 | 19.4861 |
| 1.2862 | 13.0 | 1404 | 2.1222 | 28.6641 | 12.8157 | 22.6799 | 22.7051 | 19.8611 |
| 1.2612 | 14.0 | 1512 | 2.1487 | 26.9709 | 11.6084 | 22.0312 | 22.0543 | 19.875 |
| 1.2327 | 15.0 | 1620 | 2.1808 | 28.218 | 12.6239 | 22.7372 | 22.7881 | 19.7361 |
| 1.2264 | 16.0 | 1728 | 2.1778 | 26.7393 | 11.4474 | 21.6057 | 21.555 | 19.7639 |
| 1.1848 | 17.0 | 1836 | 2.1995 | 27.6902 | 12.1082 | 22.0406 | 22.0101 | 19.6806 |
| 1.133 | 18.0 | 1944 | 2.2038 | 27.0402 | 12.1846 | 21.7793 | 21.7513 | 19.8056 |
| 1.168 | 19.0 | 2052 | 2.2116 | 27.5149 | 11.9876 | 22.1113 | 22.1527 | 19.7222 |
| 1.1206 | 20.0 | 2160 | 2.2133 | 28.2321 | 12.677 | 22.749 | 22.8485 | 19.5972 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
jogonba2/barthez-deft-chimie | 9ade3f0b850a74a54f24639cd032df5a21a17130 | 2022-04-14T14:04:20.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jogonba2 | null | jogonba2/barthez-deft-chimie | 1 | null | transformers | 29,778 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: barthez-deft-chimie
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 31.8947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-chimie
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)
It achieves the following results on the evaluation set:
- Loss: 2.0710
- Rouge1: 31.8947
- Rouge2: 16.7563
- Rougel: 23.5428
- Rougelsum: 23.4918
- Gen Len: 38.5256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.8022 | 1.0 | 118 | 2.5491 | 16.8208 | 7.0027 | 13.957 | 14.0479 | 19.1538 |
| 2.9286 | 2.0 | 236 | 2.3074 | 17.5356 | 7.8717 | 14.4874 | 14.5044 | 19.9487 |
| 2.5422 | 3.0 | 354 | 2.2322 | 19.6491 | 9.4156 | 15.9467 | 15.9433 | 19.7051 |
| 2.398 | 4.0 | 472 | 2.1500 | 18.7166 | 9.859 | 15.7535 | 15.8036 | 19.9231 |
| 2.2044 | 5.0 | 590 | 2.1372 | 19.978 | 10.6235 | 16.1348 | 16.1274 | 19.6154 |
| 1.9405 | 6.0 | 708 | 2.0992 | 20.226 | 10.551 | 16.6928 | 16.7211 | 19.9744 |
| 1.8544 | 7.0 | 826 | 2.0841 | 19.8869 | 10.8456 | 16.1072 | 16.097 | 19.8846 |
| 1.7536 | 8.0 | 944 | 2.0791 | 19.3017 | 9.4921 | 16.1541 | 16.2167 | 19.859 |
| 1.6914 | 9.0 | 1062 | 2.0710 | 21.3848 | 10.4088 | 17.1963 | 17.2254 | 19.8846 |
| 1.654 | 10.0 | 1180 | 2.1069 | 22.3811 | 10.7987 | 18.7595 | 18.761 | 19.9231 |
| 1.5899 | 11.0 | 1298 | 2.0919 | 20.8546 | 10.6958 | 16.8637 | 16.9499 | 19.8077 |
| 1.4661 | 12.0 | 1416 | 2.1065 | 22.3677 | 11.7472 | 18.262 | 18.3 | 19.9744 |
| 1.4205 | 13.0 | 1534 | 2.1164 | 20.5845 | 10.7825 | 16.9972 | 17.0216 | 19.9359 |
| 1.3797 | 14.0 | 1652 | 2.1240 | 22.2561 | 11.303 | 17.5064 | 17.5815 | 19.9744 |
| 1.3724 | 15.0 | 1770 | 2.1187 | 23.2825 | 11.912 | 18.5208 | 18.5499 | 19.9359 |
| 1.3404 | 16.0 | 1888 | 2.1394 | 22.1305 | 10.5258 | 17.772 | 17.8202 | 19.9744 |
| 1.2846 | 17.0 | 2006 | 2.1502 | 21.567 | 11.0557 | 17.2562 | 17.2974 | 20.0 |
| 1.2871 | 18.0 | 2124 | 2.1572 | 22.5871 | 11.702 | 18.2906 | 18.3826 | 19.9744 |
| 1.2422 | 19.0 | 2242 | 2.1613 | 23.0935 | 11.6824 | 18.6087 | 18.6777 | 19.9744 |
| 1.2336 | 20.0 | 2360 | 2.1581 | 22.6789 | 11.4363 | 18.1661 | 18.2346 | 19.9487 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
jogonba2/barthez-deft-linguistique | 5e6c6766f2d028b37e1a47870f01c5e8bdc257b7 | 2022-04-14T14:04:46.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jogonba2 | null | jogonba2/barthez-deft-linguistique | 1 | null | transformers | 29,779 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: barthez-deft-linguistique
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 41.989
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-linguistique
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)
It achieves the following results on the evaluation set:
- Loss: 1.7596
- Rouge1: 41.989
- Rouge2: 22.4524
- Rougel: 32.7966
- Rougelsum: 32.7953
- Gen Len: 22.1549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.0569 | 1.0 | 108 | 2.0282 | 31.6993 | 14.9483 | 25.5565 | 25.4379 | 18.3803 |
| 2.2892 | 2.0 | 216 | 1.8553 | 35.2563 | 18.019 | 28.3135 | 28.2927 | 18.507 |
| 1.9062 | 3.0 | 324 | 1.7696 | 37.4613 | 18.1488 | 28.9959 | 29.0134 | 19.5352 |
| 1.716 | 4.0 | 432 | 1.7641 | 37.6903 | 18.7496 | 30.1097 | 30.1027 | 18.9577 |
| 1.5722 | 5.0 | 540 | 1.7781 | 38.1013 | 19.8291 | 29.8142 | 29.802 | 19.169 |
| 1.4655 | 6.0 | 648 | 1.7661 | 38.3557 | 20.3309 | 30.5068 | 30.4728 | 19.3662 |
| 1.3507 | 7.0 | 756 | 1.7596 | 39.7409 | 20.2998 | 31.0849 | 31.1152 | 19.3944 |
| 1.2874 | 8.0 | 864 | 1.7706 | 37.7846 | 20.3457 | 30.6826 | 30.6321 | 19.4789 |
| 1.2641 | 9.0 | 972 | 1.7848 | 38.7421 | 19.5701 | 30.5798 | 30.6305 | 19.3944 |
| 1.1192 | 10.0 | 1080 | 1.8008 | 40.3313 | 20.3378 | 31.8325 | 31.8648 | 19.5493 |
| 1.0724 | 11.0 | 1188 | 1.8450 | 38.9612 | 20.5719 | 31.4496 | 31.3144 | 19.8592 |
| 1.0077 | 12.0 | 1296 | 1.8364 | 36.5997 | 18.46 | 29.1808 | 29.1705 | 19.7324 |
| 0.9362 | 13.0 | 1404 | 1.8677 | 38.0371 | 19.2321 | 30.3893 | 30.3926 | 19.6338 |
| 0.8868 | 14.0 | 1512 | 1.9154 | 36.4737 | 18.5314 | 29.325 | 29.3634 | 19.6479 |
| 0.8335 | 15.0 | 1620 | 1.9344 | 35.7583 | 18.0687 | 27.9666 | 27.8675 | 19.8028 |
| 0.8305 | 16.0 | 1728 | 1.9556 | 37.2137 | 18.2199 | 29.5959 | 29.5799 | 19.9577 |
| 0.8057 | 17.0 | 1836 | 1.9793 | 36.6834 | 17.8505 | 28.6701 | 28.7145 | 19.7324 |
| 0.7869 | 18.0 | 1944 | 1.9994 | 37.5918 | 19.1984 | 28.8569 | 28.8278 | 19.7606 |
| 0.7549 | 19.0 | 2052 | 2.0117 | 37.3278 | 18.5169 | 28.778 | 28.7737 | 19.8028 |
| 0.7497 | 20.0 | 2160 | 2.0189 | 37.7513 | 19.1813 | 29.3675 | 29.402 | 19.6901 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
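### Usage (sketch)
No usage example is provided in the card, so the following is a minimal, hedged sketch with the summarization pipeline; the input text is a placeholder and the generation lengths are illustrative, not the settings used for evaluation.
```python
from transformers import pipeline

# Hedged usage sketch; the input string is a placeholder, not a DEFT document.
summarizer = pipeline("summarization", model="jogonba2/barthez-deft-linguistique")
text = "Texte scientifique en français à résumer."
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])
```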
|
jogonba2/barthez-deft-sciences_de_l_information | a4e92bdcc102c87d28e1ed1e0d12d13c47735aa1 | 2022-04-14T14:04:56.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jogonba2 | null | jogonba2/barthez-deft-sciences_de_l_information | 1 | null | transformers | 29,780 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: barthez-deft-sciences_de_l_information
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 34.5672
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-sciences_de_l_information
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)
It achieves the following results on the evaluation set:
- Loss: 2.0258
- Rouge1: 34.5672
- Rouge2: 16.7861
- Rougel: 27.5573
- Rougelsum: 27.6099
- Gen Len: 17.8857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.3405 | 1.0 | 106 | 2.3682 | 31.3511 | 12.1973 | 25.6977 | 25.6851 | 14.9714 |
| 2.4219 | 2.0 | 212 | 2.1891 | 30.1154 | 13.3459 | 25.4854 | 25.5403 | 14.0429 |
| 2.0789 | 3.0 | 318 | 2.0994 | 32.153 | 15.3865 | 26.1859 | 26.1672 | 15.2 |
| 1.869 | 4.0 | 424 | 2.0258 | 34.5797 | 16.4194 | 27.6909 | 27.7201 | 16.9857 |
| 1.6569 | 5.0 | 530 | 2.0417 | 34.3854 | 16.5237 | 28.7036 | 28.8258 | 15.2429 |
| 1.5414 | 6.0 | 636 | 2.0503 | 33.1768 | 15.4851 | 27.2818 | 27.2884 | 16.0143 |
| 1.4461 | 7.0 | 742 | 2.0293 | 35.4273 | 16.118 | 27.3622 | 27.393 | 16.6857 |
| 1.3435 | 8.0 | 848 | 2.0336 | 35.3471 | 15.9695 | 27.668 | 27.6749 | 17.2 |
| 1.2624 | 9.0 | 954 | 2.0779 | 35.9201 | 17.2547 | 27.409 | 27.3293 | 17.1857 |
| 1.1807 | 10.0 | 1060 | 2.1301 | 35.7061 | 15.9138 | 27.3968 | 27.4716 | 17.1286 |
| 1.0972 | 11.0 | 1166 | 2.1726 | 34.3194 | 16.1313 | 27.0367 | 27.0737 | 17.1429 |
| 1.0224 | 12.0 | 1272 | 2.1704 | 34.9278 | 16.7958 | 27.8754 | 27.932 | 16.6571 |
| 1.0181 | 13.0 | 1378 | 2.2458 | 34.472 | 15.9111 | 28.2938 | 28.2946 | 16.7571 |
| 0.9769 | 14.0 | 1484 | 2.3405 | 35.1592 | 16.3135 | 29.0956 | 29.0858 | 16.5429 |
| 0.8866 | 15.0 | 1590 | 2.3303 | 34.8732 | 15.6709 | 27.5858 | 27.6169 | 16.2429 |
| 0.8888 | 16.0 | 1696 | 2.2976 | 35.3034 | 16.8011 | 27.7988 | 27.7569 | 17.5143 |
| 0.8358 | 17.0 | 1802 | 2.3349 | 35.505 | 16.8851 | 28.3651 | 28.413 | 16.8143 |
| 0.8026 | 18.0 | 1908 | 2.3738 | 35.2328 | 17.0358 | 28.544 | 28.6211 | 16.6143 |
| 0.7487 | 19.0 | 2014 | 2.4103 | 34.0793 | 15.4468 | 27.8057 | 27.8586 | 16.7286 |
| 0.7722 | 20.0 | 2120 | 2.3991 | 34.8116 | 15.8706 | 27.9173 | 27.983 | 16.9286 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
jollmimmim/DialoGPT-small-monkeydluffy | 5e4741f459ce1dd65cf2f7f664ff6bc071cb8d7b | 2021-09-04T03:41:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jollmimmim | null | jollmimmim/DialoGPT-small-monkeydluffy | 1 | null | transformers | 29,781 | ---
tags:
- conversational
---
# Monkey D Luffy DialoGPT Model |
jonasurth/T5Sum | 81135c59bdbe22472b2a60dd88b3a282cc42b493 | 2021-06-23T12:27:45.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jonasurth | null | jonasurth/T5Sum | 1 | null | transformers | 29,782 | Entry not found |
joseangelatm/PanamianModel | 2efd76604c17f3d851b32267c7274b534020b175 | 2021-05-20T17:22:40.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | joseangelatm | null | joseangelatm/PanamianModel | 1 | null | transformers | 29,783 | Entry not found |
joseangelatm/spanishpanama | 07d5cd9f01cb6bc7277363fcccbbba51534bb24e | 2021-05-20T17:25:08.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | joseangelatm | null | joseangelatm/spanishpanama | 1 | null | transformers | 29,784 | Entry not found |
joykirat/bert-base-uncased-finetuned-swag | 36c9f889a23b7eb1fef5c43d6c46193c10b83a54 | 2022-02-06T11:11:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | joykirat | null | joykirat/bert-base-uncased-finetuned-swag | 1 | null | transformers | 29,785 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
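### Usage (sketch)
No usage example is included, so below is a hedged sketch of SWAG-style multiple-choice inference with this checkpoint; the context and candidate endings are invented placeholders.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "joykirat/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# Placeholder SWAG-style example: the context is paired with each candidate ending.
context = "The chef places the pan on the stove."
endings = ["He turns on the heat.", "He throws the pan away."]
enc = tokenizer([context] * len(endings), endings,
                return_tensors="pt", padding=True, truncation=True)
# The model expects inputs of shape (batch_size, num_choices, seq_len).
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**enc).logits
print("Predicted ending:", endings[logits.argmax(-1).item()])
```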
|
jppaolim/homerGPT2L | 0c599c277c64ffe464f25468589ead861b98a5f0 | 2021-05-23T06:09:00.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/homerGPT2L | 1 | null | transformers | 29,786 | Second model for storytelling
|
js-rockstar/urdu-colab | 5dec3dae0d12bb11a764e47f1de3edc47cdc00cc | 2021-12-13T05:28:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | js-rockstar | null | js-rockstar/urdu-colab | 1 | null | transformers | 29,787 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: urdu-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jsm33/d4-adaptation | 697cfd41ab88bd4f46fb0460942ad1a15b8d1b88 | 2021-05-30T18:31:03.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jsm33 | null | jsm33/d4-adaptation | 1 | null | transformers | 29,788 | Entry not found |
jsm33/irony-classifier | cf2831787ce92d552ad530000a190a9286b5629f | 2021-05-20T17:26:40.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jsm33 | null | jsm33/irony-classifier | 1 | null | transformers | 29,789 | Entry not found |
jth1903/DialoGPT-small-rick | 9a2b43c44622af4afa9f823dda95cd65af89b138 | 2021-09-10T19:10:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jth1903 | null | jth1903/DialoGPT-small-rick | 1 | null | transformers | 29,790 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
juierror/wav2vec2-large-xls-r-thai-test | 64437ea88d3bd5b76dde85ccfa76551bcb83d2c8 | 2022-01-02T14:18:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | juierror | null | juierror/wav2vec2-large-xls-r-thai-test | 1 | null | transformers | 29,791 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-thai-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
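For reference, word error rate values like the one above are conventionally computed as in the sketch below, using the `wer` metric from the `datasets` library; the strings are placeholders, not transcripts from the evaluation split.
```python
from datasets import load_metric

# Placeholder transcripts, for illustration of the metric call only.
wer_metric = load_metric("wer")
predictions = ["hello world"]
references = ["hello there world"]
print(wer_metric.compute(predictions=predictions, references=references))
```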
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
julien-c/flair-de-ner | 7f979744b5459a5f289d6d9181ab07bfe84229e9 | 2020-11-26T21:59:38.000Z | [
"pytorch",
"de",
"dataset:conll2003",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | julien-c | null | julien-c/flair-de-ner | 1 | null | flair | 29,792 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
datasets:
- conll2003
inference: false
---
## Flair NER model `de-ner-conll03-v0.4.pt`
Imported from https://nlp.informatik.hu-berlin.de/resources/models/de-ner/
### Demo: How to use in Flair
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence(
"Mein Name ist Julien, ich lebe zurzeit in Paris, ich arbeite bei Hugging Face, Inc."
)
tagger = SequenceTagger.load("julien-c/flair-de-ner")
# predict NER tags
tagger.predict(sentence)
# print sentence with predicted tags
print(sentence.to_tagged_string())
```
yields the following output:
> `Mein Name ist Julien <S-PER> , ich lebe zurzeit in Paris <S-LOC> , ich arbeite bei Hugging <B-ORG> Face <E-ORG> , Inc <S-ORG> .`
### Thanks [@stefan-it](https://huggingface.co/stefan-it) for the Flair integration ❤️ 🔥
|
julien-c/t5-3b-fork2 | fe1af21adebe29e34200f84f593a9bb97a7042ea | 2020-11-20T15:55:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | julien-c | null | julien-c/t5-3b-fork2 | 1 | null | transformers | 29,793 | Entry not found |
juliensimon/dummy-model | 5df8ae0da3410118348abf83342fa58355242525 | 2021-10-07T08:09:33.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | juliensimon | null | juliensimon/dummy-model | 1 | null | transformers | 29,794 | Entry not found |
junnyu/autobert-small-sdconv | db532531b40bc69e9d756bd2f6daa9cca76011da | 2021-08-02T13:47:50.000Z | [
"pytorch",
"autobert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/autobert-small-sdconv | 1 | null | transformers | 29,795 | Entry not found |
junzai/123 | 97474b095c83e3d6c9cac5ed917c92e4d79127e3 | 2022-02-08T09:20:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/123 | 1 | null | transformers | 29,796 | Entry not found |
junzai/222 | ca680969f4eacaffc2bbee0ddb425058b3cb0719 | 2022-02-10T05:07:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/222 | 1 | null | transformers | 29,797 | Entry not found |
junzai/ai12 | 7f7b94925c3a248d7dee8d0dcf5dd28ac3a46188 | 2022-02-10T06:52:42.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/ai12 | 1 | null | transformers | 29,798 | Entry not found |
junzai/bert_finetuning_test1227_hug | f7c8f1de07156aa8f6093f0651c1d294486b9608 | 2021-12-27T05:20:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/bert_finetuning_test1227_hug | 1 | null | transformers | 29,799 | Entry not found |