modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
efederici/sentence-it5-small | c96933e53fe75a9cb17ad32203f0926847d8f077 | 2022-03-29T17:29:14.000Z | [
"pytorch",
"t5",
"it",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | efederici | null | efederici/sentence-it5-small | 4 | null | sentence-transformers | 19,200 | ---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-small
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-small)) small model trained for asymmetric semantic search, where the query is a keyword and the paragraph is a short news article.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-small')
embeddings = model.encode(sentences)
print(embeddings)
```
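Since the model was trained for asymmetric semantic search (keyword query vs. short news paragraph), a minimal retrieval sketch is shown below. The query and corpus strings are made-up toy examples, and `util.cos_sim` is assumed to be available from the installed sentence-transformers package:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('efederici/sentence-IT5-small')

# Keyword-style query and a few short paragraphs (toy Italian examples)
query = "terremoto"
corpus = [
    "Una scossa di terremoto è stata avvertita questa mattina nel centro Italia.",
    "La squadra ha vinto la partita con un gol al novantesimo minuto.",
    "Il governo ha annunciato nuove misure economiche per le famiglie.",
]

# Encode query and corpus, then rank the paragraphs by cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx].item():.3f}  {corpus[idx]}")
```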
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-small')
model = AutoModel.from_pretrained('efederici/sentence-IT5-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augmented | 389b4d7a23f33a7d1c8d6a031403ff95a4d71a0b | 2022-03-28T12:29:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_data_augmented | 4 | null | transformers | 19,201 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augmented
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5016
- Wer: 0.4656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
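For reference, the hyperparameters above map roughly onto a 🤗 `TrainingArguments` configuration like the sketch below. This is an illustrative reconstruction, not the original training script, and the output directory name is a placeholder:
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters (not the original script)
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53_toy_train_data_augmented",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size: 16 (8 x 2)
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,             # Adam betas/epsilon are the library defaults
)
```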
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.418 | 1.05 | 250 | 3.4171 | 1.0 |
| 3.0886 | 2.1 | 500 | 3.4681 | 1.0 |
| 2.9422 | 3.15 | 750 | 2.6151 | 1.0 |
| 1.3195 | 4.2 | 1000 | 0.8789 | 0.7739 |
| 0.9154 | 5.25 | 1250 | 0.6364 | 0.6518 |
| 0.6519 | 6.3 | 1500 | 0.5682 | 0.5949 |
| 0.5622 | 7.35 | 1750 | 0.5273 | 0.5625 |
| 0.4965 | 8.4 | 2000 | 0.4891 | 0.5283 |
| 0.4283 | 9.45 | 2250 | 0.5018 | 0.5260 |
| 0.4019 | 10.5 | 2500 | 0.5016 | 0.5006 |
| 0.3585 | 11.55 | 2750 | 0.5047 | 0.5003 |
| 0.3275 | 12.6 | 3000 | 0.5148 | 0.4866 |
| 0.3427 | 13.65 | 3250 | 0.5035 | 0.4786 |
| 0.3229 | 14.7 | 3500 | 0.4855 | 0.4768 |
| 0.3332 | 15.75 | 3750 | 0.5040 | 0.4769 |
| 0.2861 | 16.81 | 4000 | 0.5138 | 0.4669 |
| 0.3029 | 17.86 | 4250 | 0.5133 | 0.4670 |
| 0.2633 | 18.91 | 4500 | 0.5063 | 0.4637 |
| 0.2621 | 19.96 | 4750 | 0.5016 | 0.4656 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
istassiy/ysda_2022_ml2_hw3_distilbert_base_uncased | 16e63922f0f42affdb6964ac7ee0e5e5285b38b2 | 2022-03-29T02:00:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | istassiy | null | istassiy/ysda_2022_ml2_hw3_distilbert_base_uncased | 4 | null | transformers | 19,202 | Entry not found |
robvanderg/Sem-RemmmBERT | 3aaf8a8348c7d0cb549fc590443bfb169a226339 | 2022-03-28T11:29:41.000Z | [
"pytorch",
"rembert",
"feature-extraction",
"multilingual",
"dataset:SemEval 2022",
"transformers",
"STILT",
"retraining",
"multi-task learning"
] | feature-extraction | false | robvanderg | null | robvanderg/Sem-RemmmBERT | 4 | null | transformers | 19,203 | ---
language:
- multilingual
tags:
- STILT
- retraining
- multi-task learning
datasets:
- SemEval 2022
---
## Sem-RemmmBERT
This is the SemEval MaChAmp Multitask Multilingual BERT model. This model is retrained from remBERT (https://huggingface.co/google/rembert).
The retraining is based on all SemEval 2022 tasks that are text-based and have annotation on the word, sentence or paragraph level. The retraining is done with MaChAmp (https://machamp-nlp.github.io/), a toolkit focusing on multi-task learning for NLP. More information can be found in the paper (which should be released when the SemEval proceedings are online). |
nqcccccc/phobert-uit-absa-qab | 1ccb85fc6f052287dc2a180cb39258e4725347e9 | 2022-04-01T12:45:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | nqcccccc | null | nqcccccc/phobert-uit-absa-qab | 4 | null | transformers | 19,204 | Entry not found |
DrishtiSharma/xls-r-es-test-lm-finetuned-sentiment-mesd | 56dcce7c4a8fb9d3cedd3da2d49cbdbb80f2b911 | 2022-03-28T19:03:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | DrishtiSharma | null | DrishtiSharma/xls-r-es-test-lm-finetuned-sentiment-mesd | 4 | null | transformers | 19,205 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xls-r-es-test-lm-finetuned-sentiment-mesd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-es-test-lm-finetuned-sentiment-mesd
This model is a fine-tuned version of [glob-asr/xls-r-es-test-lm](https://huggingface.co/glob-asr/xls-r-es-test-lm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7851
- Accuracy: 0.2385
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7876 | 0.1923 |
| 1.9709 | 1.86 | 6 | 1.7869 | 0.2 |
| 1.9709 | 2.86 | 9 | 1.7859 | 0.2308 |
| 2.146 | 3.86 | 12 | 1.7851 | 0.2385 |
| 1.9622 | 4.86 | 15 | 1.7842 | 0.1923 |
| 1.9622 | 5.86 | 18 | 1.7834 | 0.1769 |
| 2.137 | 6.86 | 21 | 1.7823 | 0.1923 |
| 2.137 | 7.86 | 24 | 1.7812 | 0.1923 |
| 2.1297 | 8.86 | 27 | 1.7800 | 0.1846 |
| 1.9502 | 9.86 | 30 | 1.7787 | 0.1846 |
| 1.9502 | 10.86 | 33 | 1.7772 | 0.1846 |
| 2.1234 | 11.86 | 36 | 1.7760 | 0.1846 |
| 2.1234 | 12.86 | 39 | 1.7748 | 0.1846 |
| 2.1186 | 13.86 | 42 | 1.7736 | 0.1846 |
| 1.9401 | 14.86 | 45 | 1.7725 | 0.1846 |
| 1.9401 | 15.86 | 48 | 1.7715 | 0.1923 |
| 2.112 | 16.86 | 51 | 1.7706 | 0.1923 |
| 2.112 | 17.86 | 54 | 1.7701 | 0.1923 |
| 2.1094 | 18.86 | 57 | 1.7697 | 0.2 |
| 1.934 | 19.86 | 60 | 1.7696 | 0.2 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | cadc72688129a1a16f33d0ae70da8d2b5855f640 | 2022-05-26T12:57:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Finnish-NLP | null | Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 4 | null | transformers | 19,206 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-300m-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 8.16
- name: Test CER
type: cer
value: 1.97
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
args: fi_fi
metrics:
- name: Test WER
type: wer
value: 17.72
- name: Test CER
type: cer
value: 6.78
---
# Wav2vec2-xls-r-300m for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [aapot/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish-lm) model so that model has just been copied/moved to this `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is fine-tuned version of the pretrained model (300 million parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
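For a quick start, a minimal transcription sketch with the 🤗 `pipeline` API is shown below. The audio file path is a placeholder, and language-model-boosted decoding assumes `pyctcdecode` and `kenlm` are installed alongside `transformers`:
```python
from transformers import pipeline

# LM-boosted decoding requires the pyctcdecode and kenlm packages to be installed
asr = pipeline(
    "automatic-speech-recognition",
    model="Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm",
)

# "audio.wav" is a placeholder path to a 16 kHz Finnish speech file;
# chunk_length_s enables chunked inference for long recordings
result = asr("audio.wav", chunk_length_s=30)
print(result["text"])
```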
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio of similar length. However, you can also try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language, for example everyday spoken language with dialects (especially because Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include maximum length of 20 seconds long audio samples.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
Training script was provided by Hugging Face and it is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs).
This model's training data includes the training splits of Common Voice 7.0, but our newer `Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned` and `Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish` models include Common Voice 9.0, so we ran tests for both Common Voice versions. Note: Common Voice doesn't seem to fully preserve the test split as fixed between dataset versions, so it is possible that some of the training examples of Common Voice 9.0 are in the test split of Common Voice 7.0 and vice versa. Thus, Common Voice test result comparisons are not fully accurate between models trained with different Common Voice versions, but the comparison should still be meaningful enough.
### Common Voice 7.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
### Common Voice 9.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_9_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
### FLEURS ASR testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm --dataset google/fleurs --config fi_fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
21iridescent/RoBERTa-base-finetuned-squad2-lwt | be9067a88599d1011d3d6b7e800857b678691069 | 2022-04-06T10:42:07.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | 21iridescent | null | 21iridescent/RoBERTa-base-finetuned-squad2-lwt | 4 | 1 | transformers | 19,207 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2-lwt
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
#### Finetuned on SQUAD2.0 Dataset
#### F1: 83.738696142672
Trained on a single V100 GPU
Everyone is welcome to use~
Hope you have a nice day
## Performance
- HasAns_exact: 77.1255060728745, HasAns_f1: 83.87812741260885, HasAns_total: 5928
- NoAns_exact: 83.59966358284272, NoAns_f1: 83.59966358284272, NoAns_total: 5945
- best_exact: 80.36721974227238, best_exact_thresh: 0.0
- best_f1: 83.7386961426719, best_f1_thresh: 0.0
- exact: 80.36721974227238
- f1: 83.738696142672
- total: 11873
# roberta-base-finetuned-squad2-lwt
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9441
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.871 | 1.0 | 8239 | 0.8156 |
| 0.6787 | 2.0 | 16478 | 0.8494 |
| 0.4867 | 3.0 | 24717 | 0.9441 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
GioReg/ita1 | 640cf098131d963291227d14dfffed4b750b1191 | 2022-03-30T14:42:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | GioReg | null | GioReg/ita1 | 4 | null | transformers | 19,208 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ita1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ita1
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5892
- Accuracy: 0.776
- F1: 0.5912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
chnaaam/brokorli_sm | ad718a17ce04fa82870a3615670469a44f832c04 | 2022-03-29T12:53:01.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | chnaaam | null | chnaaam/brokorli_sm | 4 | null | transformers | 19,209 | Entry not found |
milmor/t5-small-spanish-nahuatl | 331abcfa17e3128f76f83e746fdfc4d34f0d4f99 | 2022-04-10T18:20:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"nah",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | milmor | null | milmor/t5-small-spanish-nahuatl | 4 | 2 | transformers | 19,210 | ---
license: apache-2.0
language:
- es
- nah
tags:
- translation
widget:
- text: "translate Spanish to Nahuatl: muchas flores son blancas"
---
# t5-small-spanish-nahuatl
## Model description
This model is a T5 Transformer ([t5-small](https://huggingface.co/t5-small)) fine-tuned on 29,007 Spanish and Nahuatl sentences, using 12,890 samples collected from the web and 16,117 samples from the Axolotl dataset.
The dataset is normalized using 'sep' normalization from [py-elotl](https://github.com/ElotlMX/py-elotl).
## Usage
```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('milmor/t5-small-spanish-nahuatl')
tokenizer = AutoTokenizer.from_pretrained('milmor/t5-small-spanish-nahuatl')
model.eval()
sentence = 'muchas flores son blancas'
input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
# outputs = miak xochitl istak
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```
## Evaluation results
The model is evaluated on 400 validation sentences.
- Validation loss: 1.36
_Note: Since the Axolotl corpus contains multiple misalignments, the real Validation loss is slightly better. These misalignments also introduce noise into the training._
## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits
of transfer learning with a unified Text-to-Text transformer.
- Ximena Gutierrez-Vasques, Gerardo Sierra, and Hernandez Isaac. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).
> Created by [Emilio Alejandro Morales](https://huggingface.co/milmor). |
princeton-nlp/CoFi-QNLI-s60 | 1dcc74c72e6675b90508176b8c2e2597088d9b18 | 2022-05-01T01:19:53.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
] | text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-QNLI-s60 | 4 | null | transformers | 19,211 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset QNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
cammiemw/bert-marco-hdct | 504a2cf0a1b7448b157983bc3b556328ea62c71e | 2022-03-30T01:21:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:cc-by-nc-4.0"
] | text-classification | false | cammiemw | null | cammiemw/bert-marco-hdct | 4 | null | transformers | 19,212 | ---
license: cc-by-nc-4.0
---
|
Fredvv/distilbert-base-uncased-finetuned-imdb | 72fb222ac7204f6070dd860d1ae23de13ce56702 | 2022-03-30T02:23:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Fredvv | null | Fredvv/distilbert-base-uncased-finetuned-imdb | 4 | null | transformers | 19,213 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6763 | 1.0 | 313 | 2.4484 |
| 2.5402 | 2.0 | 626 | 2.4312 |
| 2.5194 | 3.0 | 939 | 2.3894 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hoangbinhmta99/wav2vec-NCKH-2022 | f4da2b45c8e0e6e5791b5294da6b86ec02601770 | 2022-03-31T00:28:52.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"vi",
"dataset:vivos",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"Transformer",
"license:cc-by-nc-4.0",
"automatic-speech-recognition",
"model-index"
] | automatic-speech-recognition | false | hoangbinhmta99 | null | hoangbinhmta99/wav2vec-NCKH-2022 | 4 | null | transformers | 19,214 | ---
language: vi
datasets:
- vivos
- common_voice
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- speech
- Transformer
license: cc-by-nc-4.0
model-index:
- name: Wav2vec2 NCKH Vietnamese 2022
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: No
---
Convert a fairseq wav2vec2 model from a `.pt` checkpoint to the Transformers format.
Link: https://huggingface.co/tommy19970714/wav2vec2-base-960h
Bash:
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt -O ./dict/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
  --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small.pt \
  --dict_path ./dict/dict.ltr.txt --not_finetuned
```
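Assuming the conversion above wrote a Transformers checkpoint to `./outputs`, a minimal sanity-check sketch for loading it could look like this (the checkpoint was converted with `--not_finetuned`, so it is loaded as a base `Wav2Vec2Model` rather than a CTC model, and the dummy audio is a placeholder):
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Load the converted checkpoint from the conversion script's output directory
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000)
model = Wav2Vec2Model.from_pretrained("./outputs")

# One second of dummy 16 kHz audio, just to check that a forward pass works
dummy_audio = torch.randn(16000).numpy()
inputs = feature_extractor(dummy_audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```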
# install and upload model
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
git lfs install
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo
ls
cd wav2vec-demo/
git status
git add .
git commit -m "First model version"
git config --global user.email [your email]
git config --global user.name [your name]
git commit -m "First model version"
git push
```
|
aggtamv/Wav2vec2Askisi | c6b2bb6212061dcefd341b489511e729d90f19ed | 2022-06-14T19:11:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aggtamv | null | aggtamv/Wav2vec2Askisi | 4 | null | transformers | 19,215 | |
IIC/roberta-base-bne-bioasq | 25ae3339b1f941f42c7060cbed029ada6748185f | 2022-04-02T15:04:37.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:IIC/bioasq22_es",
"arxiv:2107.07253",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | IIC | null | IIC/roberta-base-bne-bioasq | 4 | null | transformers | 19,216 | ---
language:
- es
tags:
- question-answering # Example: audio
datasets:
- IIC/bioasq22_es
metrics:
- f1
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: roberta-base-bne-bioasq
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: question-answering # Optional. Example: Speech Recognition
dataset:
type: SQAC # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: IIC/bioasq22_es # Required. Example: Common Voice zh-CN
metrics:
- type: f1
value: 18.599535404584742
name: f1
---
This model was trained on the [bioasq22_es](https://huggingface.co/datasets/IIC/bioasq22_es) dataset, provided by [IIC](https://www.iic.uam.es/). It is an automatically translated version of the [bioasq](https://huggingface.co/datasets/kroshan/BioASQ) dataset. As for the model, it is a fine-tuned version of the Spanish version of [MarIA-Roberta](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) trained by [BSC](https://www.bsc.es/).
For training the model, we followed the recommendations of the authors themselves in [their paper](https://arxiv.org/abs/2107.07253), performing a full grid search over the hyperparameter space provided in the paper, and selected the best model based on eval\_loss.
You can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
# The checkpoint is a RoBERTa model, so load it with the Auto classes
tokenizer = AutoTokenizer.from_pretrained("IIC/roberta-base-bne-bioasq")
model = AutoModelForQuestionAnswering.from_pretrained("IIC/roberta-base-bne-bioasq")
question, text = "Quién es el padre de Luke Skywalker?", "En la famosa película, Darth Veider le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre."
inputs = tokenizer(question, text, return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
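The snippet above only computes a loss against dummy start/end positions; to actually extract the predicted answer span, a simpler option is the `question-answering` pipeline, sketched below with the same illustrative question/context pair:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="IIC/roberta-base-bne-bioasq")

question = "Quién es el padre de Luke Skywalker?"
context = (
    "En la famosa película, Darth Veider le dice a Luke Skywalker aquella frase "
    "que todos recordamos: yo soy tu padre."
)

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```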
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
Cheatham/xlm-roberta-large-finetuned-d1-002 | f33379e29c388c8456ae4ffa9fa4d64ad3c21d60 | 2022-03-30T14:45:51.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d1-002 | 4 | null | transformers | 19,217 | Entry not found |
negfir/distilbert-base-uncased-finetuned-stsb | db4387bbdea321837685620d028bf87be9bbd850 | 2022-03-30T18:33:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | negfir | null | negfir/distilbert-base-uncased-finetuned-stsb | 4 | null | transformers | 19,218 | Entry not found |
Cheatham/xlm-roberta-large-finetuned-d12-002 | 66419009fcf4fc33111f3cdb002fda66faaa1f77 | 2022-03-30T16:00:33.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d12-002 | 4 | null | transformers | 19,219 | Entry not found |
Cheatham/xlm-roberta-large-finetuned-d12-003 | 59a14f84e996e7348605de82a660322441911978 | 2022-03-30T17:47:45.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d12-003 | 4 | null | transformers | 19,220 | Entry not found |
sadia-afrin-purba/fake-news-classifier | 1347cb0273b66324094df3fb04840705cdaadf5c | 2022-04-02T10:10:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | sadia-afrin-purba | null | sadia-afrin-purba/fake-news-classifier | 4 | null | transformers | 19,221 | ---
license: mit
---
|
Cheatham/xlm-roberta-large-finetuned-d12-004 | da06a9b9f646559dc002b612ee935902eb94681a | 2022-03-30T18:44:24.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d12-004 | 4 | null | transformers | 19,222 | Entry not found |
midas/roberta-large-inspec-finetuned-crf | 3d2540e8ac97c01560adf08e46e39c149ef8cc0c | 2022-04-03T01:27:33.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | token-classification | false | midas | null | midas/roberta-large-inspec-finetuned-crf | 4 | null | transformers | 19,223 | ---
license: afl-3.0
---
|
DrishtiSharma/poem-gen-spanish-t5-small-test | 1cadb22102292d184273d694ad34ea88ef241e19 | 2022-03-31T03:53:52.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-spanish-t5-small-test | 4 | null | transformers | 19,224 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-test
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8391 | 0.73 | 30000 | 2.9486 |
| 2.6782 | 1.46 | 60000 | 2.8990 |
| 2.5323 | 2.19 | 90000 | 2.9193 |
| 2.5191 | 2.93 | 120000 | 2.8982 |
| 2.4007 | 3.66 | 150000 | 2.9241 |
| 2.2909 | 4.39 | 180000 | 2.9418 |
| 2.1741 | 5.12 | 210000 | 2.9783 |
| 2.1973 | 5.85 | 240000 | 2.9671 |
| 2.0969 | 6.58 | 270000 | 3.0179 |
| 1.9818 | 7.31 | 300000 | 3.0582 |
| 1.8639 | 8.05 | 330000 | 3.0918 |
| 1.8824 | 8.78 | 360000 | 3.1095 |
| 1.7929 | 9.51 | 390000 | 3.1502 |
| 1.7247 | 10.24 | 420000 | 3.1855 |
| 1.7039 | 10.97 | 450000 | 3.1953 |
| 1.6475 | 11.7 | 480000 | 3.2180 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
TheGoldenToaster/DialoGPT-medium-Woody | c12e5f4cfa008cabdbdf63b21e2b8d413e21e803 | 2022-03-31T19:30:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TheGoldenToaster | null | TheGoldenToaster/DialoGPT-medium-Woody | 4 | 1 | transformers | 19,225 | ---
tags:
- conversational
---
# Bot Chat |
abdusahmbzuai/aradia-ctc-hubert-ft | 5b7f930ae872d827cd78ba5a0e2083d723d88c54 | 2022-03-31T20:56:27.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | abdusahmbzuai | null | abdusahmbzuai/aradia-ctc-hubert-ft | 4 | null | transformers | 19,226 | ---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-hubert-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-hubert-ft
This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8536
- Wer: 0.3737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.43 | 100 | 3.6934 | 1.0 |
| No log | 0.87 | 200 | 3.0763 | 1.0 |
| No log | 1.3 | 300 | 2.9737 | 1.0 |
| No log | 1.74 | 400 | 2.5734 | 1.0 |
| 5.0957 | 2.17 | 500 | 1.1900 | 0.9011 |
| 5.0957 | 2.61 | 600 | 0.9726 | 0.7572 |
| 5.0957 | 3.04 | 700 | 0.8960 | 0.6209 |
| 5.0957 | 3.48 | 800 | 0.7851 | 0.5515 |
| 5.0957 | 3.91 | 900 | 0.7271 | 0.5115 |
| 1.0312 | 4.35 | 1000 | 0.7053 | 0.4955 |
| 1.0312 | 4.78 | 1100 | 0.6823 | 0.4737 |
| 1.0312 | 5.22 | 1200 | 0.6768 | 0.4595 |
| 1.0312 | 5.65 | 1300 | 0.6635 | 0.4488 |
| 1.0312 | 6.09 | 1400 | 0.6602 | 0.4390 |
| 0.6815 | 6.52 | 1500 | 0.6464 | 0.4310 |
| 0.6815 | 6.95 | 1600 | 0.6455 | 0.4394 |
| 0.6815 | 7.39 | 1700 | 0.6630 | 0.4312 |
| 0.6815 | 7.82 | 1800 | 0.6521 | 0.4126 |
| 0.6815 | 8.26 | 1900 | 0.6282 | 0.4284 |
| 0.544 | 8.69 | 2000 | 0.6248 | 0.4178 |
| 0.544 | 9.13 | 2100 | 0.6510 | 0.4104 |
| 0.544 | 9.56 | 2200 | 0.6527 | 0.4013 |
| 0.544 | 10.0 | 2300 | 0.6511 | 0.4064 |
| 0.544 | 10.43 | 2400 | 0.6734 | 0.4061 |
| 0.4478 | 10.87 | 2500 | 0.6756 | 0.4145 |
| 0.4478 | 11.3 | 2600 | 0.6727 | 0.3990 |
| 0.4478 | 11.74 | 2700 | 0.6619 | 0.4007 |
| 0.4478 | 12.17 | 2800 | 0.6614 | 0.4019 |
| 0.4478 | 12.61 | 2900 | 0.6695 | 0.4004 |
| 0.3919 | 13.04 | 3000 | 0.6778 | 0.3966 |
| 0.3919 | 13.48 | 3100 | 0.6872 | 0.3971 |
| 0.3919 | 13.91 | 3200 | 0.6882 | 0.3945 |
| 0.3919 | 14.35 | 3300 | 0.7177 | 0.4010 |
| 0.3919 | 14.78 | 3400 | 0.6888 | 0.4043 |
| 0.3767 | 15.22 | 3500 | 0.7124 | 0.4202 |
| 0.3767 | 15.65 | 3600 | 0.7276 | 0.4120 |
| 0.3767 | 16.09 | 3700 | 0.7265 | 0.4034 |
| 0.3767 | 16.52 | 3800 | 0.7392 | 0.4077 |
| 0.3767 | 16.95 | 3900 | 0.7403 | 0.3965 |
| 0.3603 | 17.39 | 4000 | 0.7445 | 0.4016 |
| 0.3603 | 17.82 | 4100 | 0.7579 | 0.4012 |
| 0.3603 | 18.26 | 4200 | 0.7225 | 0.3963 |
| 0.3603 | 18.69 | 4300 | 0.7355 | 0.3951 |
| 0.3603 | 19.13 | 4400 | 0.7482 | 0.3925 |
| 0.3153 | 19.56 | 4500 | 0.7723 | 0.3972 |
| 0.3153 | 20.0 | 4600 | 0.7469 | 0.3898 |
| 0.3153 | 20.43 | 4700 | 0.7800 | 0.3944 |
| 0.3153 | 20.87 | 4800 | 0.7827 | 0.3897 |
| 0.3153 | 21.3 | 4900 | 0.7935 | 0.3914 |
| 0.286 | 21.74 | 5000 | 0.7984 | 0.3750 |
| 0.286 | 22.17 | 5100 | 0.7945 | 0.3830 |
| 0.286 | 22.61 | 5200 | 0.8011 | 0.3775 |
| 0.286 | 23.04 | 5300 | 0.7978 | 0.3824 |
| 0.286 | 23.48 | 5400 | 0.8161 | 0.3833 |
| 0.2615 | 23.91 | 5500 | 0.7823 | 0.3858 |
| 0.2615 | 24.35 | 5600 | 0.8312 | 0.3863 |
| 0.2615 | 24.78 | 5700 | 0.8427 | 0.3819 |
| 0.2615 | 25.22 | 5800 | 0.8432 | 0.3802 |
| 0.2615 | 25.65 | 5900 | 0.8286 | 0.3794 |
| 0.2408 | 26.09 | 6000 | 0.8224 | 0.3824 |
| 0.2408 | 26.52 | 6100 | 0.8228 | 0.3823 |
| 0.2408 | 26.95 | 6200 | 0.8324 | 0.3795 |
| 0.2408 | 27.39 | 6300 | 0.8564 | 0.3744 |
| 0.2408 | 27.82 | 6400 | 0.8629 | 0.3774 |
| 0.2254 | 28.26 | 6500 | 0.8545 | 0.3778 |
| 0.2254 | 28.69 | 6600 | 0.8492 | 0.3767 |
| 0.2254 | 29.13 | 6700 | 0.8511 | 0.3751 |
| 0.2254 | 29.56 | 6800 | 0.8491 | 0.3753 |
| 0.2254 | 30.0 | 6900 | 0.8536 | 0.3737 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
gdwangh/distilbert-base-uncased-finetuned-cola | 9f5f5d7fe2e41777e57221bd66b74095be45cc33 | 2022-04-09T10:39:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gdwangh | null | gdwangh/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 19,227 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5197669430092784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Matthews Correlation: 0.5198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5228 | 1.0 | 535 | 0.5270 | 0.4212 |
| 0.3448 | 2.0 | 1070 | 0.5360 | 0.5073 |
| 0.2305 | 3.0 | 1605 | 0.6532 | 0.5198 |
| 0.1691 | 4.0 | 2140 | 0.7934 | 0.5171 |
| 0.128 | 5.0 | 2675 | 0.8732 | 0.5166 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
xxr/bert-base-uncased-multi-128 | 786be7bd7d38087ac619e87d0ec793ff0c6fb6e1 | 2022-04-01T11:40:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | xxr | null | xxr/bert-base-uncased-multi-128 | 4 | null | transformers | 19,228 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-uncased-multi-128
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-multi-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6636 | 1.0 | 812 | 3.2325 |
| 3.2963 | 2.0 | 1624 | 3.1937 |
| 3.1132 | 3.0 | 2436 | 3.2984 |
| 2.9386 | 4.0 | 3248 | 3.2430 |
| 2.7742 | 5.0 | 4060 | 3.1272 |
| 2.5954 | 6.0 | 4872 | 3.1778 |
| 2.501 | 7.0 | 5684 | 3.1649 |
| 2.4073 | 8.0 | 6496 | 2.9395 |
| 2.2933 | 9.0 | 7308 | 3.1262 |
| 2.2218 | 10.0 | 8120 | 2.9994 |
| 2.1558 | 11.0 | 8932 | 2.9922 |
| 2.0873 | 12.0 | 9744 | 2.8414 |
| 2.0104 | 13.0 | 10556 | 2.9351 |
| 1.9364 | 14.0 | 11368 | 2.9253 |
| 1.9045 | 15.0 | 12180 | 2.8701 |
| 1.9152 | 16.0 | 12992 | 2.7101 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
vicl/canine-c-finetuned-mrpc | 27ed4f69d0376c1404e25083ec50fa5e9b79138f | 2022-04-01T16:33:28.000Z | [
"pytorch",
"tensorboard",
"canine",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vicl | null | vicl/canine-c-finetuned-mrpc | 4 | null | transformers | 19,229 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: canine-c-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.9014084507042254
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-finetuned-mrpc
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4066
- Accuracy: 0.8627
- F1: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.5014 | 0.7696 | 0.8479 |
| No log | 2.0 | 460 | 0.4755 | 0.7892 | 0.8622 |
| 0.5096 | 3.0 | 690 | 0.3645 | 0.8431 | 0.8869 |
| 0.5096 | 4.0 | 920 | 0.4066 | 0.8627 | 0.9014 |
| 0.2619 | 5.0 | 1150 | 0.4551 | 0.8431 | 0.8877 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection | 7475546b92527e434eb83f7dbd8fa170a49b2272 | 2022-04-01T17:17:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | bitsanlp | null | bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection | 4 | null | transformers | 19,230 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-distilbert-fakenews-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilbert-fakenews-detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0125 | 1.0 | 978 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 2.0 | 1956 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 3.0 | 2934 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
erikacardenas300/StartupClassifier | 1c9f9994b86a272d7b882cfebb967bce8ba5bd22 | 2022-04-05T15:23:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Crunchbase",
"transformers"
] | text-classification | false | erikacardenas300 | null | erikacardenas300/StartupClassifier | 4 | 2 | transformers | 19,231 | ---
language: en
datasets:
- Crunchbase
---
# Company Classifier
This fine-tuned DistilBERT model uses company descriptions for classification. The model is tasked with classifying a company as either finance or biotech. The demo can be found on my profile under Spaces (https://huggingface.co/erikacardenas300).
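A quick way to try it is the text-classification pipeline; the company description below is invented, and the returned label names depend on the mapping stored in the model config:
```python
from transformers import pipeline

# Classify an invented company description as finance or biotech.
classifier = pipeline("text-classification", model="erikacardenas300/StartupClassifier")
description = "We develop monoclonal antibody therapies for rare autoimmune diseases."
print(classifier(description))
```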
I hope you enjoy it! |
wuyue1987/distilbert-base-uncased-suggestion-finetuned | 069befec3f126643be31ea55ceed7fd05eab3fa5 | 2022-04-02T00:27:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | wuyue1987 | null | wuyue1987/distilbert-base-uncased-suggestion-finetuned | 4 | 1 | transformers | 19,232 | Entry not found |
DMetaSoul/sbert-chinese-general-v1-distill | 7f20ecf1db8f800b6a1592e6e9de8b5b78d598ab | 2022-04-02T09:39:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
] | sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-general-v1-distill | 4 | null | sentence-transformers | 19,233 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-general-v1-distill
This model is a distilled version (only 4 BERT layers) of our previously open-sourced [general-purpose semantic matching model](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1). It is suited for **general semantic matching** scenarios (it performs well on the Chinese-STS task, but is not optimal on other tasks and carries some risk of overfitting), such as text feature extraction, text vector clustering, and semantic text search.
Serving a large offline-trained model directly for online inference places heavy demands on compute resources and makes it hard to meet the latency, throughput, and other performance requirements of production environments, so we use distillation to make the large model lightweight. After distilling the 12-layer BERT down to 4 layers, the parameter count shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 3% (see the evaluation section below for detailed results).
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract text embeddings with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v1-distill')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model and extract text embeddings with HuggingFace Transformers:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v1-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v1-distill')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
Here we mainly compare against the corresponding teacher model before distillation.
*Performance:*
| | Teacher | Student | Gap |
| ---------- | --------------------- | ------------------- | ----- |
| Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost | 23s | 12s | -47% |
| Latency | 37ms | 20ms | -46% |
| Throughput | 422 sentence/s | 788 sentence/s | 1.8x |
*Accuracy:*
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher** | 84.54% | 82.17% | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% | 51.71% |
| **Student** | 83.39% | 79.96% | 20.25% | 63.39% | 43.70% | 7.54% | 46.91% | 49.28% |
| **Gap** (abs.) | - | - | - | - | - | - | - | -2.43% |
*Tested on 10,000 examples on a V100 GPU with batch_size=16 and max_seq_len=256*
## Citing & Authors
E-mail: [email protected] |
DMetaSoul/sbert-chinese-qmc-domain-v1-distill | 128ae92441c50c879ee0c2170204cc5aa2971129 | 2022-04-02T10:03:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
] | sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-qmc-domain-v1-distill | 4 | null | sentence-transformers | 19,234 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-qmc-domain-v1-distill
This model is a distilled, lightweight version (only 4 BERT layers) of our previously open-sourced [question matching model](https://huggingface.co/DMetaSoul/sbert-chinese-qmc-domain-v1). It is intended for **open-domain question matching** scenarios, for example:
- 洗澡用什么香皂好?vs. 洗澡用什么香皂好
- 大连哪里拍婚纱照好点? vs. 大连哪里拍婚纱照比较好
- 银行卡怎样挂失?vs. 银行卡丢了怎么挂失啊?
Serving a large offline-trained model directly for online inference places heavy demands on compute resources and makes it hard to meet the latency, throughput, and other performance requirements of production environments, so we use distillation to make the large model lightweight. After distilling the 12-layer BERT down to 4 layers, the parameter count shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 4% (see the evaluation section below for detailed results).
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract text embeddings with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-qmc-domain-v1-distill')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model and extract text embeddings with HuggingFace Transformers:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-qmc-domain-v1-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-qmc-domain-v1-distill')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
Here we mainly compare against the corresponding teacher model before distillation.
*Performance:*
| | Teacher | Student | Gap |
| ---------- | --------------------- | ------------------- | ----- |
| Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost | 23s | 12s | -47% |
| Latency | 38ms | 20ms | -47% |
| Throughput | 421 sentence/s | 791 sentence/s | 1.9x |
*Accuracy:*
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher** | 80.90% | 76.62% | 34.51% | 77.05% | 52.95% | 12.97% | 59.47% | 56.35% |
| **Student** | 79.89% | 76.34% | 27.59% | 69.26% | 49.40% | 9.06% | 53.52% | 52.15% |
| **Gap** (abs.) | - | - | - | - | - | - | - | -4.2% |
*Tested on 10,000 examples on a V100 GPU with batch_size=16 and max_seq_len=256*
## Citing & Authors
E-mail: [email protected] |
unjustify/autotrain-commonsense_1-696121179 | 59b9c87e82c4eabbd968e73368b6db3018193be4 | 2022-04-02T13:49:28.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:unjustify/autotrain-data-commonsense_1",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | unjustify | null | unjustify/autotrain-commonsense_1-696121179 | 4 | null | transformers | 19,235 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- unjustify/autotrain-data-commonsense_1
co2_eq_emissions: 4.355285184457145
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 696121179
- CO2 Emissions (in grams): 4.355285184457145
## Validation Metrics
- Loss: 0.34467628598213196
- Accuracy: 0.8544333807491702
- Precision: 0.9014251781472684
- Recall: 0.7721261444557477
- AUC: 0.9422766967397805
- F1: 0.8317808219178082
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-commonsense_1-696121179
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("unjustify/autotrain-commonsense_1-696121179", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-commonsense_1-696121179", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
reichenbach/fake-news-detector-v2 | f714e9a5f03987d8f536770e48e53193f748a812 | 2022-04-03T13:42:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | reichenbach | null | reichenbach/fake-news-detector-v2 | 4 | null | transformers | 19,236 | Entry not found |
Finnish-NLP/t5-small-nl24-finnish | 0f719aff08a49a15642f9c82d9acc19b2a2d98e9 | 2022-07-12T13:19:42.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"transformers",
"finnish",
"t5x",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Finnish-NLP | null | Finnish-NLP/t5-small-nl24-finnish | 4 | null | transformers | 19,237 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-small-nl24 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-small-nl24](https://huggingface.co/google/t5-efficient-small-nl24) architecture's layer depth which means both the encoder and the decoder have 24 transformer layers compared to the original T5 "small" model's architecture of 6 transformer layers.
In total, this model has 260 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, unlike Google's original T5 model, it has to be fine-tuned before it is usable on a downstream task such as text classification. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
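As an illustration of the full-precision recommendation, a fine-tuning run might keep mixed precision disabled along these lines (the hyperparameter values below are placeholders, not settings used for this model):
```python
from transformers import Seq2SeqTrainingArguments

# Placeholder hyperparameters; the point is that mixed precision stays disabled (full fp32).
training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-small-nl24-finnish-finetuned",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    fp16=False,  # keep bf16 off as well for full fp32 fine-tuning
)
```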
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-small-nl24-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-small-nl24-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-small-nl24-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-small-nl24-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the third row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |TBA |TBA |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
arnaudstiegler/long-layoutlm | 746eef181eb61924ed202323dc8b17135bfc1946 | 2022-04-07T20:22:11.000Z | [
"pytorch",
"layoutlm",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | arnaudstiegler | null | arnaudstiegler/long-layoutlm | 4 | null | transformers | 19,238 | Entry not found |
bioformers/bioformer-litcovid | 0434d1edd48e6ae4a6727d778b25496270d6c52d | 2022-04-06T18:58:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | bioformers | null | bioformers/bioformer-litcovid | 4 | null | transformers | 19,239 | [`bioformers/bioformer-cased-v1.0`](https://huggingface.co/bioformers/bioformer-cased-v1.0) pretrained on 164,179 COVID-19 abstracts (from [LitCovid website](https://www.ncbi.nlm.nih.gov/research/coronavirus/)) for 100 epochs.
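Since the checkpoint is a masked language model, one quick way to probe it is the fill-mask pipeline; the sentence below is only illustrative and assumes the standard BERT `[MASK]` token:
```python
from transformers import pipeline

# Probe the COVID-adapted masked language model with an illustrative sentence.
unmasker = pipeline("fill-mask", model="bioformers/bioformer-litcovid")
for prediction in unmasker("COVID-19 is caused by a novel [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```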
In our evaluation, this pretraining process leads to improved performance on the multi-label COVID-19 topic classification task (BioCreative VII track 5). |
emon1521/wav2vec2-base-timit-demo-colab | aa6215900b1937298b498c60f0a8f33bce9ae0a1 | 2022-04-05T07:32:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emon1521 | null | emon1521/wav2vec2-base-timit-demo-colab | 4 | null | transformers | 19,240 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Cheatham/xlm-roberta-large-finetuned-d12-005 | 7b0e5ac23fd219bf3abc4b9c99974f903b61d476 | 2022-04-04T08:24:48.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d12-005 | 4 | null | transformers | 19,241 | Entry not found |
pinku/FatimaFellowship_fake_and_real_news | ea86827d7f21029715d478ad39462995bf62da09 | 2022-04-05T03:22:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:bsd-3-clause"
] | text-classification | false | pinku | null | pinku/FatimaFellowship_fake_and_real_news | 4 | null | transformers | 19,242 | ---
license: bsd-3-clause
---
# Fatima Fellowship NLP Project
## Fake News Classifier
- BERT base model finetuned to classify fake news. |
Chhavnish/distilbert-base-uncased-finetuned-cola | a5744fdd1e8b79d492d848131f8f72f801eb752e | 2022-04-05T06:29:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Chhavnish | null | Chhavnish/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 19,243 | Entry not found |
LIA-AvignonUniversity/IWSLT2022-Niger-Mali | c2f78bdee8e7a1ba4852914b1221d44ed4bc3200 | 2022-05-11T09:31:51.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"arxiv:2201.05051",
"transformers"
] | null | false | LIA-AvignonUniversity | null | LIA-AvignonUniversity/IWSLT2022-Niger-Mali | 4 | null | transformers | 19,244 | ## Model and data descriptions
This is a wav2vec 2.0 base model trained on the Niger-Mali audio collection and on the Tamasheq-French speech corpus. Combined, these contain 111 hours of French, 109 hours of Fulfulde, 100 hours of Hausa, 243 hours of Tamasheq, and 95 hours of Zarma.
These corpora were presented in [Boito et al., 2022](https://arxiv.org/abs/2201.05051).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations.
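A minimal sketch of extracting self-supervised speech representations with this checkpoint, assuming 16 kHz mono audio and that a standard Wav2Vec2 feature-extractor configuration is available for it:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("LIA-AvignonUniversity/IWSLT2022-Niger-Mali")
model = Wav2Vec2Model.from_pretrained("LIA-AvignonUniversity/IWSLT2022-Niger-Mali")

# Placeholder for one second of real 16 kHz mono audio.
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```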
## Referencing our IWSLT models
```
@article{boito2022trac,
title={ON-TRAC Consortium Systems for the IWSLT 2022 Dialect and Low-resource Speech Translation Tasks},
author={Boito, Marcely Zanon and Ortega, John and Riguidel, Hugo and Laurent, Antoine and Barrault, Lo{\"\i}c and Bougares, Fethi and Chaabani, Firas and Nguyen, Ha and Barbier, Florentin and Gahbiche, Souhir and others},
journal={IWSLT},
year={2022}
}
``` |
aswinsson/fake_new_classifier | 47c3850860b38ac2fa69329744792038f017b8bf | 2022-04-04T18:50:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | aswinsson | null | aswinsson/fake_new_classifier | 4 | null | transformers | 19,245 | ---
license: afl-3.0
---
A fake news classifier built using DistilBERT (uncased). It was created for the Fatima Fellowship coding challenge and trained on a P100 instance for 3 epochs. The model is a binary classifier which predicts 1 for real news.
Library: transformers \
Language: English \
Dataset: https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset |
dapang/distilbert-base-uncased-finetuned-moral-ctx-action-conseq | 4cf7a0bdabbf2b3e51699b8f5b7bae83483dd417 | 2022-04-05T02:48:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dapang | null | dapang/distilbert-base-uncased-finetuned-moral-ctx-action-conseq | 4 | null | transformers | 19,246 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-moral-ctx-action-conseq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-moral-ctx-action-conseq
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1111
- Accuracy: 0.9676
- F1: 0.9676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.989502318502869e-05
- train_batch_size: 2000
- eval_batch_size: 2000
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 10 | 0.1569 | 0.9472 | 0.9472 |
| No log | 2.0 | 20 | 0.1171 | 0.9636 | 0.9636 |
| No log | 3.0 | 30 | 0.1164 | 0.9664 | 0.9664 |
| No log | 4.0 | 40 | 0.1117 | 0.9672 | 0.9672 |
| No log | 5.0 | 50 | 0.1111 | 0.9676 | 0.9676 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
|
emon1521/wav2vec2-base | 888039e1f0c3f6edca5a1ed3fd897c794a822cf8 | 2022-04-06T11:39:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emon1521 | null | emon1521/wav2vec2-base | 4 | null | transformers | 19,247 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0808
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.7118 | 0.5 | 500 | 3.0635 | 1.0 |
| 2.9533 | 1.01 | 1000 | 3.0383 | 1.0 |
| 2.9493 | 1.51 | 1500 | 3.0638 | 1.0 |
| 2.9495 | 2.01 | 2000 | 3.0554 | 1.0 |
| 2.9468 | 2.51 | 2500 | 3.0630 | 1.0 |
| 2.9493 | 3.02 | 3000 | 3.0530 | 1.0 |
| 2.9457 | 3.52 | 3500 | 3.0534 | 1.0 |
| 2.9492 | 4.02 | 4000 | 3.0357 | 1.0 |
| 2.9444 | 4.52 | 4500 | 3.0366 | 1.0 |
| 2.9495 | 5.03 | 5000 | 3.0412 | 1.0 |
| 2.9468 | 5.53 | 5500 | 3.0331 | 1.0 |
| 2.9453 | 6.03 | 6000 | 3.0847 | 1.0 |
| 2.9484 | 6.53 | 6500 | 3.0661 | 1.0 |
| 2.9457 | 7.04 | 7000 | 3.0769 | 1.0 |
| 2.9449 | 7.54 | 7500 | 3.0701 | 1.0 |
| 2.9453 | 8.04 | 8000 | 3.1072 | 1.0 |
| 2.9436 | 8.54 | 8500 | 3.1043 | 1.0 |
| 2.9474 | 9.05 | 9000 | 3.0902 | 1.0 |
| 2.9452 | 9.55 | 9500 | 3.0879 | 1.0 |
| 2.9443 | 10.05 | 10000 | 3.1112 | 1.0 |
| 2.9436 | 10.55 | 10500 | 3.0946 | 1.0 |
| 2.9469 | 11.06 | 11000 | 3.0812 | 1.0 |
| 2.9434 | 11.56 | 11500 | 3.1112 | 1.0 |
| 2.9442 | 12.06 | 12000 | 3.0855 | 1.0 |
| 2.9436 | 12.56 | 12500 | 3.0786 | 1.0 |
| 2.9425 | 13.07 | 13000 | 3.0789 | 1.0 |
| 2.9418 | 13.57 | 13500 | 3.0786 | 1.0 |
| 2.9443 | 14.07 | 14000 | 3.0798 | 1.0 |
| 2.9449 | 14.57 | 14500 | 3.0808 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cwkeam/mctc-large | ce4eb45713b5f6873dcbb0e7bf35902c440cccb4 | 2022-05-02T18:00:33.000Z | [
"pytorch",
"mctc",
"transformers"
] | null | false | cwkeam | null | cwkeam/mctc-large | 4 | null | transformers | 19,248 | |
GioReg/AlbertoBertnews | cae8dea366b9b7fd26673557735d738843b3ceb5 | 2022-04-06T12:22:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | GioReg | null | GioReg/AlbertoBertnews | 4 | null | transformers | 19,249 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: AlbertoBertnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AlbertoBertnews
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1382
- Accuracy: 0.9640
- F1: 0.9635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Stremie/bert-base-uncased-clickbait | e4ad5c78d1eeedc1262652c1012ae623cecc38df | 2022-04-18T12:52:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Stremie | null | Stremie/bert-base-uncased-clickbait | 4 | null | transformers | 19,250 | This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText'. Achieved ~0.7 F1-score on test data. |
GioReg/AlbertoBertrecensioni | 8374a776b5792d0e9caa07352e31fcfec0a9ab25 | 2022-04-06T14:02:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | GioReg | null | GioReg/AlbertoBertrecensioni | 4 | null | transformers | 19,251 | ---
tags:
- generated_from_trainer
model-index:
- name: AlbertoBertrecensioni
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AlbertoBertrecensioni
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
skosgi242/bert-large-uncased-whole-word-masking-lyrics | 66c1a6f94bbbceba96808034c93563f9bb5b8b41 | 2022-05-07T12:28:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | skosgi242 | null | skosgi242/bert-large-uncased-whole-word-masking-lyrics | 4 | null | transformers | 19,252 | Entry not found |
LACAI/roberta-large-PFG-donation-detection | 91799e5e470e7c033b7442785d1d8cb451910ac1 | 2022-04-18T02:19:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | LACAI | null | LACAI/roberta-large-PFG-donation-detection | 4 | null | transformers | 19,253 | ---
license: mit
---
Base model: [roberta-large](https://huggingface.co/roberta-large)
Fine tuned for persuadee donation detection on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019):
Given a complete dialogue from Persuasion For Good, the task is to predict the binary label:
- 0: the persuadee does not intend to donate
- 1: the persuadee intends to donate
Only persuadee utterances are input to the model for this task - persuader utterances are discarded. Each training example is the concatenation of all persuadee utterances in a single dialogue, each separated by the `</s>` token.
For example:
**Input**: `<s>How are you?</s>Can you tell me more about the charity?</s>...</s>Sure, I'll donate a dollar.</s>...</s>`
**Label**: 1
**Input**: `<s>How are you?</s>Can you tell me more about the charity?</s>...</s>I am not interested.</s>...</s>`
**Label**: 0
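A sketch of scoring a new dialogue in this format (the persuadee turns below are invented; the `</s>` separator is taken from the description above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LACAI/roberta-large-PFG-donation-detection")
model = AutoModelForSequenceClassification.from_pretrained("LACAI/roberta-large-PFG-donation-detection")

# Invented persuadee turns from one dialogue, joined with the </s> separator.
persuadee_utterances = [
    "How are you?",
    "Can you tell me more about the charity?",
    "Sure, I'll donate a dollar.",
]
text = tokenizer.sep_token.join(persuadee_utterances)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    prediction = model(**inputs).logits.argmax(dim=-1).item()
print(prediction)  # 1 = intends to donate, 0 = does not
```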
The following Dialogues were excluded:
- 146 dialogues where a donation of 0 was made at the end of the task but a non-zero amount was pledged by the persuadee in the dialogue, per the following regular expression: `(?:\$(?:0\.)?[1-9]|[1-9][.0-9]*?(?: ?\$| dollars?| cents?))`
Data Info:
- **Training set**: 587 dialogues, using actual end-task donations as labels
- **Validation set**: 141 dialogues, using manual donation intention labels from Persuasion For Good 'AnnSet'
- **Test set**: 143 dialogues, using manual donation intention labels from Persuasion For Good 'AnnSet'
Training Info:
- **Loss**: CrossEntropy with class weights: 1.5447 (class 0) and 0.7393 (class 1). These weights were derived from the training split; a minimal sketch of this weighting appears after this list.
- **Early Stopping**: The checkpoint with the highest validation macro f1 was selected. This occurred at step 35 (see training metrics for more detail).
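The class weighting mentioned above amounts to something like the following PyTorch snippet (a sketch, not the exact training code; the tensors are placeholders):
```python
import torch
import torch.nn as nn

# Class weights from the training split: class 0 (no donation intention) is upweighted.
class_weights = torch.tensor([1.5447, 0.7393])
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2)           # placeholder model outputs
labels = torch.tensor([0, 1, 1, 0])  # placeholder gold labels
print(loss_fn(logits, labels))
```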
Testing Info:
- **Test Macro F1**: 0.893
- **Test Accuracy**: 0.902 |
Stremie/xlm-roberta-base-clickbait | 601b119e96a4761ded895598c570f40fadfb9924 | 2022-04-18T12:51:20.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Stremie | null | Stremie/xlm-roberta-base-clickbait | 4 | null | transformers | 19,254 | This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText'. Achieved ~0.7 F1-score on test data. |
Danni/distilbert-base-uncased-finetuned-cola | 3d59da0f1e2c74e099bbd1163827a8558b61e5ee | 2022-04-13T07:28:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Danni | null | Danni/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 19,255 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.44113488112476795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
- Matthews Correlation: 0.4411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5282 | 1.0 | 535 | 0.4994 | 0.4411 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pitspits/xlm-roberta-base-finetuned-panx-de | 11a567213b9fec5b37cbb42a68b920a61ab59524 | 2022-04-13T14:12:42.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | pitspits | null | pitspits/xlm-roberta-base-finetuned-panx-de | 4 | null | transformers | 19,256 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8651268890789849
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1398
- F1: 0.8651
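A usage sketch with the token-classification pipeline (the German sentence is invented; entity labels follow whatever mapping is stored in the model config):
```python
from transformers import pipeline

# German NER with the PAN-X fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="pitspits/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Hauptquartier von Siemens in München."))
```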
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2615 | 1.0 | 525 | 0.1515 | 0.8253 |
| 0.1285 | 2.0 | 1050 | 0.1423 | 0.8490 |
| 0.0803 | 3.0 | 1575 | 0.1398 | 0.8651 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Bistolero/du_ge_all_cut | 9035eb4b36464ea4601bf5e09bc18ab99159c4c8 | 2022-04-07T01:30:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/du_ge_all_cut | 4 | null | transformers | 19,257 | Entry not found |
shubh024/autotrain-intentclassificationfilipino-715021714 | f402226d832f43e054a4efbc43366c205002f407 | 2022-04-07T07:38:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:shubh024/autotrain-data-intentclassificationfilipino",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | shubh024 | null | shubh024/autotrain-intentclassificationfilipino-715021714 | 4 | null | transformers | 19,258 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- shubh024/autotrain-data-intentclassificationfilipino
co2_eq_emissions: 0.003341516495672918
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 715021714
- CO2 Emissions (in grams): 0.003341516495672918
## Validation Metrics
- Loss: 0.5571377873420715
- Accuracy: 0.8
- Macro F1: 0.6709090909090909
- Micro F1: 0.8000000000000002
- Weighted F1: 0.7739393939393939
- Macro Precision: 0.7
- Micro Precision: 0.8
- Weighted Precision: 0.8
- Macro Recall: 0.7
- Micro Recall: 0.8
- Weighted Recall: 0.8
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/shubh024/autotrain-intentclassificationfilipino-715021714
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("shubh024/autotrain-intentclassificationfilipino-715021714", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("shubh024/autotrain-intentclassificationfilipino-715021714", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
danjohnvelasco/roberta-tagalog-base-cohfie-v1 | 0ee58ced248b5c461946444980d47efb62526b4d | 2022-04-09T09:27:08.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"tl",
"arxiv:2204.03251",
"transformers",
"tagalog",
"filipino",
"license:cc-by-sa-4.0"
] | feature-extraction | false | danjohnvelasco | null | danjohnvelasco/roberta-tagalog-base-cohfie-v1 | 4 | null | transformers | 19,259 | ---
language: tl
tags:
- roberta
- tagalog
- filipino
license: cc-by-sa-4.0
inference: false
---
# RoBERTa Tagalog Base (finetuned on COHFIE)
We finetuned [RoBERTa Tagalog Base](https://huggingface.co/jcblaise/roberta-tagalog-base) on a subset of the Corpus of Historical Filipino and Philippine English (COHFIE), which contains pure Filipino and code-switching (i.e. Tagalog-English) sentences. All model details, training setups, and corpus details can be found in this paper: [Automatic WordNet Construction using Word Sense Induction through Sentence Embeddings](https://arxiv.org/abs/2204.03251).
## Training Data
Sorry, the corpus is not publicly available yet. Stay tuned!
## Intended uses & limitations
The intended use of this model is to adapt the pre-trained model to our target corpus, COHFIE, with the intention of clustering sentences. You can finetune this model on downstream NLP tasks in Filipino. This model may not be safe for use in production since we did not examine it for biases. Please use it with caution.
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("danjohnvelasco/roberta-tagalog-base-cohfie-v1")
model = AutoModel.from_pretrained("danjohnvelasco/roberta-tagalog-base-cohfie-v1")
text = "Replace me with any text in Filipino."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## BibTeX entry and citation info
If you use this model, please cite our work:
```
@misc{https://doi.org/10.48550/arxiv.2204.03251,
doi = {10.48550/ARXIV.2204.03251},
url = {https://arxiv.org/abs/2204.03251},
author = {Velasco, Dan John and Alba, Axel and Pelagio, Trisha Gail and Ramirez, Bryce Anthony and Cruz, Jan Christian Blaise and Cheng, Charibeth},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Automatic WordNet Construction using Word Sense Induction through Sentence Embeddings},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
Shadman-Rohan/results | 2e854ecae191424537f727365d5931c912f41b99 | 2022-04-08T07:33:26.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Shadman-Rohan | null | Shadman-Rohan/results | 4 | null | transformers | 19,260 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 0.8923
- F1: 0.9167
- Precision: 0.8462
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0026 | 1.0 | 1956 | 0.0003 | 0.9552 | 0.9636 | 0.9298 | 1.0 |
| 0.0015 | 2.0 | 3912 | 0.0003 | 0.6688 | 0.7815 | 0.6416 | 0.9996 |
| 0.0011 | 3.0 | 5868 | 0.0002 | 0.8923 | 0.9167 | 0.8462 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Cheatham/xlm-roberta-large-finetuned-d12-006 | 002a165396647160372b8613aa0f68cd4d932962 | 2022-04-08T12:53:16.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d12-006 | 4 | null | transformers | 19,261 | Entry not found |
philschmid/MiniLMv2-L6-H768-sst2 | cb11bf2966e73b91f9a5428fc6e1efaa3f5bf97a | 2022-04-08T13:57:20.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | philschmid | null | philschmid/MiniLMv2-L6-H768-sst2 | 4 | null | transformers | 19,262 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H768-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9426605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H768-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H768-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2013
- Accuracy: 0.9427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4734 | 1.0 | 264 | 0.2046 | 0.9243 |
| 0.2399 | 2.0 | 528 | 0.1912 | 0.9346 |
| 0.1791 | 3.0 | 792 | 0.1943 | 0.9335 |
| 0.1442 | 4.0 | 1056 | 0.2103 | 0.9369 |
| 0.1217 | 5.0 | 1320 | 0.2013 | 0.9427 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Shadman-Rohan/FakevsRealNews | 70c747ccd764032a3e66f293b2a21089f8924fed | 2022-05-05T06:37:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Shadman-Rohan | null | Shadman-Rohan/FakevsRealNews | 4 | null | transformers | 19,263 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: FakevsRealNews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Coding challenge
The challenge involved building a fake news classifier using the Hugging Face library.
This final model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a fake-and-real-news dataset. The dataset is available at https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Model description
Finetuned Distilbert
## Training and evaluation data
The training data was split into train-dev-test in the ratio 80-10-10.
## Training procedure
The title and text of each news story were concatenated to form each datapoint. A model was then fine-tuned to perform single-label classification on each datapoint. The final prediction is the class with the highest probability.
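A sketch of inference following the same recipe (the headline and body below are invented, and which class index corresponds to "real" vs. "fake" is not documented here, so treat the mapping as coming from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Shadman-Rohan/FakevsRealNews")
model = AutoModelForSequenceClassification.from_pretrained("Shadman-Rohan/FakevsRealNews")

# Concatenate the (invented) title and body of a story, as in training.
title = "Scientists discover water on distant exoplanet"
body = "Researchers announced today that spectral analysis revealed water vapour in the planet's atmosphere."
inputs = tokenizer(title + " " + body, return_tensors="pt", truncation=True)

with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)  # class-to-label mapping (fake vs. real) follows the model config
```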
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0503 | 1.0 | 1956 | 0.0025 | 0.9995 | 0.9995 | 0.9995 | 0.9995 |
| 0.001 | 2.0 | 3912 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 3.0 | 5868 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
damlab/GO-language | 06255cf62962aae5d55b19982540a212783ef58b | 2022-04-09T14:28:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:damlab/uniprot",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | damlab | null | damlab/GO-language | 4 | null | transformers | 19,264 | ---
license: mit
datasets:
- damlab/uniprot
metrics:
- accuracy
widget:
- text: 'involved_in GO:0006468 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'
example_title: 'Function'
---
# GO-Language model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
This model was built as a way to encode the Gene Ontology definition of a protein as a vector representation.
It was trained on a collection of gene-ontology terms from model organisms.
Each function was sorted by ID number and combined with its annotation description (i.e. `is_a`, `enables`, `located_in`, etc.).
The text is tokenized such that each annotation description and GO term is its own token.
This is intended to be used as a translation model between PROT-BERT and GO-Language.
That type of translation model will be useful for predicting the function of novel genes.
## Model Description
This model was trained using the damlab/uniprot dataset on the `go` field with 256 token chunks and a 15% mask rate.
## Intended Uses & Limitations
This model is a useful encapsulation of gene ontology functions.
It allows both an exploration of gene-level similarities as well as comparisons between functional terms.
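Since the summary above describes encoding GO definitions as vectors, mean-pooling the hidden states is one simple way to obtain such a vector (a sketch; the pooling choice is an assumption, not necessarily how the authors compare terms):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("damlab/GO-language")
model = AutoModel.from_pretrained("damlab/GO-language")

go_description = "involved_in GO:0006468 involved_in GO:0007165 located_in GO:0042470"
inputs = tokenizer(go_description, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (1, tokens, hidden_size)

# Mean-pool over tokens to get one vector per GO definition.
embedding = token_embeddings.mean(dim=1).squeeze(0)
print(embedding.shape)
```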
## How to use
As this is a BERT-style masked language model, it can be used to determine the most likely token at a masked position.
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="damlab/GO-language")
unmasker("involved_in [MASK] involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372")
[{'score': 0.1040298342704773,
'token': 103,
'token_str': 'GO:0002250',
'sequence': 'involved_in GO:0002250 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.018045395612716675,
'token': 21,
'token_str': 'GO:0005576',
'sequence': 'involved_in GO:0005576 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.015035462565720081,
'token': 50,
'token_str': 'GO:0000139',
'sequence': 'involved_in GO:0000139 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.01181247178465128,
'token': 37,
'token_str': 'GO:0007165',
'sequence': 'involved_in GO:0007165 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'},
{'score': 0.01000668853521347,
'token': 14,
'token_str': 'GO:0005737',
'sequence': 'involved_in GO:0005737 involved_in GO:0007165 located_in GO:0042470 involved_in GO:0070372'}
]
```
## Training Data
The model was trained on the [damlab/uniprot](https://huggingface.co/datasets/damlab/uniprot) dataset starting from a randomly initialized model.
The Gene Ontology functions were sorted (by ID number) along with their annotating terms.
## Training Procedure
### Preprocessing
All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
Training was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5 with 50K warm-up steps and a cosine_with_restarts learning rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset.
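A rough sketch of such a setup using the standard HuggingFace MLM collator (argument values are assumptions where the card is not specific, e.g. "E-5" is read as 1e-5; stopping after 3 non-improving epochs would be added via `EarlyStoppingCallback`):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("damlab/GO-language")

# 15% random masking over the 256-token chunks described above.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="./go-language-mlm",
    learning_rate=1e-5,                       # "E-5" in the card, read here as 1e-5
    warmup_steps=50_000,
    lr_scheduler_type="cosine_with_restarts",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,              # keep the checkpoint with the best held-out loss
)
```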
## BibTeX Entry and Citation Info
[More Information Needed]
|
akanksha-b14/songs_transcription_wav2vec_base2 | 994b64ae25bfa85f98195315a081aa83b24cd2d3 | 2022-04-09T19:24:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | akanksha-b14 | null | akanksha-b14/songs_transcription_wav2vec_base2 | 4 | null | transformers | 19,265 | Entry not found |
linyi/chirowm-large | cdfb145de4087b8cfd5443faaf4357b336b4abd4 | 2022-04-11T03:57:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | linyi | null | linyi/chirowm-large | 4 | null | transformers | 19,266 | Entry not found |
brad1141/baseline_gptv1 | d3ccbcb43c90a5993caba29a2c03ff28284df2c6 | 2022-04-10T13:25:55.000Z | [
"pytorch",
"gpt2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | brad1141 | null | brad1141/baseline_gptv1 | 4 | null | transformers | 19,267 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: baseline_gptv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_gptv1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_300m_fleurs_asr | ca5adeecf32fa6f101ebc6c782fb2d635a917cdf | 2022-04-14T14:49:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_fleurs_asr | 4 | null | transformers | 19,268 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xtreme_s_xlsr_300m_fleurs_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_asr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Cer: 0.3330
- Loss: 1.2864
- Wer: 0.8344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:------:|:---------------:|:------:|
| 4.677 | 0.13 | 1000 | 1.0 | 3.2323 | 1.0 |
| 4.1512 | 0.26 | 2000 | 0.5098 | 1.7858 | 0.9869 |
| 1.119 | 0.39 | 3000 | 0.4412 | 1.6628 | 0.9063 |
| 0.8573 | 0.52 | 4000 | 0.3588 | 1.3440 | 0.9016 |
| 1.0232 | 0.65 | 5000 | 0.3690 | 1.3004 | 0.8775 |
| 0.6328 | 0.78 | 6000 | 0.3354 | 1.2219 | 0.8331 |
| 0.6636 | 0.91 | 7000 | 0.3604 | 1.2839 | 0.8637 |
| 0.6536 | 1.04 | 8000 | 0.3420 | 1.2481 | 0.8504 |
| 0.5002 | 1.17 | 9000 | 0.3518 | 1.2514 | 0.8403 |
| 0.4785 | 1.3 | 10000 | 0.3399 | 1.2409 | 0.8570 |
| 0.517 | 1.43 | 11000 | 0.3599 | 1.3058 | 0.8654 |
| 0.506 | 1.56 | 12000 | 0.3484 | 1.2350 | 0.8441 |
| 0.4013 | 1.69 | 13000 | 0.3327 | 1.1982 | 0.8246 |
| 0.3521 | 1.82 | 14000 | 0.3270 | 1.1653 | 0.8265 |
| 0.4265 | 1.95 | 15000 | 0.3562 | 1.2647 | 0.8564 |
| 0.3949 | 2.08 | 16000 | 0.3490 | 1.2988 | 0.8480 |
| 0.3059 | 2.21 | 17000 | 0.3327 | 1.2332 | 0.8323 |
| 0.3618 | 2.34 | 18000 | 0.3480 | 1.2394 | 0.8517 |
| 0.2567 | 2.47 | 19000 | 0.3365 | 1.2294 | 0.8394 |
| 0.3501 | 2.6 | 20000 | 0.3271 | 1.1853 | 0.8250 |
| 0.2766 | 2.73 | 21000 | 0.3425 | 1.2339 | 0.8443 |
| 0.3396 | 2.86 | 22000 | 0.3501 | 1.2768 | 0.8669 |
| 0.3566 | 2.99 | 23000 | 0.3477 | 1.2648 | 0.8710 |
| 0.3166 | 3.12 | 24000 | 0.3550 | 1.3773 | 0.8641 |
| 0.2388 | 3.25 | 25000 | 0.3301 | 1.2374 | 0.8316 |
| 0.2057 | 3.38 | 26000 | 0.3429 | 1.2846 | 0.8560 |
| 0.2264 | 3.51 | 27000 | 0.3469 | 1.2676 | 0.8542 |
| 0.1998 | 3.64 | 28000 | 0.3531 | 1.3365 | 0.8655 |
| 0.2701 | 3.77 | 29000 | 0.3518 | 1.3124 | 0.8711 |
| 0.18 | 3.9 | 30000 | 0.3498 | 1.3095 | 0.8648 |
| 0.1337 | 4.03 | 31000 | 0.3397 | 1.2941 | 0.8452 |
| 0.162 | 4.16 | 32000 | 0.3320 | 1.2942 | 0.8295 |
| 0.2776 | 4.29 | 33000 | 0.3275 | 1.2690 | 0.8276 |
| 0.1634 | 4.42 | 34000 | 0.3307 | 1.3145 | 0.8331 |
| 0.2172 | 4.54 | 35000 | 0.3334 | 1.3031 | 0.8435 |
| 0.1305 | 4.67 | 36000 | 0.3303 | 1.2768 | 0.8321 |
| 0.1436 | 4.8 | 37000 | 0.3353 | 1.2968 | 0.8416 |
| 0.134 | 4.93 | 38000 | 0.3330 | 1.2864 | 0.8344 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
danhsf/xlm-roberta-base-finetuned-panx-de | 8b066a9306b2c645184e7665214910c6379e2fd0 | 2022-04-10T17:53:31.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | danhsf | null | danhsf/xlm-roberta-base-finetuned-panx-de | 4 | null | transformers | 19,269 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8590909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1380
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
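A minimal usage sketch for this checkpoint as a German NER tagger (the example sentence and aggregation setting are illustrative assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="danhsf/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel besuchte die Technische Universität München."))
```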
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2642 | 1.0 | 525 | 0.1624 | 0.8251 |
| 0.1315 | 2.0 | 1050 | 0.1445 | 0.8508 |
| 0.0832 | 3.0 | 1575 | 0.1380 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
abdusahmbzuai/ft-opensubs-ar-en-marianmt | a01198362382d65f4c2e27aee75abc91819d3aef | 2022-04-10T17:47:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | abdusahmbzuai | null | abdusahmbzuai/ft-opensubs-ar-en-marianmt | 4 | null | transformers | 19,270 | Entry not found |
abdelrahman-alkhodary/distilbert-base-uncased-finetuned-emotion | 1bb9527ca8a893ff23b4a046a5842ba656924c1d | 2022-04-14T14:14:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | abdelrahman-alkhodary | null | abdelrahman-alkhodary/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 19,271 | Entry not found |
ChrisZeng/electra-large-discriminator-nli-efl-hateval | a6f0d7f8518f5a6647ae877e70e0ab610037e054 | 2022-04-11T19:23:26.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | ChrisZeng | null | ChrisZeng/electra-large-discriminator-nli-efl-hateval | 4 | null | transformers | 19,272 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-large-discriminator-nli-efl-hateval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-large-discriminator-nli-efl-hateval
This model is a fine-tuned version of [ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli](https://huggingface.co/ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.798
- F1: 0.7968
- Loss: 0.4166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|
| 0.4175 | 1.0 | 210 | 0.7317 | 0.7305 | 0.4020 |
| 0.3061 | 2.0 | 420 | 0.768 | 0.7675 | 0.3520 |
| 0.2588 | 3.0 | 630 | 0.79 | 0.7888 | 0.3253 |
| 0.234 | 4.0 | 840 | 0.788 | 0.7877 | 0.3373 |
| 0.2116 | 5.0 | 1050 | 0.804 | 0.8033 | 0.3247 |
| 0.1974 | 6.0 | 1260 | 0.793 | 0.7928 | 0.3400 |
| 0.1807 | 7.0 | 1470 | 0.7973 | 0.7969 | 0.3511 |
| 0.1715 | 8.0 | 1680 | 0.7993 | 0.7989 | 0.3496 |
| 0.1577 | 9.0 | 1890 | 0.8043 | 0.8032 | 0.3507 |
| 0.1469 | 10.0 | 2100 | 0.798 | 0.7970 | 0.3604 |
| 0.1394 | 11.0 | 2310 | 0.7967 | 0.7957 | 0.3734 |
| 0.1322 | 12.0 | 2520 | 0.7913 | 0.7906 | 0.3929 |
| 0.1231 | 13.0 | 2730 | 0.795 | 0.7941 | 0.3954 |
| 0.1189 | 14.0 | 2940 | 0.7977 | 0.7963 | 0.3994 |
| 0.1143 | 15.0 | 3150 | 0.7993 | 0.7980 | 0.3995 |
| 0.1083 | 16.0 | 3360 | 0.7927 | 0.7918 | 0.4125 |
| 0.1079 | 17.0 | 3570 | 0.7993 | 0.7979 | 0.4036 |
| 0.1055 | 18.0 | 3780 | 0.7967 | 0.7956 | 0.4121 |
| 0.1006 | 19.0 | 3990 | 0.7973 | 0.7961 | 0.4152 |
| 0.101 | 20.0 | 4200 | 0.798 | 0.7968 | 0.4166 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Jatin-WIAI/kannada_relevance_clf | 5fab527fdbd703a6a93992435002e8e19216f4f4 | 2022-04-11T08:07:53.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Jatin-WIAI | null | Jatin-WIAI/kannada_relevance_clf | 4 | null | transformers | 19,273 | Entry not found |
Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment | 4904ea2889521c31ffcbe63bdfa9d55dcf84e81a | 2022-04-11T15:17:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Short Question Answer Assessment Dataset",
"transformers",
"license:apache-2.0"
] | text-classification | false | Giyaseddin | null | Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment | 4 | null | transformers | 19,274 | ---
license: apache-2.0
language: en
library: transformers
other: distilbert
datasets:
- Short Question Answer Assessment Dataset
---
# DistilBERT base uncased model for Short Question Answer Assessment
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model.
This is a classification model that solves the Short Question Answer Assessment task, fine-tuned from the [pretrained DistilBERT model](https://huggingface.co/distilbert-base-uncased) on the
[Question Answer Assessment dataset](#).
## Intended uses & limitations
This can only be used for questions and answers that are similar to the ones in the dataset of [Banjade et al.](https://aclanthology.org/W16-0520.pdf).
### How to use
You can use this model directly with a text-classification pipeline:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment", return_all_scores=True)
>>> context = "To rescue a child who has fallen down a well, rescue workers fasten him to a rope, the other end of which is then reeled in by a machine. The rope pulls the child straight upward at steady speed."
>>> question = "How does the amount of tension in the rope compare to the downward force of gravity acting on the child?"
>>> ref_answer = "Since the child is being raised straight upward at a constant speed, the net force on the child is zero and all the forces balance. That means that the tension in the rope balances the downward force of gravity."
>>> student_answer = "The tension force is higher than the force of gravity."
>>>
>>> body = " [SEP] ".join([context, question, ref_answer, student_answer])
>>> raw_results = classifier([body])
>>> raw_results
[[{'label': 'LABEL_0', 'score': 0.0004029414849355817},
{'label': 'LABEL_1', 'score': 0.0005476847873069346},
{'label': 'LABEL_2', 'score': 0.998059093952179},
{'label': 'LABEL_3', 'score': 0.0009902542224153876}]]
>>> _LABELS_ID2NAME = {0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}
>>> results = []
>>> for result in raw_results:
for score in result:
results.append([
{_LABELS_ID2NAME[int(score["label"][-1:])]: "%.2f" % score["score"]}
])
>>> results
[[{'correct': '0.00'}],
[{'correct_but_incomplete': '0.00'}],
[{'contradictory': '1.00'}],
[{'incorrect': '0.00'}]]
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
This bias will also affect all fine-tuned versions of this model.
Also, one of the limitations of this model is the input length: longer sequences can lead to wrong predictions, because in the pre-processing phase (after concatenating the input parts) the important student answer might be truncated.
## Pre-training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Fine-tuning data
The annotated dataset consists of 900 students’ short constructed answers and their correctness in the given context. Four qualitative levels of correctness are defined: correct, correct-but-incomplete, contradictory, and incorrect.
## Training procedure
### Preprocessing
In the preprocessing phase, the following parts are concatenated: _question context_, _question_, _reference_answer_, and _student_answer_ using the separator `[SEP]`.
This makes the full text as:
```
[CLS] Context Sentence [SEP] Question Sentence [SEP] Reference Answer Sentence [SEP] Student Answer Sentence [CLS]
```
The data are split according to the following ratio:
- Training set 80%.
- Test set 20%.
Labels are mapped as: `{0: "correct", 1: "correct_but_incomplete", 2: "contradictory", 3: "incorrect"}`
### Fine-tuning
The model was fine-tuned on a GeForce GTX 960M for 20 minutes. The parameters are:
| Parameter | Value |
|:-------------------:|:-----:|
| Learning rate | 5e-5 |
| Weight decay | 0.01 |
| Training batch size | 8 |
| Epochs | 4 |
Here is the scores during the training:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|:----------:|:-------------:|:-----------------:|:----------:|:---------:|:----------:|:--------:|
| 1 | No log | 0.665765 | 0.755330 | 0.743574 | 0.781210 | 0.755330 |
| 2 | 0.932100 | 0.362124 | 0.890355 | 0.889875 | 0.891407 | 0.890355 |
| 3 | 0.364900 | 0.226225 | 0.942132 | 0.941802 | 0.942458 | 0.942132 |
| 4          | 0.176900      | 0.193660          | 0.954315   | 0.954175  | 0.954985   | 0.954315 |
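For reference, a hedged sketch of a fine-tuning setup consistent with the parameters above; the dataset rows below are placeholders, since the annotated Question Answer Assessment data is not publicly linked in this card:
```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# Placeholder rows standing in for the ~900 annotated student answers.
records = [
    {
        "context": "A child is pulled straight up a well shaft at steady speed.",
        "question": "How does the rope tension compare to gravity?",
        "reference_answer": "They balance, so the net force on the child is zero.",
        "student_answer": "The tension force is higher than the force of gravity.",
        "label": 2,  # 0 correct, 1 correct_but_incomplete, 2 contradictory, 3 incorrect
    }
] * 10
dataset = Dataset.from_list(records).train_test_split(test_size=0.2, seed=42)  # 80/20 split

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=4)

def preprocess(example):
    # Join the four parts with " [SEP] ", as described in the Preprocessing section.
    body = " [SEP] ".join(
        [example["context"], example["question"], example["reference_answer"], example["student_answer"]]
    )
    return tokenizer(body, truncation=True)

dataset = dataset.map(preprocess)

args = TrainingArguments(
    output_dir="distilbert-short-answer-assessment",
    learning_rate=5e-5,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    num_train_epochs=4,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
trainer.evaluate()
```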
## Evaluation results
When fine-tuned on the downstream Question Answer Assessment task (4-class classification), this model achieved the following results:
(scores are rounded to three decimal places)
| | precision | recall | f1-score | support |
|:------------------------:|:----------:|:-------:|:--------:|:-------:|
| _correct_ | 0.938 | 0.989 | 0.963 | 366 |
| _correct_but_incomplete_ | 0.975 | 0.922 | 0.948 | 257 |
| _contradictory_ | 0.946 | 0.938 | 0.942 | 113 |
| _incorrect_ | 0.963 | 0.944 | 0.953 | 249 |
| accuracy | - | - | 0.954 | 985 |
| macro avg | 0.956 | 0.948 | 0.952 | 985 |
| weighted avg | 0.955 | 0.954 | 0.954 | 985 |
Confusion matrix:
| Actual \ Predicted | _correct_ | _correct_but_incomplete_ | _contradictory_ | _incorrect_ |
|:------------------------:|:---------:|:------------------------:|:---------------:|:-----------:|
| _correct_ | 362 | 4 | 0 | 0 |
| _correct_but_incomplete_ | 13 | 237 | 0 | 7 |
| _contradictory_ | 4 | 1 | 106 | 2 |
| _incorrect_ | 7 | 1 | 6 | 235 |
The AUC scores are: micro = **0.9695** and macro = **0.9659**.
|
aherzberg/wav2vec2-base-finetuned-ks | ff70e92dfdf8d3dd34a72483f4d9bdf47308e0f9 | 2022-04-11T18:18:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | aherzberg | null | aherzberg/wav2vec2-base-finetuned-ks | 4 | null | transformers | 19,275 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0919
- Accuracy: 0.9828
## Model description
More information needed
## Intended uses & limitations
More information needed
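A minimal usage sketch for keyword spotting with this checkpoint (the audio file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="aherzberg/wav2vec2-base-finetuned-ks")
print(classifier("speech_command.wav", top_k=3))  # e.g. a short 16 kHz clip
```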
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6135 | 1.0 | 399 | 0.5203 | 0.9404 |
| 0.2519 | 2.0 | 798 | 0.1896 | 0.9768 |
| 0.1804 | 3.0 | 1197 | 0.1258 | 0.9771 |
| 0.1751 | 4.0 | 1596 | 0.1047 | 0.9812 |
| 0.1628 | 5.0 | 1995 | 0.0919 | 0.9828 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
arampacha/clip-test | 21857b464cf0639ebe413049ae553b2c053f8900 | 2022-04-11T22:57:54.000Z | [
"pytorch",
"tensorboard",
"clip",
"feature-extraction",
"dataset:arampacha/rsicd",
"transformers",
"generated_from_trainer",
"model-index"
] | feature-extraction | false | arampacha | null | arampacha/clip-test | 4 | 1 | transformers | 19,276 | ---
tags:
- generated_from_trainer
datasets:
- arampacha/rsicd
model-index:
- name: clip-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-test
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the arampacha/rsicd dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hufanyoung/segformer-b0-finetuned-segments-sidewalk-2 | c0e63fbc0ff3a4b19c8a381d5e07df4ecc59a224 | 2022-04-16T02:16:14.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | hufanyoung | null | hufanyoung/segformer-b0-finetuned-segments-sidewalk-2 | 4 | null | transformers | 19,277 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9327
- Mean Iou: 0.0763
- Mean Accuracy: 0.1260
- Overall Accuracy: 0.5923
- Per Category Iou: [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0]
- Per Category Accuracy: [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
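A hedged usage sketch for running this checkpoint on an image (the image path is a placeholder, and default SegFormer preprocessing is assumed):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

extractor = SegformerFeatureExtractor()  # default SegFormer preprocessing (assumption)
model = SegformerForSemanticSegmentation.from_pretrained(
    "hufanyoung/segformer-b0-finetuned-segments-sidewalk-2"
)

image = Image.open("sidewalk_scene.jpg")            # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                 # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]                      # per-pixel class indices
```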
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 3.0624 | 0.03 | 10 | 3.1628 | 0.0726 | 0.1219 | 0.5758 | [nan, 0.0878087898079964, 0.611982872765419, 0.0001999765816897758, 0.006930751650791711, 0.0208104329339671, 0.0, 0.0010631316774049914, 0.0, 0.0, 0.4839157481183621, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39292052415275885, 0.0, 0.0003268797082673576, 0.0011424188270622699, 0.0, 0.0, 0.004317032040472175, 3.142508260307427e-05, 0.0, 0.0, 0.5537894233680722, 0.28184052017073197, 0.015966383939961543, 0.0002995587926924772, 0.0005713078253519804, 0.0035316933149879015, 0.0] | [nan, 0.09656561651317118, 0.9239613003877697, 0.00021265611687132485, 0.007163978434475801, 0.0222089828684614, nan, 0.0010774805715464, 0.0, 0.0, 0.8583517795809614, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.705533848895072, 0.0, 0.00033222625115695, 0.0011495555325644448, 0.0, nan, 0.008061062548807214, 3.244014792707455e-05, 0.0, 0.0, 0.8715627360179777, 0.3828074002074446, 0.01597238073499201, 0.0003298619292210546, 0.0011388100215281895, 0.003805890022240969, 0.0] |
| 2.6259 | 0.05 | 20 | 2.9327 | 0.0763 | 0.1260 | 0.5923 | [nan, 0.15598158400203022, 0.6233750625153907, 0.0037560777123078824, 0.026995519273962765, 0.027599075064035524, 0.0, 0.0010671752114502803, 0.0, 0.0, 0.503652156236298, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.42226922942999406, 0.0, 0.0005751844669974061, 0.0, 0.0, 0.0, 0.015053303500921295, 0.0, 0.0, 0.0, 0.5380260834627074, 0.2004924888392474, 0.07113330974397604, 7.792680075848753e-05, 0.000328515111695138, 0.0025085129486024, 0.0] | [nan, 0.17282441039529764, 0.9228726118961177, 0.00408103876916878, 0.028255152590055656, 0.029544523907019265, nan, 0.0010791707371488259, 0.0, 0.0, 0.8681646650418041, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7122996003019028, 0.0, 0.0005801259615003622, 0.0, 0.0, nan, 0.02304960072549563, 0.0, 0.0, 0.0, 0.9348363685365858, 0.2596289024956107, 0.07122958643730157, 8.48216389425569e-05, 0.0005356047133214773, 0.0026059641588056346, 0.0] |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-12April2022 | 09806b2ccdb739b5f9c38552c965a6e3a5d44156 | 2022-04-12T06:37:05.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | nntadotzip | null | nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-12April2022 | 4 | null | transformers | 19,278 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts-12April2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-12April2022
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 294 | 0.7861 |
| 1.2483 | 2.0 | 588 | 0.6727 |
| 1.2483 | 3.0 | 882 | 0.6500 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
smeoni/nbme-Bio_ClinicalBERT | d1e2c9abfa063cc21d09a13b2ae1c11840b0bcb0 | 2022-04-12T20:02:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/nbme-Bio_ClinicalBERT | 4 | null | transformers | 19,279 | Entry not found |
Xuan-Rui/pet-10-p0 | ac00d7e824820c508ab609fbd00ae75dd998d2d1 | 2022-04-13T05:02:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-10-p0 | 4 | null | transformers | 19,280 | Entry not found |
Xuan-Rui/pet-10-p1 | c60dd38672166df1bc0dad988d21df2eeff7076d | 2022-04-13T05:09:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-10-p1 | 4 | null | transformers | 19,281 | Entry not found |
Xuan-Rui/pet-10-p2 | 94f3d7b77c994b095657aeaf6537ecfb41aceec3 | 2022-04-13T05:15:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-10-p2 | 4 | null | transformers | 19,282 | Entry not found |
Xuan-Rui/pet-10-p3 | bc1ff3ef7cba8b15894a88f5d2eabcc8ad14f5b8 | 2022-04-13T05:22:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-10-p3 | 4 | null | transformers | 19,283 | Entry not found |
Xuan-Rui/pet-10-p4 | 39dcfc2c575b285ad7b786246946fdfd5342b6f4 | 2022-04-13T05:28:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-10-p4 | 4 | null | transformers | 19,284 | Entry not found |
Xuan-Rui/pet-100-p0 | c6b72ab2f4d096f4671ca8105832906d49b56ea2 | 2022-04-13T05:57:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-100-p0 | 4 | null | transformers | 19,285 | Entry not found |
Xuan-Rui/pet-100-p1 | 8b88d4ff8b8faa6458b4b8dc47043197b655ecaa | 2022-04-13T06:03:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-100-p1 | 4 | null | transformers | 19,286 | Entry not found |
Xuan-Rui/pet-100-p2 | b7dddd50c3b45e358cb12967570c3e400bac5c90 | 2022-04-13T06:09:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-100-p2 | 4 | null | transformers | 19,287 | Entry not found |
Xuan-Rui/pet-100-p3 | 0be43ca93c4359de8d5f1acc688a80d2d966f0a6 | 2022-04-13T06:16:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-100-p3 | 4 | null | transformers | 19,288 | Entry not found |
Xuan-Rui/pet-100-p4 | dfcc7067b18781060d97e7f9e44bb21cbce483e6 | 2022-04-13T06:22:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-100-p4 | 4 | null | transformers | 19,289 | Entry not found |
Xuan-Rui/pet-100-all | 2febaa41ff8f5488c240d3505a5514a6f2345149 | 2022-04-13T06:28:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-100-all | 4 | null | transformers | 19,290 | Entry not found |
Xuan-Rui/pet-1000-p0 | 28faeeee3bf1e5ef9a4b41e627cc91eedf75a4ab | 2022-04-13T06:34:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-p0 | 4 | null | transformers | 19,291 | Entry not found |
Xuan-Rui/pet-1000-p1 | 58dbaf3515fff7fc153591857a348062fd02a092 | 2022-04-13T06:40:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-p1 | 4 | null | transformers | 19,292 | Entry not found |
Xuan-Rui/pet-1000-p2 | cf6034b4842b66d59756f9fe247fdb3219680c2a | 2022-04-13T06:46:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-p2 | 4 | null | transformers | 19,293 | Entry not found |
Xuan-Rui/pet-1000-p3 | 9242bc4a18da3083efd456c6d360d9a9dc5d1d98 | 2022-04-13T06:53:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-p3 | 4 | null | transformers | 19,294 | Entry not found |
Xuan-Rui/ipet-10-all | 940707ebe131822912b9d8a44f63a1c0a0c24b46 | 2022-04-13T07:13:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/ipet-10-all | 4 | null | transformers | 19,295 | Entry not found |
Xuan-Rui/ipet-100-all | d099f123b96e9bd417088f1b4c7a0a74f3ddaa62 | 2022-04-13T07:19:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/ipet-100-all | 4 | null | transformers | 19,296 | Entry not found |
Sadhaklal/albert-large-v2-subword-masking-domain-adapted-nbme | 6b6bf9d2071e42c1a811902e84ecb07e27f146a4 | 2022-04-24T06:47:30.000Z | [
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Sadhaklal | null | Sadhaklal/albert-large-v2-subword-masking-domain-adapted-nbme | 4 | null | transformers | 19,297 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: albert-large-v2-subword-masking-domain-adapted-nbme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-large-v2-subword-masking-domain-adapted-nbme
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7821 | 1.0 | 1790 | 1.3868 |
| 1.2825 | 2.0 | 3580 | 1.2248 |
| 1.1817 | 3.0 | 5370 | 1.1712 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
studio-ousia/mluke-base-lite | acc1f9fd333435f37d6dea294fc38f8939edf0c3 | 2022-05-11T08:17:23.000Z | [
"pytorch",
"luke",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | studio-ousia | null | studio-ousia/mluke-base-lite | 4 | null | transformers | 19,298 | Entry not found |
studio-ousia/mluke-large-lite | b7e2c739af9d1da4ac5dc6cfa5052437bd76db8d | 2022-05-11T08:18:07.000Z | [
"pytorch",
"luke",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | studio-ousia | null | studio-ousia/mluke-large-lite | 4 | null | transformers | 19,299 | Entry not found |