modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Daryaflp/roberta-retrained_ru_covid_papers | a4e1a6e45437cc378304871efbd79cb18cef5d36 | 2022-03-29T13:30:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Daryaflp | null | Daryaflp/roberta-retrained_ru_covid_papers | 3 | null | transformers | 22,100 | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained_ru_covid_papers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained_ru_covid_papers
This model is a fine-tuned version of [Daryaflp/roberta-retrained_ru_covid](https://huggingface.co/Daryaflp/roberta-retrained_ru_covid) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9998
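A minimal usage sketch (assuming the standard fill-mask pipeline applies to this checkpoint; the Russian example sentence is made up):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Daryaflp/roberta-retrained_ru_covid_papers")
# RoBERTa-style checkpoints use "<mask>" as the mask token
print(fill_mask("Вакцина против <mask> прошла клинические испытания."))
```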
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
regel-corpus/hunflair-enhancer | be18c76ed2629914835a9ead9ccb3b5278deee11 | 2022-04-12T15:35:04.000Z | [
"pytorch",
"en",
"flair",
"hunflair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | regel-corpus | null | regel-corpus/hunflair-enhancer | 3 | null | flair | 22,101 | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "Isolate an enhancer element located between -89 and -50 bp in PAI-1"
---
## HunFlair model for ENHANCER
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) for enhancer entity.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Enhancer | DNA enhancer region |
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-enhancer")
text = "An upstream activator of the mitogen-activated protein (MAP) kinase pathways was used to isolate an enhancer element located between -89 and -50 bp in PAI-1 promoter that was activated by MEKK-1."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [18,19,20,21,22,23,24,25,26,27,28,29,30]: "enhancer element located between - 89 and - 50 bp in PAI-1 promoter" [− Labels: Enhancer (0.992)]
```
So, the entity "*enhancer element located between - 89 and - 50 bp in PAI-1 promoter*" (labeled as an **Enhancer**) is found in the sentence.
Alternatively, download all models locally and use the `MultiTagger` class to load them jointly.
```python
from flair.models import MultiTagger

# paths to the locally downloaded HunFlair models
model_paths = [
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
]

tagger = MultiTagger.load(model_paths)
tagger.predict(sentence)
```
---
### Cite
Please cite the following paper when using this model.
```
@Article{regel,
author = {Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Schülke, Markus and Seelow, Dominik and Leser, Ulf},
date = {2022},
journaltitle = {Under review},
title = {RegEl corpus: Identifying DNA regulatory elements in the scientific literature},
volume = {-},
groups = {-},
publisher = {-},
}
```
|
regel-corpus/hunflair-tfbs | 83732799ccaac7f70d66c6c1edd529dccf5c2dbb | 2022-04-05T08:55:06.000Z | [
"pytorch",
"en",
"flair",
"hunflair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | regel-corpus | null | regel-corpus/hunflair-tfbs | 3 | null | flair | 22,102 | ---
tags:
- flair
- hunflair
- token-classification
- sequence-tagger-model
language: en
widget:
- text: "It contains a functional GCGGCGGCG Egr-1-binding site"
---
## HunFlair model for Transcription Factor Binding Site (TFBS)
[HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR.md) (biomedical flair) for TFBS entity.
Predicts 1 tag:
| **tag** | **meaning** |
|---------------------------------|-----------|
| Tfbs | DNA region bound by transcription factor |
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# for biomedical-specific tokenization:
# from flair.tokenization import SciSpacyTokenizer
# load tagger
tagger = SequenceTagger.load("regel-corpus/hunflair-tfbs")
text = "We found that Egr-1 specifically binds to the PTEN 5' untranslated region, which contains a functional GCGGCGGCG Egr-1-binding site."
# make example sentence
sentence = Sentence(text)
# for biomedical-specific tokenization:
# sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer())
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [19,20,21]: "GCGGCGGCG Egr-1-binding site" [− Labels: Tfbs (0.9631)]
```
So, the entity "*GCGGCGGCG Egr-1-binding site*" is found in the sentence.
Alternatively, download all models locally and use the `MultiTagger` class to load them jointly.
```python
from flair.models import MultiTagger

# paths to the locally downloaded HunFlair models
model_paths = [
    './models/hunflair-promoter/pytorch_model.bin',
    './models/hunflair-enhancer/pytorch_model.bin',
    './models/hunflair-tfbs/pytorch_model.bin',
]

tagger = MultiTagger.load(model_paths)
tagger.predict(sentence)
```
---
### Cite
Please cite the following paper when using this model.
```
@Article{regel,
author = {Garda, Samuele and Lenihan-Geels, Freyda and Proft, Sebastian and Hochmuth, Stefanie and Schülke, Markus and Seelow, Dominik and Leser, Ulf},
date = {2022},
journaltitle = {Under review},
title = {RegEl corpus: Identifying DNA regulatory elements in the scientific literature},
volume = {-},
groups = {-},
publisher = {-},
}
```
|
mengzhouxia/dummy | a5e938854419c8f764ffe1b9f3772d94e1352712 | 2022-03-29T20:00:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mengzhouxia | null | mengzhouxia/dummy | 3 | null | transformers | 22,103 | Entry not found |
CenIA/distillbert-base-spanish-uncased-finetuned-qa-tar | d85d6bc45bc79f345be4ce1c058daaf027696c83 | 2022-03-30T02:28:28.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-qa-tar | 3 | null | transformers | 22,104 | Entry not found |
IIC/beto-base-cased-bioasq | 828e8d13940b59ef8a42188776294f2821ae57a3 | 2022-04-02T15:04:24.000Z | [
"pytorch",
"bert",
"question-answering",
"es",
"dataset:IIC/bioasq22_es",
"arxiv:2107.07253",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | IIC | null | IIC/beto-base-cased-bioasq | 3 | null | transformers | 22,105 | ---
language:
- es
tags:
- question-answering # Example: audio
datasets:
- IIC/bioasq22_es
metrics:
- f1
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: beto-base-cased-bioasq
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: question-answering # Optional. Example: Speech Recognition
dataset:
type: SQAC # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: IIC/bioasq22_es # Required. Example: Common Voice zh-CN
metrics:
- type: f1
value:
name: f1
---
This model was trained on the [bioasq22_es](https://huggingface.co/datasets/IIC/bioasq22_es) dataset, provided by [IIC](https://www.iic.uam.es/). It is an automatically translated version of the [bioasq](https://huggingface.co/datasets/kroshan/BioASQ) dataset. As for the model, it is a fine-tuned version of [BETO](https://github.com/dccuchile/beto), a Spanish BERT developed at the University of Chile.
For training the model, we followed the recommendations given in [this paper](https://arxiv.org/abs/2107.07253).
You can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("IIC/beto-base-cased-bioasq")
model = AutoModelForQuestionAnswering.from_pretrained("IIC/beto-base-cased-bioasq")
question, text = "Quién es el padre de Luke Skywalker?", "En la famosa película, Darth Vader le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre."
inputs = tokenizer(question, text, return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
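To turn the logits into an answer string, a minimal decoding sketch (reusing `inputs`, `start_scores`, and `end_scores` from above) could look like this:
```python
answer_start = torch.argmax(start_scores)   # most likely start token
answer_end = torch.argmax(end_scores) + 1   # most likely end token (exclusive)
answer = tokenizer.decode(inputs["input_ids"][0][answer_start:answer_end])
print(answer)
```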
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
abdusahmbzuai/aradia-ctc-v2 | f9554d16910a523706a36f8a49b890f7f8906561 | 2022-03-31T00:08:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | abdusahmbzuai | null | abdusahmbzuai/aradia-ctc-v2 | 3 | null | transformers | 22,106 | Entry not found |
sanchit-gandhi/output_dir | 9b6bd964a041b126b754806cbf1ff99dca8bf16b | 2022-04-04T14:13:05.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/output_dir | 3 | null | transformers | 22,107 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: output_dir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5391
- Wer: 1.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3595795069097574e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8561 | 2.24 | 500 | 4.7094 | 1.0737 |
| 4.3008 | 4.48 | 1000 | 4.5391 | 1.6766 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Sakonii/distilgpt2-nepali | c9248aaff7db4e8287bbad9609a0c27dd96e67f0 | 2022-04-03T16:26:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:1911.02116",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Sakonii | null | Sakonii/distilgpt2-nepali | 3 | null | transformers | 22,108 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-nepali
results: []
widget:
- text: "नेपाल र भारतबीच"
example_title: "Example 1"
- text: "प्रधानमन्त्री"
example_title: "Example 2"
- text: "दस वर्ष लामो "
example_title: "Example 3"
- text: "जापानमा आज "
example_title: "Example 4"
- text: "नेपालका धेरैजसो चाडपर्वहरूमध्ये,"
example_title: "Example 5"
---
# distilgpt2-nepali
This model is pre-trained on [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset consisting of over 13 million Nepali text sequences using a Causal language modeling (CLM) objective. Our approach trains a Sentence Piece Model (SPM) for text tokenization similar to [XLM-ROBERTa](https://arxiv.org/abs/1911.02116) and trains [distilgpt2](https://huggingface.co/distilgpt2) for language modeling.
It achieves the following results on the evaluation set:
| Training Loss | Validation Loss | Perplexity
|:-------------:|:---------------:|:----------:|
| 3.3968 | 3.2705 | 26.3245
## Model description
Refer to original [distilgpt2](https://huggingface.co/distilgpt2)
## Intended uses & limitations
This raw model can be used for Nepali text generation and is intended to be fine-tuned on Nepali-language downstream tasks.
Because the language model was trained on text grouped into blocks of 512 tokens, it handles sequences of up to 512 tokens and may not perform satisfactorily on shorter sequences.
## Usage
This model can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(42)
>>> generator = pipeline('text-generation', model='Sakonii/distilgpt2-nepali')
>>> generator("नेपालका धेरैजसो चाडपर्वहरूमध्ये,", max_length=30, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, तिहार र छठपर्व विशेष रूपमा मनाइने भएकाले नेपाली मौलिक पर्व पनि हो । हिन्दू धर्म र संस्कृतिक... काठमाडौं ।'},
{'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, तिहारको मुख्य दिन आज साँझ अस्ताउँदो सूर्यलाई अर्घ्य दिइएको छ । वैदिक विधि...विस्तृतमा पढ्नुस् काठमाडौं । नेपाल चिकित्सक संघका'},
{'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, चाडपर्व, विवाह,... नेपाली काँग्रेसका प्रवक्ता विश्वप्रकाश शर्माले पार्टीभित्र आन्तरिक झगडा हुने निश्चित भएको र गुटबन्दीका कारण चुनावमा हार बेहोर्नु'},
{'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, दशैं नेपालीहरूको मौलिक पर्वका रूपमा मनाउँछन् । नेपालीहरूको दोस्रो महान् पर्व तिहार हो । तिहारले दाजुभाइ तथा दिदीबहिनीहरूको बीचमा प्रगाढ सम्बन्ध स्थापित'},
{'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, माघे संक्रान्ति र माघे संक्रान्तिमा माघे संक्रान्तिमा मात्र नभएर फागुन महिनाभर नै विशेष महत्व रहने गरेको छ । काठमाडौं ।'}]
```
Here is how we can use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilgpt2-nepali')
model = AutoModelForCausalLM.from_pretrained('Sakonii/distilgpt2-nepali')
# prepare input
text = "चाहिएको text यता राख्नु होला।"
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
## Training data
This model is trained on the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) language modeling dataset, which combines the [OSCAR](https://huggingface.co/datasets/oscar) and [cc100](https://huggingface.co/datasets/cc100) datasets with a set of Nepali articles scraped from Wikipedia.
For training the language model, the texts are tokenized using a SentencePiece Model (SPM) with a vocabulary size of 24,576, and the texts are grouped into blocks of 512 tokens.
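As an illustration, the block-grouping step follows the standard pattern from Hugging Face's causal language modeling examples; a sketch (assuming already-tokenized, batched examples) looks like this:
```python
block_size = 512

def group_texts(examples):
    # Concatenate all tokenized texts, then split the stream into fixed-size blocks.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal language modeling, the labels are the inputs themselves.
    result["labels"] = result["input_ids"].copy()
    return result
```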
## Training procedure
The model is trained with the same configuration as the original [distilgpt2](https://huggingface.co/distilgpt2); but with 512 tokens per instance, 12 instances per batch, and around 188.8K training steps.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Perplexity |
|:-------------:|:-----:|:------:|:---------------:|:----------:|
| 3.7645 | 1.0 | 94395 | 3.6291 | 37.6789 |
| 3.5857 | 2.0 | 188790 | 3.4442 | 31.3182 |
| 3.505 | 3.0 | 283185 | 3.3749 | 29.2214 |
| 3.4688 | 4.0 | 377580 | 3.3439 | 28.3294 |
| 3.3968 | 5.0 | 471975 | 3.2705 | 26.3245 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5 | fb552f5820e882858b68e3a2c9f3773ff2c63ef3 | 2022-03-31T02:22:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5 | 3 | null | transformers | 22,109 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-rte-wnli-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-rte-wnli-5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4400
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2253 | 1.0 | 16558 | 0.2346 | 0.9139 |
| 0.1667 | 2.0 | 33116 | 0.2973 | 0.9143 |
| 0.1207 | 3.0 | 49674 | 0.3361 | 0.9203 |
| 0.0553 | 4.0 | 66232 | 0.4400 | 0.9209 |
| 0.033 | 5.0 | 82790 | 0.5175 | 0.9203 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yinde/fatimah_fake_news_bert | cf1bb4a98948f6750f189d30ff652256b1c9af96 | 2022-03-30T22:41:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | yinde | null | yinde/fatimah_fake_news_bert | 3 | 1 | transformers | 22,110 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fatimah_fake_news_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fatimah_fake_news_bert
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the Fake and Real News dataset from Kaggle.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3298 | 0.06 | 200 | 0.0094 | 0.9987 |
| 0.0087 | 0.11 | 400 | 0.0091 | 0.9988 |
| 0.0126 | 0.17 | 600 | 0.0132 | 0.9965 |
| 0.0081 | 0.22 | 800 | 0.0100 | 0.9987 |
| 0.0132 | 0.28 | 1000 | 0.0086 | 0.9990 |
| 0.0131 | 0.33 | 1200 | 0.0070 | 0.9986 |
| 0.0086 | 0.39 | 1400 | 0.0079 | 0.9990 |
| 0.0041 | 0.45 | 1600 | 0.0057 | 0.9991 |
| 0.0069 | 0.5 | 1800 | 0.0083 | 0.9989 |
| 0.0052 | 0.56 | 2000 | 0.0043 | 0.9993 |
| 0.0 | 0.61 | 2200 | 0.0047 | 0.9993 |
| 0.003 | 0.67 | 2400 | 0.0052 | 0.9994 |
| 0.0126 | 0.72 | 2600 | 0.0028 | 0.9997 |
| 0.0047 | 0.78 | 2800 | 0.0018 | 0.9996 |
| 0.0 | 0.84 | 3000 | 0.0027 | 0.9996 |
| 0.0001 | 0.89 | 3200 | 0.0029 | 0.9996 |
| 0.0079 | 0.95 | 3400 | 0.0010 | 0.9998 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
GleamEyeBeast/ascend_with_english | 6b19a552e7d6ea6a3b7848993be5af8eab682efd | 2022-03-30T23:35:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/ascend_with_english | 3 | null | transformers | 22,111 | ---
tags:
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: ascend_with_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend_with_english
This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on the timit_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3049
- Wer: 0.2251
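A minimal inference sketch, assuming a local 16 kHz mono WAV file (the path below is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="GleamEyeBeast/ascend_with_english")
print(asr("sample.wav"))  # hypothetical audio file
```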
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.3524 | 0.3016 |
| 0.4246 | 2.0 | 578 | 0.3132 | 0.2607 |
| 0.4246 | 3.0 | 867 | 0.3044 | 0.2373 |
| 0.2008 | 4.0 | 1156 | 0.3075 | 0.2302 |
| 0.2008 | 5.0 | 1445 | 0.3049 | 0.2251 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DioLiu/distilroberta-base-test1 | 6015ff16f9063e9e46d4d7461424e047f8eddc4a | 2022-04-05T12:33:35.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/distilroberta-base-test1 | 3 | null | transformers | 22,112 | Entry not found |
yaswanth/distilbert-base-uncased_fakenews_identification | 2e1209d7c6797c68cfb2f62c9078e959145ae13d | 2022-04-02T13:18:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | yaswanth | null | yaswanth/distilbert-base-uncased_fakenews_identification | 3 | null | transformers | 22,113 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_fakenews_identification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fakenews_identification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Fake and Real News dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset) from Kaggle.
It achieves the following results on the evaluation set:
- Loss: 0.0059
- Accuracy: 0.999
- F1: 0.9990
## Label Description
LABEL_0 - Fake News
LABEL_1 - Real News
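A minimal inference sketch using these labels (the example headline is made up):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yaswanth/distilbert-base-uncased_fakenews_identification",
)
# LABEL_0 = fake news, LABEL_1 = real news
print(classifier("Breaking: scientists confirm the moon is made of cheese."))
```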
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0014 | 1.0 | 1000 | 0.0208 | 0.9965 | 0.9965 |
| 0.0006 | 2.0 | 2000 | 0.0041 | 0.9994 | 0.9994 |
| 0.0006 | 3.0 | 3000 | 0.0044 | 0.9992 | 0.9993 |
| 0.0 | 4.0 | 4000 | 0.0059 | 0.999 | 0.9990 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AnonymousSub/news_fpdm_models_bert | 2a2811431da85047bb02d155e9813dd4aa3a7be0 | 2022-03-31T08:34:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/news_fpdm_models_bert | 3 | null | transformers | 22,114 | Entry not found |
raquiba/distilbert-base-uncased-finetuned-cola | 09b78c47fa7f5c994576ef8f31d503fbb7228ecc | 2022-04-15T13:34:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | raquiba | null | raquiba/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 22,115 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5285049056800905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6015
- Matthews Correlation: 0.5285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5266 | 1.0 | 535 | 0.5474 | 0.4015 |
| 0.3561 | 2.0 | 1070 | 0.4830 | 0.5214 |
| 0.2416 | 3.0 | 1605 | 0.6015 | 0.5285 |
| 0.1695 | 4.0 | 2140 | 0.7748 | 0.5162 |
| 0.1302 | 5.0 | 2675 | 0.8369 | 0.5268 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
joniponi/multilabel_inpatient_comments_30labels | 351bc8bd73225affdd79545026502cf6e5a58f08 | 2022-03-31T19:44:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments_30labels | 3 | null | transformers | 22,116 | Entry not found |
redwoodresearch/injuriousness-classifier-29apr-baseline | 9ebf70f36e1e53c3d9c321224ab60cc833aa6993 | 2022-03-31T17:24:11.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | redwoodresearch | null | redwoodresearch/injuriousness-classifier-29apr-baseline | 3 | null | transformers | 22,117 | Entry not found |
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-3 | ccda4b4171514a74b9b5d5c8a44754da03c770df | 2022-03-31T21:07:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-rte-wnli-3 | 3 | null | transformers | 22,118 | Entry not found |
redwoodresearch/injuriousness-classifier-29apr-tool-assisted | a90f67582f533e8947aae55c2f2c7e2a1168fd42 | 2022-03-31T18:37:24.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | redwoodresearch | null | redwoodresearch/injuriousness-classifier-29apr-tool-assisted | 3 | null | transformers | 22,119 | Entry not found |
osanseviero/test_model_bertmesh | b0d71f7607cfb63d379442b47a791b656d0b67a9 | 2022-03-31T20:35:05.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | osanseviero | null | osanseviero/test_model_bertmesh | 3 | null | transformers | 22,120 | ---
license: apache-2.0
---
# WellcomeBertMesh
WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([MeSH](https://www.nlm.nih.gov/mesh/meshhome.html)). Although it was developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, namely abstracts of biomedical publications.
# Model description
The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBERT as its pretrained model.
WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which essentially allows the model to attend to different tokens per label when deciding whether each label applies.
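As an illustration of the idea only (not the exact code in `model.py`), a per-label attention head could be sketched like this:
```python
import torch
import torch.nn as nn

class MultiLabelAttention(nn.Module):
    """Sketch of a per-label attention head over token embeddings."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_size))  # one query per MeSH label
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, token_embeddings, attention_mask):
        # token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        scores = torch.einsum("bsh,lh->bls", token_embeddings, self.label_queries)
        scores = scores.masked_fill(attention_mask.unsqueeze(1) == 0, float("-inf"))
        weights = scores.softmax(dim=-1)  # per-label attention over tokens
        label_repr = torch.einsum("bls,bsh->blh", weights, token_embeddings)
        return torch.sigmoid(self.classifier(label_repr).squeeze(-1))  # (batch, num_labels)
```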
We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts of PubMed publications. We use 2016-2019 data for training and 2020-2021 data for testing, which gives us ~2.5M publications for training and 220K for testing, out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.
The model achieves 63% micro f1 with a 0.5 threshold for all labels.
The code for developing the model is open source and can be found at https://github.com/wellcometrust/grants_tagger
# How to use
⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models.
You can use the model straight from the hub but because it contains a custom forward function due to the multilabel attention head you have to pass `trust_remote_code=True`. You can get access to the probabilities for all labels by omitting `return_labels=True`.
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Wellcome/WellcomeBertMesh"
)
model = AutoModel.from_pretrained(
"Wellcome/WellcomeBertMesh",
trust_remote_code=True
)
text = "This grant is about malaria and not about HIV."
inputs = tokenizer([text], padding="max_length")
labels = model(**inputs, return_labels=True)
print(labels)
```
You can inspect the model code if you navigate to the files and see `model.py`. |
benwoodyear/byt5-small-cryptic-crosswords | 1f4fd2dea7f699fe7d0a821e862ef9a34af630ef | 2022-03-31T22:07:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | benwoodyear | null | benwoodyear/byt5-small-cryptic-crosswords | 3 | null | transformers | 22,121 | Entry not found |
scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_low_pass | 21b1eb11c63cb33f1f9cfa3a4b931d84eae22697 | 2022-04-01T11:40:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_low_pass | 3 | null | transformers | 22,122 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_random_low_pass
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_random_low_pass
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6572
- Wer: 0.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0834 | 2.1 | 500 | 3.4478 | 1.0 |
| 1.0735 | 4.2 | 1000 | 0.9113 | 0.7815 |
| 0.5516 | 6.3 | 1500 | 0.7035 | 0.6081 |
| 0.4023 | 8.4 | 2000 | 0.6647 | 0.5649 |
| 0.3423 | 10.5 | 2500 | 0.6613 | 0.5450 |
| 0.2938 | 12.6 | 3000 | 0.6967 | 0.5318 |
| 0.2902 | 14.7 | 3500 | 0.6430 | 0.5089 |
| 0.2372 | 16.81 | 4000 | 0.6653 | 0.5045 |
| 0.2148 | 18.91 | 4500 | 0.6572 | 0.4973 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.0
|
joniponi/multilabel_inpatient_comments_10labels | 6ef754ce10a1e15c2e888acc2f4cc9268528e764 | 2022-04-01T07:23:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments_10labels | 3 | null | transformers | 22,123 | Entry not found |
Timur1984/sbert_large_nlu_ru-finetuned-squad-full | 5df2ff5257ad989378a849ce2c888ae56544ecd6 | 2022-04-07T11:43:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Timur1984 | null | Timur1984/sbert_large_nlu_ru-finetuned-squad-full | 3 | null | transformers | 22,124 | ---
tags:
- generated_from_trainer
model-index:
- name: sbert_large_nlu_ru-finetuned-squad-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large_nlu_ru-finetuned-squad-full
This model is a fine-tuned version of [ruselkomp/sbert_large_nlu_ru-finetuned-squad-full](https://huggingface.co/ruselkomp/sbert_large_nlu_ru-finetuned-squad-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.5747 |
| No log | 2.0 | 34 | 0.6119 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
|
AvengingPrime/Argument_Generation_GPT-2_model | c415a8501a393b2d39e079e01f04496786909c9c | 2022-04-01T13:56:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AvengingPrime | null | AvengingPrime/Argument_Generation_GPT-2_model | 3 | null | transformers | 22,125 | Entry not found |
CenIA/bert-base-spanish-wwm-uncased-finetuned-qa-tar | 963f3aa9f573b2b9c5bec3f523db69e889ef91cd | 2022-04-01T19:53:30.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-qa-tar | 3 | null | transformers | 22,126 | Entry not found |
vicl/canine-c-finetuned-cola | 420ef687b35aaf46bc335209d21d10f15ca9ccba | 2022-04-01T17:38:35.000Z | [
"pytorch",
"tensorboard",
"canine",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vicl | null | vicl/canine-c-finetuned-cola | 3 | null | transformers | 22,127 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: canine-c-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0990441507705203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-finetuned-cola
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6246
- Matthews Correlation: 0.0990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6142 | 1.0 | 535 | 0.6268 | 0.0 |
| 0.607 | 2.0 | 1070 | 0.6234 | 0.0 |
| 0.6104 | 3.0 | 1605 | 0.6226 | 0.0 |
| 0.5725 | 4.0 | 2140 | 0.6246 | 0.0990 |
| 0.5426 | 5.0 | 2675 | 0.6866 | 0.0495 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
birgermoell/psst-libri960_big | 49bf1ae1bd12b98521f4b647d22b01c3ecfd2d57 | 2022-04-01T20:17:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-libri960_big | 3 | null | transformers | 22,128 | pssteval INFO: ASR metrics for split `valid` FER: 9.8% PER: 20.9% |
youssefadarrab/TP_NLP_SNLI_Adarrab_Baziz_Malige | 75ba6c012559001d241904d8d9ddebc508ebc82c | 2022-04-02T00:40:26.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | youssefadarrab | null | youssefadarrab/TP_NLP_SNLI_Adarrab_Baziz_Malige | 3 | null | transformers | 22,129 | # CentraleSupelec - Natural language processing
# Practical session n°7
## Natural Language Inference (NLI):
Natural Language Inference (NLI) is a classical NLP (Natural Language Processing) problem that involves taking two sentences (the *premise* and the *hypothesis*) and deciding how they are related: whether the premise *entails* the hypothesis, *contradicts* it, or *neither*.
Ex:
| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
### Stanford NLI (SNLI) corpus
In this labwork, I propose to use the Stanford NLI (SNLI) corpus ( https://nlp.stanford.edu/projects/snli/ ), available in the *Datasets* library by Huggingface.
```python
from datasets import load_dataset

snli = load_dataset("snli")
# Remove sentence pairs with no label (-1)
snli = snli.filter(lambda example: example['label'] != -1)
```
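For inference with the fine-tuned checkpoint, a minimal sketch (assuming the uploaded model exposes a standard sequence-classification head and the usual SNLI label order of entailment/neutral/contradiction) looks like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "youssefadarrab/TP_NLP_SNLI_Adarrab_Baziz_Malige"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the three SNLI classes
```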
## Quick summary of the model
This is the model from: Youssef Adarrab, Othmane Baziz and Alain Malige
- First we import the corpus and do some visualization
- Second we apply DistilBERT for sequence classification
- We illustrate through our work the code used for training; to obtain better results, one should run the training for more epochs |
AnonymousSub/fpdm_triplet_roberta_FT_newsqa | a0de70c0100fcad6d2bdde65d74a9ffeb05a14e7 | 2022-04-01T21:51:02.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_triplet_roberta_FT_newsqa | 3 | null | transformers | 22,130 | Entry not found |
AnonymousSub/fpdm_hier_roberta_FT_newsqa | 0efbe24cb398a47499e4273209da732a6d0a76d1 | 2022-04-01T21:54:57.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_hier_roberta_FT_newsqa | 3 | null | transformers | 22,131 | Entry not found |
AnonymousSub/fpdm_roberta_FT_newsqa | b22a470f10d39a453b2c26d309948f0dc749aab3 | 2022-04-01T21:58:27.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_roberta_FT_newsqa | 3 | null | transformers | 22,132 | Entry not found |
BigSalmon/Points4 | 5065b407917739aa91ead3a6cf13be37425b65a4 | 2022-04-02T03:04:08.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/Points4 | 3 | null | transformers | 22,133 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points4")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Points4")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
Keywords to sentences or sentence. |
DMetaSoul/sbert-chinese-qmc-finance-v1-distill | 7db2d26cdc795edeef0e56f152fc00743165f85b | 2022-04-02T10:07:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
] | sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-qmc-finance-v1-distill | 3 | null | sentence-transformers | 22,134 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-qmc-finance-v1-distill
This model is a distilled, lightweight version (only 4 BERT layers) of our previously released [financial question-matching model](https://huggingface.co/DMetaSoul/sbert-chinese-qmc-finance-v1). It is intended for **question matching in the financial domain**, for example:
- 8千日利息400元? VS 10000元日利息多少钱
- 提前还款是按全额计息 VS 还款扣款不成功怎么还款?
- 为什么我借钱交易失败 VS 刚申请的借款为什么会失败
Serving a large offline-trained model directly in production places heavy demands on compute resources and makes it hard to meet latency and throughput targets in a business setting, so we use distillation to make the model lightweight. After distilling the 12-layer BERT down to 4 layers, the parameter count shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 5% (see the evaluation section below for details).
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install the package:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embeddings with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["到期不能按时还款怎么办", "剩余欠款还有多少?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-qmc-finance-v1-distill')
embeddings = model.encode(sentences)
print(embeddings)
```
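For question matching, the two embeddings returned above can be compared with cosine similarity; a minimal sketch reusing `embeddings` from the previous snippet:
```python
import numpy as np

a, b = embeddings  # embeddings of the two questions encoded above
score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(score)  # closer to 1.0 means the questions are more likely to match
```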
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["到期不能按时还款怎么办", "剩余欠款还有多少?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-qmc-finance-v1-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-qmc-finance-v1-distill')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
Here the distilled student model is compared against its teacher model (before distillation):
*Performance:*
| | Teacher | Student | Gap |
| ---------- | --------------------- | ------------------- | ----- |
| Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost | 23s | 12s | -47% |
| Latency | 38ms | 20ms | -47% |
| Throughput | 418 sentence/s | 791 sentence/s | 1.9x |
*Accuracy:*
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher** | 77.40% | 74.55% | 36.00% | 75.75% | 73.24% | 11.58% | 54.75% | 57.61% |
| **Student** | 75.02% | 71.99% | 32.40% | 67.06% | 66.35% | 7.57% | 49.26% | 52.80% |
| **Gap** (abs.) | - | - | - | - | - | - | - | -4.81% |
*Benchmarked on 10,000 examples with a V100 GPU, batch_size=16, max_seq_len=256.*
## Citing & Authors
E-mail: [email protected] |
itaihay/wav2vec_asr_swbd_10_epochs | 93a7d678966cd9af30dd5d12f2066feb687dc0d5 | 2022-04-05T19:02:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | itaihay | null | itaihay/wav2vec_asr_swbd_10_epochs | 3 | null | transformers | 22,135 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec_asr_swbd_10_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec_asr_swbd_10_epochs
This model is a fine-tuned version of [facebook/wav2vec2-large-robust-ft-swbd-300h](https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 0.9627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 1.0682 | 0.22 | 5000 | 0.7383 | 0.4431 |
| 0.9143 | 0.44 | 10000 | 0.7182 | 0.4058 |
| 0.8905 | 0.66 | 15000 | 0.6291 | 0.3987 |
| 0.8354 | 0.87 | 20000 | 0.5976 | 0.3954 |
| 0.7749 | 1.09 | 25000 | 0.5773 | 0.3901 |
| 0.7336 | 1.31 | 30000 | 0.5812 | 0.3871 |
| 0.7314 | 1.53 | 35000 | 0.5802 | 0.3895 |
| 0.0 | 1.75 | 40000 | nan | 0.9627 |
| 0.0 | 1.97 | 45000 | nan | 0.9627 |
| 0.0 | 2.19 | 50000 | nan | 0.9627 |
| 0.0 | 2.4 | 55000 | nan | 0.9627 |
| 0.0 | 2.62 | 60000 | nan | 0.9627 |
| 0.0 | 2.84 | 65000 | nan | 0.9627 |
| 0.0 | 3.06 | 70000 | nan | 0.9627 |
| 0.0 | 3.28 | 75000 | nan | 0.9627 |
| 0.0 | 3.5 | 80000 | nan | 0.9627 |
| 0.0 | 3.72 | 85000 | nan | 0.9627 |
| 0.0 | 3.93 | 90000 | nan | 0.9627 |
| 0.0 | 4.15 | 95000 | nan | 0.9627 |
| 0.0 | 4.37 | 100000 | nan | 0.9627 |
| 0.0 | 4.59 | 105000 | nan | 0.9627 |
| 0.0 | 4.81 | 110000 | nan | 0.9627 |
| 0.0 | 5.03 | 115000 | nan | 0.9627 |
| 0.0 | 5.25 | 120000 | nan | 0.9627 |
| 0.0 | 5.46 | 125000 | nan | 0.9627 |
| 0.0 | 5.68 | 130000 | nan | 0.9627 |
| 0.0 | 5.9 | 135000 | nan | 0.9627 |
| 0.0 | 6.12 | 140000 | nan | 0.9627 |
| 0.0 | 6.34 | 145000 | nan | 0.9627 |
| 0.0 | 6.56 | 150000 | nan | 0.9627 |
| 0.0 | 6.78 | 155000 | nan | 0.9627 |
| 0.0 | 7.0 | 160000 | nan | 0.9627 |
| 0.0 | 7.21 | 165000 | nan | 0.9627 |
| 0.0 | 7.43 | 170000 | nan | 0.9627 |
| 0.0 | 7.65 | 175000 | nan | 0.9627 |
| 0.0 | 7.87 | 180000 | nan | 0.9627 |
| 0.0 | 8.09 | 185000 | nan | 0.9627 |
| 0.0 | 8.31 | 190000 | nan | 0.9627 |
| 0.0 | 8.53 | 195000 | nan | 0.9627 |
| 0.0 | 8.74 | 200000 | nan | 0.9627 |
| 0.0 | 8.96 | 205000 | nan | 0.9627 |
| 0.0 | 9.18 | 210000 | nan | 0.9627 |
| 0.0 | 9.4 | 215000 | nan | 0.9627 |
| 0.0 | 9.62 | 220000 | nan | 0.9627 |
| 0.0 | 9.84 | 225000 | nan | 0.9627 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
mainuliitkgp/ROBERTa_fake_news_classification | 32b250db35ee7a3cee6368a15d12d1ea73bb5bbb | 2022-04-02T18:33:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mainuliitkgp | null | mainuliitkgp/ROBERTa_fake_news_classification | 3 | null | transformers | 22,136 | Entry not found |
vocab-transformers/distilbert-mlm-1000k | 8981c8d20ccc34a7886fd2b5a0ad784cda9425ae | 2022-04-02T21:16:58.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vocab-transformers | null | vocab-transformers/distilbert-mlm-1000k | 3 | null | transformers | 22,137 | distilbert-base-uncased trained for 1000K steps with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
|
vicl/distilbert-base-uncased-finetuned-stsb | ae4a58008cb0d7be2d600695670e59cff92d2891 | 2022-04-02T22:24:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vicl | null | vicl/distilbert-base-uncased-finetuned-stsb | 3 | null | transformers | 22,138 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8636303639161342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5644
- Pearson: 0.8666
- Spearmanr: 0.8636
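A minimal scoring sketch (STS-B is a regression task, so the single logit is the predicted similarity on a 0-5 scale; the sentence pair below is made up):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "vicl/distilbert-base-uncased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # higher means more semantically similar
```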
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6366 | 0.8537 | 0.8516 |
| 1.0464 | 2.0 | 720 | 0.6171 | 0.8632 | 0.8626 |
| 0.4002 | 3.0 | 1080 | 0.6082 | 0.8663 | 0.8643 |
| 0.4002 | 4.0 | 1440 | 0.5644 | 0.8666 | 0.8636 |
| 0.2479 | 5.0 | 1800 | 0.5780 | 0.8654 | 0.8624 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
vicl/canine-s-finetuned-cola | e3d65069ca29ae4c5cbf72b8a95fdf8696370330 | 2022-04-02T23:01:51.000Z | [
"pytorch",
"tensorboard",
"canine",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vicl | null | vicl/canine-s-finetuned-cola | 3 | null | transformers | 22,139 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: canine-s-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.059386434587477076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-cola
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6653
- Matthews Correlation: 0.0594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6132 | 1.0 | 535 | 0.6289 | 0.0 |
| 0.6062 | 2.0 | 1070 | 0.6179 | 0.0 |
| 0.6122 | 3.0 | 1605 | 0.6160 | 0.0 |
| 0.5939 | 4.0 | 2140 | 0.6159 | 0.0 |
| 0.5721 | 5.0 | 2675 | 0.6653 | 0.0594 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/clortown-elonmusk-stephencurry30 | e25f9e18ecdef4e7921d3afe34dc1c15dc676d76 | 2022-04-02T23:03:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/clortown-elonmusk-stephencurry30 | 3 | null | transformers | 22,140 | ---
language: en
thumbnail: http://www.huggingtweets.com/clortown-elonmusk-stephencurry30/1648940589601/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488574779351187458/RlIQNUFG_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484233608793518081/tOID8aXq_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & yeosang elf agenda & Stephen Curry</div>
<div style="text-align: center; font-size: 14px;">@clortown-elonmusk-stephencurry30</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & yeosang elf agenda & Stephen Curry.
| Data | Elon Musk | yeosang elf agenda | Stephen Curry |
| --- | --- | --- | --- |
| Tweets downloaded | 221 | 3143 | 3190 |
| Retweets | 7 | 541 | 384 |
| Short tweets | 62 | 463 | 698 |
| Tweets kept | 152 | 2139 | 2108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sqcbnn5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clortown-elonmusk-stephencurry30's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mq1ftjh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mq1ftjh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/clortown-elonmusk-stephencurry30')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jorge-henao/gpt2-small-spanish-disco-poetry-wt | f096b2a55c3f3ff8804ef038df65ea15d042db2e | 2022-04-03T00:04:31.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | jorge-henao | null | jorge-henao/gpt2-small-spanish-disco-poetry-wt | 3 | null | transformers | 22,141 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-disco-poetry-wt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-disco-poetry-wt
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
munozariasjm/writter_bert_hep | a5185f86e54c5d5898f6f898e7b585e5d1ed8ebc | 2022-06-16T00:56:21.000Z | [
"pytorch",
"onnx",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | munozariasjm | null | munozariasjm/writter_bert_hep | 3 | null | transformers | 22,142 | Entry not found |
reichenbach/fake-news-detector | 3dc79ce22619257ccbc4fdf4833f468bcfaff778 | 2022-04-03T12:02:48.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | reichenbach | null | reichenbach/fake-news-detector | 3 | null | transformers | 22,143 | Entry not found |
scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_noise | 8c3c9dd735edd737404b7d210d38af2446fba918 | 2022-04-03T16:23:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_noise | 3 | null | transformers | 22,144 | Entry not found |
alina1997/de_en_translation | 88f39ce5c18a8f6fa2a807a1ae418333ef93d534 | 2022-05-10T19:43:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alina1997 | null | alina1997/de_en_translation | 3 | null | transformers | 22,145 | |
Zarkit/Gpt2ESP-finetuned-p | 957543d4107a7d6d84cee894029b824dd14da6a7 | 2022-04-04T15:44:29.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Zarkit | null | Zarkit/Gpt2ESP-finetuned-p | 3 | null | transformers | 22,146 | Entry not found |
tartuNLP/m2m100_418M_smugri | 2d844b861f6f87161b4e4d1fbb0dde3ad1064142 | 2022-04-12T06:38:16.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | tartuNLP | null | tartuNLP/m2m100_418M_smugri | 3 | null | transformers | 22,147 | ---
license: mit
language:
- en
widget:
- text: "Let us translate some text from Livonian to Võro!"
---
# NMT for Finno-Ugric Languages
This is an NMT system for translating between Võro, Livonian, North Sami, South Sami as well as Estonian, Finnish, Latvian and English. It was created by fine-tuning Facebook's m2m100-418M on the liv4ever and smugri datasets.
## Tokenizer
Four language codes were added to the tokenizer: __liv__, __vro__, __sma__ and __sme__. Currently the m2m100 tokenizer loads the list of languages from a hard-coded list, so it has to be updated after loading; see the code example below.
## Usage example
Install the transformers and sentencepiece libraries: `pip install sentencepiece transformers`
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("tartuNLP/m2m100_418M_smugri")
#Fix the language codes in the tokenizer
tokenizer.id_to_lang_token = dict(list(tokenizer.id_to_lang_token.items()) + list(tokenizer.added_tokens_decoder.items()))
tokenizer.lang_token_to_id = dict(list(tokenizer.lang_token_to_id.items()) + list(tokenizer.added_tokens_encoder.items()))
tokenizer.lang_code_to_token = { k.replace("_", ""): k for k in tokenizer.additional_special_tokens }
tokenizer.lang_code_to_id = { k.replace("_", ""): v for k, v in tokenizer.lang_token_to_id.items() }
model = AutoModelForSeq2SeqLM.from_pretrained("tartuNLP/m2m100_418M_smugri")
tokenizer.src_lang = 'liv'
encoded_src = tokenizer("Līvõ kēļ jelāb!", return_tensors="pt")
encoded_out = model.generate(**encoded_src, forced_bos_token_id = tokenizer.get_lang_id("sme"))
print(tokenizer.batch_decode(encoded_out, skip_special_tokens=True))
```
The output is `Livčča giella eallá.` |
Yaxin/ernie_2.0_skep_large_en | 89872abfa3d1b390c5cf87911b6e04c1ccb51fa9 | 2022-04-04T14:23:29.000Z | [
"pytorch",
"bert",
"en",
"transformers"
] | null | false | Yaxin | null | Yaxin/ernie_2.0_skep_large_en | 3 | null | transformers | 22,148 | ---
language: en
---
# SKEP
## Introduction
SKEP (Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis) was proposed by Baidu in 2020.
SKEP uses sentiment masking and three sentiment-aware pre-training objectives to incorporate various types of sentiment knowledge into the pre-trained model.
More detail: https://aclanthology.org/2020.acl-main.374.pdf
## ⚠️ attention
Compared with the full version of ernie_2.0_skep_large_en, this conversion drops the task_embeddings part in order to fit the BERT framework.
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|skep-ernie2-bert-large| English |Layer:24, Hidden:1024, Heads:24|
This released pytorch model is converted from the officially released PaddlePaddle SKEP model and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle SKEP repo:
1. https://github.com/PaddlePaddle/PaddleNLP/blob/develop/paddlenlp/transformers/skep
2. https://github.com/baidu/Senta
- Pytorch Conversion repo: Not released yet
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Yaxin/ernie_2.0_skep_large_en")
model = AutoModel.from_pretrained("Yaxin/ernie_2.0_skep_large_en")
```
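As a hedged follow-up (not part of the original card): because this checkpoint is exposed as a plain BERT-style encoder, sentence-level features can be read from its hidden states, for example the [CLS] vector. The input sentence below is an illustrative assumption.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Yaxin/ernie_2.0_skep_large_en")
model = AutoModel.from_pretrained("Yaxin/ernie_2.0_skep_large_en")

# Illustrative sentence, not from the card
inputs = tokenizer("The movie was absolutely wonderful!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, hidden); take the [CLS] position
sentence_embedding = outputs.last_hidden_state[:, 0]
print(sentence_embedding.shape)  # expected: torch.Size([1, 1024]) for this large model
```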
## Citation
```bibtex
@article{tian2020skep,
title={SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis},
author={Tian, Hao and Gao, Can and Xiao, Xinyan and Liu, Hao and He, Bolei and Wu, Hua and Wang, Haifeng and Wu, Feng},
journal={arXiv preprint arXiv:2005.05635},
year={2020}
}
```
```
reference:
https://github.com/nghuyong/ERNIE-Pytorch
```
|
Sevil/t5-small-finetuned-wikihow_3epoch_v2 | b8ad302cba9cece89eccfa4fdf85519f1d748184 | 2022-04-04T20:03:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Sevil | null | Sevil/t5-small-finetuned-wikihow_3epoch_v2 | 3 | null | transformers | 22,149 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.48
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2758
- Rouge1: 27.48
- Rouge2: 10.7621
- Rougel: 23.4136
- Rougelsum: 26.7923
- Gen Len: 18.5424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8423 | 0.13 | 5000 | 2.5715 | 25.2685 | 8.6964 | 21.229 | 24.5773 | 18.4479 |
| 2.7345 | 0.25 | 10000 | 2.5236 | 24.982 | 8.7823 | 21.1609 | 24.3066 | 18.3631 |
| 2.6811 | 0.38 | 15000 | 2.4911 | 25.7585 | 9.3372 | 21.8388 | 25.1052 | 18.3997 |
| 2.6611 | 0.51 | 20000 | 2.4510 | 26.022 | 9.4708 | 22.0899 | 25.3236 | 18.5472 |
| 2.6133 | 0.64 | 25000 | 2.4272 | 26.3481 | 9.6769 | 22.4484 | 25.7046 | 18.3863 |
| 2.6083 | 0.76 | 30000 | 2.4108 | 26.4131 | 9.6643 | 22.4021 | 25.6958 | 18.5585 |
| 2.5842 | 0.89 | 35000 | 2.3866 | 26.2852 | 9.7505 | 22.4525 | 25.5908 | 18.5485 |
| 2.5554 | 1.02 | 40000 | 2.3816 | 26.3018 | 9.7218 | 22.3673 | 25.6515 | 18.4912 |
| 2.4895 | 1.14 | 45000 | 2.3730 | 26.6439 | 9.9665 | 22.6593 | 25.9521 | 18.5635 |
| 2.4781 | 1.27 | 50000 | 2.3541 | 26.8488 | 10.0364 | 22.8202 | 26.1598 | 18.4254 |
| 2.4821 | 1.4 | 55000 | 2.3440 | 26.9511 | 10.2079 | 23.0133 | 26.2821 | 18.5712 |
| 2.4593 | 1.53 | 60000 | 2.3370 | 26.945 | 10.3123 | 22.9245 | 26.2493 | 18.5978 |
| 2.4521 | 1.65 | 65000 | 2.3309 | 26.9652 | 10.314 | 22.9657 | 26.298 | 18.4837 |
| 2.4523 | 1.78 | 70000 | 2.3249 | 27.0548 | 10.4204 | 23.1286 | 26.379 | 18.4717 |
| 2.4563 | 1.91 | 75000 | 2.3079 | 27.4563 | 10.6452 | 23.3985 | 26.7812 | 18.5642 |
| 2.4229 | 2.03 | 80000 | 2.3115 | 27.0538 | 10.44 | 22.9957 | 26.349 | 18.5914 |
| 2.3694 | 2.16 | 85000 | 2.3017 | 27.332 | 10.6556 | 23.3135 | 26.629 | 18.459 |
| 2.3749 | 2.29 | 90000 | 2.2941 | 27.3294 | 10.5967 | 23.2039 | 26.6411 | 18.5179 |
| 2.3779 | 2.42 | 95000 | 2.2891 | 27.3725 | 10.6539 | 23.3455 | 26.707 | 18.5367 |
| 2.3638 | 2.54 | 100000 | 2.2895 | 27.3487 | 10.6738 | 23.2894 | 26.681 | 18.6128 |
| 2.3549 | 2.67 | 105000 | 2.2833 | 27.408 | 10.6903 | 23.3575 | 26.7137 | 18.6035 |
| 2.3652 | 2.8 | 110000 | 2.2788 | 27.561 | 10.8202 | 23.4672 | 26.8584 | 18.5565 |
| 2.3553 | 2.93 | 115000 | 2.2758 | 27.48 | 10.7621 | 23.4136 | 26.7923 | 18.5424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
reichenbach/fake-news-detector-v3 | 8f50daf1275587a8df0f9556bea0e2e9195f9d94 | 2022-04-04T17:54:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | reichenbach | null | reichenbach/fake-news-detector-v3 | 3 | null | transformers | 22,150 | Entry not found |
GleamEyeBeast/ascend_with_timit | 913fa7ff91f8b441b4829f615a63ad1a9f6440e1 | 2022-04-05T03:08:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/ascend_with_timit | 3 | null | transformers | 22,151 | ---
tags:
- generated_from_trainer
model-index:
- name: ascend_with_timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend_with_timit
This model is a fine-tuned version of [GleamEyeBeast/ascend_with_timit](https://huggingface.co/GleamEyeBeast/ascend_with_timit) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8013
- Wer: 0.4781
- Cer: 0.1727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 2.4026 | 1.0 | 890 | 1.3419 | 0.9083 | 0.3670 |
| 1.1926 | 2.0 | 1780 | 0.9730 | 0.6491 | 0.2585 |
| 0.9104 | 3.0 | 2670 | 0.8483 | 0.5368 | 0.1963 |
| 0.7718 | 4.0 | 3560 | 0.8122 | 0.4913 | 0.1791 |
| 0.7013 | 5.0 | 4450 | 0.8013 | 0.4781 | 0.1727 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mgreenbe/607-demo-model | ed1f7c156345c7b5c4e4caf93ed716e6ad656be3 | 2022-04-04T17:35:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:yelp_polarity",
"transformers",
"tag2",
"license:apache-2.0"
] | text-classification | false | mgreenbe | null | mgreenbe/607-demo-model | 3 | null | transformers | 22,152 | ---
language:
- en
tags:
- text-classification
- tag2
license: apache-2.0
datasets:
- yelp_polarity
metrics:
- accuracy
---
Demo model for predicting the polarity of Yelp reviews.
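A minimal usage sketch (not from the original card; the example review is made up and the returned label names depend on the model's config):
```python
from transformers import pipeline

# Load the demo polarity classifier
classifier = pipeline("text-classification", model="mgreenbe/607-demo-model")

# Illustrative review; labels (e.g. LABEL_0 / LABEL_1) depend on the model's config
print(classifier("The food was great and the service was even better."))
```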
Trained for 1 epoch on 4096 reviews. |
Sevil/t5-small-finetuned-cnndm_3epoch_v2 | 6473364728a19caa775f6f10426d44aba0db4436 | 2022-04-05T17:13:07.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Sevil | null | Sevil/t5-small-finetuned-cnndm_3epoch_v2 | 3 | null | transformers | 22,153 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm_3epoch_v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.7696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_3epoch_v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6070
- Rouge1: 24.7696
- Rouge2: 11.9467
- Rougel: 20.4495
- Rougelsum: 23.3341
- Gen Len: 18.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9695 | 0.07 | 5000 | 1.7781 | 24.2253 | 11.472 | 20.0367 | 22.8469 | 18.9962 |
| 1.9536 | 0.14 | 10000 | 1.7575 | 24.2983 | 11.469 | 20.0054 | 22.9144 | 18.9995 |
| 1.9452 | 0.21 | 15000 | 1.7406 | 24.2068 | 11.4601 | 20.0021 | 22.8375 | 19.0 |
| 1.931 | 0.28 | 20000 | 1.7302 | 24.1589 | 11.4183 | 19.9736 | 22.7804 | 18.9996 |
| 1.9182 | 0.35 | 25000 | 1.7381 | 24.1634 | 11.5435 | 19.9643 | 22.7371 | 18.9999 |
| 1.9072 | 0.42 | 30000 | 1.7239 | 24.4401 | 11.6323 | 20.1243 | 22.9468 | 19.0 |
| 1.9027 | 0.49 | 35000 | 1.7162 | 24.1801 | 11.4788 | 20.0011 | 22.832 | 18.9996 |
| 1.8962 | 0.56 | 40000 | 1.7060 | 24.4153 | 11.6275 | 20.1742 | 23.0865 | 18.9998 |
| 1.8905 | 0.63 | 45000 | 1.7004 | 24.1446 | 11.5402 | 19.9986 | 22.7949 | 18.9983 |
| 1.8764 | 0.7 | 50000 | 1.6876 | 24.342 | 11.5448 | 20.0993 | 22.9509 | 18.9993 |
| 1.8772 | 0.77 | 55000 | 1.6879 | 24.3596 | 11.6063 | 20.1592 | 22.9966 | 19.0 |
| 1.8669 | 0.84 | 60000 | 1.6776 | 24.6201 | 11.6668 | 20.2639 | 23.201 | 18.9994 |
| 1.8692 | 0.91 | 65000 | 1.6838 | 24.2924 | 11.6129 | 20.1071 | 22.9112 | 18.9997 |
| 1.8552 | 0.98 | 70000 | 1.6885 | 24.2878 | 11.6773 | 20.1272 | 22.8797 | 18.9992 |
| 1.8205 | 1.04 | 75000 | 1.6717 | 24.5579 | 11.6421 | 20.2593 | 23.1442 | 19.0 |
| 1.8074 | 1.11 | 80000 | 1.6604 | 24.495 | 11.6542 | 20.1854 | 23.1091 | 18.9996 |
| 1.7951 | 1.18 | 85000 | 1.6705 | 24.4504 | 11.6601 | 20.2185 | 23.0597 | 18.9999 |
| 1.7937 | 1.25 | 90000 | 1.6645 | 24.5535 | 11.6921 | 20.2087 | 23.1099 | 18.9999 |
| 1.8017 | 1.32 | 95000 | 1.6647 | 24.5179 | 11.8005 | 20.2903 | 23.13 | 18.9993 |
| 1.7918 | 1.39 | 100000 | 1.6568 | 24.518 | 11.7528 | 20.222 | 23.0767 | 18.9991 |
| 1.7985 | 1.46 | 105000 | 1.6588 | 24.4636 | 11.636 | 20.1038 | 23.032 | 19.0 |
| 1.7944 | 1.53 | 110000 | 1.6498 | 24.6611 | 11.78 | 20.3059 | 23.2404 | 18.9999 |
| 1.7934 | 1.6 | 115000 | 1.6551 | 24.7267 | 11.823 | 20.3377 | 23.273 | 18.9997 |
| 1.7764 | 1.67 | 120000 | 1.6467 | 24.5052 | 11.8052 | 20.2617 | 23.1228 | 18.9996 |
| 1.7846 | 1.74 | 125000 | 1.6489 | 24.5423 | 11.8407 | 20.3464 | 23.1433 | 18.9999 |
| 1.7799 | 1.81 | 130000 | 1.6438 | 24.4915 | 11.7827 | 20.2592 | 23.1299 | 18.9999 |
| 1.7806 | 1.88 | 135000 | 1.6353 | 24.7804 | 11.9212 | 20.4678 | 23.359 | 19.0 |
| 1.7784 | 1.95 | 140000 | 1.6338 | 24.7892 | 11.8836 | 20.4227 | 23.373 | 18.9997 |
| 1.7551 | 2.02 | 145000 | 1.6341 | 24.6828 | 11.8257 | 20.3862 | 23.2536 | 18.9997 |
| 1.728 | 2.09 | 150000 | 1.6328 | 24.6697 | 11.851 | 20.3943 | 23.2738 | 18.9993 |
| 1.7201 | 2.16 | 155000 | 1.6309 | 24.7364 | 11.8505 | 20.365 | 23.2885 | 18.9992 |
| 1.7233 | 2.23 | 160000 | 1.6346 | 24.7298 | 12.0026 | 20.4444 | 23.3156 | 18.9999 |
| 1.7096 | 2.3 | 165000 | 1.6253 | 24.6443 | 11.9004 | 20.4138 | 23.2583 | 18.9999 |
| 1.7084 | 2.37 | 170000 | 1.6233 | 24.6688 | 11.8885 | 20.3623 | 23.2608 | 18.9996 |
| 1.7236 | 2.44 | 175000 | 1.6243 | 24.7174 | 11.8924 | 20.4012 | 23.2948 | 18.9996 |
| 1.7108 | 2.51 | 180000 | 1.6188 | 24.6013 | 11.8153 | 20.2969 | 23.1867 | 18.9997 |
| 1.711 | 2.58 | 185000 | 1.6125 | 24.7673 | 11.8646 | 20.3805 | 23.3114 | 18.9997 |
| 1.7108 | 2.65 | 190000 | 1.6101 | 24.8047 | 11.9763 | 20.494 | 23.3873 | 18.9998 |
| 1.7114 | 2.72 | 195000 | 1.6123 | 24.7019 | 11.9201 | 20.414 | 23.2823 | 18.9999 |
| 1.7004 | 2.79 | 200000 | 1.6083 | 24.7525 | 11.9197 | 20.4581 | 23.3371 | 18.9999 |
| 1.7104 | 2.86 | 205000 | 1.6061 | 24.7057 | 11.8818 | 20.4017 | 23.286 | 18.9999 |
| 1.7063 | 2.93 | 210000 | 1.6063 | 24.7707 | 11.934 | 20.4473 | 23.3316 | 18.9999 |
| 1.7039 | 3.0 | 215000 | 1.6070 | 24.7696 | 11.9467 | 20.4495 | 23.3341 | 18.9999 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
birgermoell/psst-augmented | a4caf8a194250910a966f5168a830b3b16ab5bf0 | 2022-04-05T08:42:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-augmented | 3 | null | transformers | 22,154 | Entry not found |
justinlyli/fyp_pegasus_cnndailymail | b9bf476ee3948b711de07949533820f95b3f92ea | 2022-04-05T10:55:53.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | justinlyli | null | justinlyli/fyp_pegasus_cnndailymail | 3 | null | transformers | 22,155 | Entry not found |
AnonymousSub/fpdm_triplet_bert_FT_new_newsqa | df5090700ebdc18705600a4a6b676bb3bdfe45a1 | 2022-04-05T14:48:58.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_triplet_bert_FT_new_newsqa | 3 | null | transformers | 22,156 | Entry not found |
BigSalmon/InformalToFormalLincolnConciseWordy | 931903c41b16e153c82bf78e0254f55884b2a61f | 2022-04-05T15:21:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincolnConciseWordy | 3 | null | transformers | 22,157 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnConciseWordy")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincolnConciseWordy")
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
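Below is a hedged generation sketch using the prompt format above; the final `wordy` input sentence and the decoding settings are illustrative assumptions, and the snippet reuses the `tokenizer` and `model` loaded earlier in this card.
```python
# Few-shot prompt in the card's "wordy -> concise" format; the last input is illustrative
prompt = (
    "wordy: classical music is becoming less popular more and more.\n"
    "Translate into Concise Text: interest in classic music is fading.\n"
    "***\n"
    "wordy: the people who live in this town are remarkably welcoming to strangers.\n"
    "Translate into Concise Text:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```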
The model can also turn keywords into a sentence or sentences. |
spencer/wav2vec2-base-960h | 3eae3450fb592f2b04729638bdc25885cbf8ed6e | 2022-04-09T19:18:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | spencer | null | spencer/wav2vec2-base-960h | 3 | null | transformers | 22,158 | Entry not found |
linhthi/fake-news-detector-bert-v1.0 | 4b8529fa4406e5440413a460d4ef4f729c27f8bc | 2022-04-06T07:04:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | linhthi | null | linhthi/fake-news-detector-bert-v1.0 | 3 | null | transformers | 22,159 | Entry not found |
chiba/distilbert-base-japanese_test | fd5b5bf7e56b536ec9e5d2b05ad59bb3f6301494 | 2022-04-08T06:17:25.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | chiba | null | chiba/distilbert-base-japanese_test | 3 | null | transformers | 22,160 | Entry not found |
nealmgkr/bert-base-uncased-tminer-hs | 8c2440dca4c897e2996619e7f0519c72f70c4b3c | 2022-04-06T08:58:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nealmgkr | null | nealmgkr/bert-base-uncased-tminer-hs | 3 | null | transformers | 22,161 | Entry not found |
birgermoell/psst-fairseq-gaussian | ec6ea9e7172e7ae76fc2abd2a22b7626c688148b | 2022-04-06T09:01:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-fairseq-gaussian | 3 | null | transformers | 22,162 | Entry not found |
ankitkupadhyay/bert-finetuned-squad | 395295d7a50a97c5c988f65682cd365093b5c6e0 | 2022-04-06T18:38:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ankitkupadhyay | null | ankitkupadhyay/bert-finetuned-squad | 3 | 1 | transformers | 22,163 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
moshew/distilbert-base-uncased-finetuned-clinc | e71fb15efc26d201ee404857eff36ce05d9a28be | 2022-04-06T15:38:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | moshew | null | moshew/distilbert-base-uncased-finetuned-clinc | 3 | null | transformers | 22,164 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9187096774193548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
- Accuracy: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 |
| 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 |
| 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 |
| 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
arampacha/electra-base-inqg-span | b7f8ce7e93a277c431f74813fa6af0d1485757e6 | 2022-04-06T17:21:08.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | arampacha | null | arampacha/electra-base-inqg-span | 3 | null | transformers | 22,165 | Entry not found |
raileymontalan/distilbert-base-cased-finetuned-fake-news-detection | 4230a0a30f68fb9cef958ce2756e1fc91cf6b285 | 2022-04-06T18:38:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | raileymontalan | null | raileymontalan/distilbert-base-cased-finetuned-fake-news-detection | 3 | null | transformers | 22,166 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-cased-finetuned-fake-news-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-fake-news-detection
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0043
- F1: 0.9996
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 1684 | 0.0043 | 0.9993 | 0.9993 |
| No log | 2.0 | 3368 | 0.0043 | 0.9996 | 0.9996 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
frankxu/gpt-neo-125M-code | 82b5df98b0ecea00b1104b2307e1648f519d528b | 2022-04-13T18:24:14.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | frankxu | null | frankxu/gpt-neo-125M-code | 3 | null | transformers | 22,167 | Entry not found |
birgermoell/psst-fairseq-combined-augmented | 4fbc363267811839ecbad882e78a2f96c1ba1f6a | 2022-04-07T08:25:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-fairseq-combined-augmented | 3 | null | transformers | 22,168 | Entry not found |
luffycodes/roberta-base-mrpc | ed7af87d8d45b654de5cbac55ea47d2ca7ad86af | 2022-04-07T19:24:44.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/roberta-base-mrpc | 3 | null | transformers | 22,169 | Entry not found |
dennishe97/longformer-code-mlm-v2 | 3fadb92ba64e5f735b4c17066fbd15faaba359be | 2022-04-09T06:08:06.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"transformers"
] | feature-extraction | false | dennishe97 | null | dennishe97/longformer-code-mlm-v2 | 3 | null | transformers | 22,170 | Entry not found |
chiba/bert-base-japanese-whole-word-masking_test | cac4b961f28a64a9d5386dc48b31db90d6255a5f | 2022-04-12T07:24:23.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | chiba | null | chiba/bert-base-japanese-whole-word-masking_test | 3 | null | transformers | 22,171 | Entry not found |
Annas/the-world-machine-3 | 4285a40d54be23ea148ada0ec0a574e34d2ef87d | 2022-04-08T14:34:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Annas | null | Annas/the-world-machine-3 | 3 | 1 | transformers | 22,172 | Trained OpenAI GPT-2 on data crawled by gwitr. |
projecte-aina/mbert-base-gencata | d30cafd44b9f1a83053a5ffcb152983fc11ab43a | 2022-07-27T10:55:38.000Z | [
"pytorch",
"bert",
"text-classification",
"ca",
"dataset:projecte-aina/gencata",
"transformers",
"text classification",
"license:mit"
] | text-classification | false | projecte-aina | null | projecte-aina/mbert-base-gencata | 3 | null | transformers | 22,173 | ---
language: "ca"
license: mit
tags:
- text classification
task_categories:
- text-scoring
task_ids:
- semantic-similarity-scoring
datasets:
- projecte-aina/gencata
inference: false
---
## mBERT fine-tuned on the GEnCaTa dataset for Parallel Corpus Filtering
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Funding](#funding)
## Model description
We fine-tuned [mBERT](https://huggingface.co/bert-base-multilingual-cased) for the task of Catalan-English Parallel Corpus Filtering with the [GEnCaTa](https://huggingface.co/datasets/projecte-aina/gencata) dataset.
The model has been fine-tuned on general domain data and is expected to work best with that type of text.
## Intended Uses and Limitations
You can use this model for parallel corpus filtering, also known as sentence alignment filtering.
## How to Use
Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import pipeline
filterer = pipeline("text-classification", model="projecte-aina/mbert-base-gencata")
ca = "- El vostre vehicle quedi immobilitzat per l'aigua"
en = 'You must leave your car and head for higher ground when:'
print(filterer([{"text": ca, "text_pair": en}], max_length=512, truncation=True))
```
## Training
### Training Data
As training data, we used the [GEnCaTa](https://huggingface.co/datasets/projecte-aina/gencata) dataset, a Catalan-English dataset annotated for Parallel Corpus Filtering for MT. It is extracted from a general domain corpus crawled from the Catalan Government domains and subdomains.
### Training Procedure
#### Tokenization
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) with a vocabulary size of 51,200 tokens.
#### Hyperparameters
| Hyper-parameter | Value |
|------------------------------------|--------|
| Learning Rate | 0.8e-5 |
| Learning Rate Decay | Linear |
| Warmup | 0.06 |
| Batch Size | 64 |
| Weight Decay | 0.01 |
| Max. Training Epochs | 10 |
## Variable and Metrics
Although we can report accuracy scores, the best way to evaluate this model is to filter a parallel corpus and train a Machine Translation system with the filtered data. For that, we train two different MT models and evaluate them on [Flores-101](https://huggingface.co/datasets/gsarti/flores_101) with BLEU scores.
## Evaluation Results
Below the evaluation results on [Flores-101](https://huggingface.co/datasets/gsarti/flores_101) from two MT systems: RAW and FIL (filtered corpus with our model).
|Direction | RAW | FIL |
| -----|-----|------|
|EN > CA | 35.7 | **38.0** |
|CA > EN | 34.7 | **37.6** |
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```
@inproceedings{degibertbonet-EtAl:2022:SIGUL,
abstract = {In this work, we make the case of quality over quantity when training a MT system for a medium-to-low-resource language pair, namely Catalan-English. We compile our training corpus out of existing resources of varying quality and a new high-quality corpus. We also provide new evaluation translation datasets in three different domains. In the process of building Catalan-English parallel resources, we evaluate the impact of drastically filtering alignments in the resulting MT engines. Our results show that even when resources are limited, as in this case, it is worth filtering for quality. We further explore the cross-lingual transfer learning capabilities of the proposed model for parallel corpus filtering by applying it to other languages. All resources generated in this work are released under open license to encourage the development of language technology in Catalan.},
address = {Marseille, France},
author = {{de Gibert Bonet}, Ona and Kharitonova, Ksenia and {Calvo Figueras}, Blanca and Armengol-Estap{\'{e}}, Jordi and Melero, Maite},
booktitle = {Proceedings of the the 1st Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages},
pages = {59--69},
publisher = {European Language Resources Association},
title = {{Quality versus Quantity: Building Catalan-English MT Resources}},
url = {http://www.lrec-conf.org/proceedings/lrec2022/workshops/SIGUL/pdf/2022.sigul-1.8.pdf},
year = {2022}
}
```
## Contributions
[N/A]
## Funding
This work was funded by MT4All CEF project and the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
|
DioLiu/distilroberta-base-Ctrl | 65427ef6f277748edc45c5f87a4f6ae17fef6948 | 2022-04-08T15:48:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/distilroberta-base-Ctrl | 3 | null | transformers | 22,174 | Entry not found |
akanksha-b14/songs-transcription-2 | 5ac25312640168e4e02b8bc0b3c25a1992bf4614 | 2022-04-09T02:32:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | akanksha-b14 | null | akanksha-b14/songs-transcription-2 | 3 | null | transformers | 22,175 | Entry not found |
nepp1d0/SingleBertSmilesTargetInteraction | 9d9928941cc59986b2c048d8de5f9803e5192cae | 2022-04-10T18:55:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nepp1d0 | null | nepp1d0/SingleBertSmilesTargetInteraction | 3 | null | transformers | 22,176 | Prot_bert fine-tuned on the GPCR_train dataset for Drug Target prediction
Training parameters:
overwrite_output_dir=True,
evaluation_strategy="epoch",
learning_rate=1e-3,
weight_decay=0.001,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
push_to_hub=True,
fp16=True,
logging_steps=logging_steps,
save_strategy='epoch',
num_train_epochs=2 |
davidcheungo123/pegasus-samsum | 08a47e90502d5ca014023f4f51aed1974bf13750 | 2022-04-09T15:44:09.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | davidcheungo123 | null | davidcheungo123/pegasus-samsum | 3 | null | transformers | 22,177 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_300m_fleurs_asr_test | d1ef95cc9c7d3218b74919adac692d4539c43e30 | 2022-04-10T10:14:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_fleurs_asr_test | 3 | null | transformers | 22,178 | Entry not found |
vaariis/distilbert-base-uncased-finetuned-emotion | 8a286e01d1928a761aad5754a8dd162e7532e31d | 2022-04-21T06:20:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vaariis | null | vaariis/distilbert-base-uncased-finetuned-emotion | 3 | null | transformers | 22,179 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2218
- Accuracy: 0.9205
- F1: 0.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8262 | 1.0 | 250 | 0.3223 | 0.9005 | 0.8971 |
| 0.2474 | 2.0 | 500 | 0.2218 | 0.9205 | 0.9208 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.12.1
|
Brendan/random-in-domain-5-demos-t5-small | f04b6a13b92a809f8ab98d25528345c4449f750d | 2022-04-11T19:44:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Brendan | null | Brendan/random-in-domain-5-demos-t5-small | 3 | null | transformers | 22,180 | Entry not found |
baikal/bert-wp30 | 69a763d63004a2d05e0106f72d0ae72c11dc0f85 | 2022-04-11T01:44:58.000Z | [
"pytorch",
"ko",
"dataset:한국어 위키",
"dataset:국립국어원 문어/뉴스 데이터셋",
"transformers"
] | null | false | baikal | null | baikal/bert-wp30 | 3 | null | transformers | 22,181 | ---
language: ko
datasets:
- 한국어 위키
- 국립국어원 문어/뉴스 데이터셋
---
baikal-BERT-base
---
- model: bert-base
- vocab: bert-wordpiece, 30,000
- version: latest
|
Splend1dchan/XDBERT-base | c52826515cc87e50b523f5f345d6290aa990491a | 2022-04-11T03:56:29.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Splend1dchan | null | Splend1dchan/XDBERT-base | 3 | null | transformers | 22,182 | Entry not found |
philschmid/minilm-l12-h384-sst2-distilled | 82ceba105fa40282ef977de2ca97832d437af47f | 2022-04-11T08:39:58.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | philschmid | null | philschmid/minilm-l12-h384-sst2-distilled | 3 | null | transformers | 22,183 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: minilm-l12-h384-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9220183486238532
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# minilm-l12-h384-sst2-distilled
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5417
- Accuracy: 0.9220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001400785945474408
- train_batch_size: 512
- eval_batch_size: 512
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2689 | 1.0 | 132 | 0.7102 | 0.8979 |
| 0.8295 | 2.0 | 264 | 0.5669 | 0.9117 |
| 0.5059 | 3.0 | 396 | 0.5545 | 0.9220 |
| 0.3722 | 4.0 | 528 | 0.5378 | 0.9209 |
| 0.2924 | 5.0 | 660 | 0.5417 | 0.9220 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
maretamasaeva/thesis-freeform | ece4535833435668e87ca3843551615e7c936c71 | 2022-04-11T09:42:15.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | maretamasaeva | null | maretamasaeva/thesis-freeform | 3 | null | transformers | 22,184 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: thesis-freeform
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-freeform
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Accuracy: 0.4636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6922 | 1.0 | 5684 | 0.6928 | 0.4636 |
| 0.6946 | 2.0 | 11368 | 0.6918 | 0.4636 |
| 0.692 | 3.0 | 17052 | 0.6949 | 0.4636 |
| 0.6901 | 4.0 | 22736 | 0.6933 | 0.4636 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
openclimatefix/nowcasting_cnn | 4757ebc59ba1e3ed2d61c042e504237a2e303c79 | 2022-05-19T10:53:53.000Z | [
"pytorch",
"transformers",
"nowcasting",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit"
] | null | false | openclimatefix | null | openclimatefix/nowcasting_cnn | 3 | null | transformers | 22,185 | ---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
---
# Nowcasting CNN
## Model description
A 3D convolutional model that takes in several data streams. The architecture is roughly:
1. The satellite image time series goes through several 3D convolution layers.
2. The NWP time series goes through several 3D convolution layers.
3. The final convolutional features feed into a fully connected layer, which is joined by
other data inputs such as:
- PV yield
- time variables
Roughly four further fully connected layers then forecast the PV yield / GSP power
into the future. An illustrative sketch of this layout is shown below.
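A minimal PyTorch sketch of the layout described above; the layer counts, channel sizes, sequence lengths and forecast horizon are illustrative assumptions, not the actual Open Climate Fix implementation.
```python
import torch
import torch.nn as nn

class NowcastingCNNSketch(nn.Module):
    """Illustrative sketch: 3D-conv encoders for satellite and NWP sequences,
    concatenated with PV-yield history and time features, then an FC head."""

    def __init__(self, sat_channels=12, nwp_channels=10, pv_len=12, time_feats=4, forecast_len=24):
        super().__init__()

        def conv3d_block(in_ch):
            return nn.Sequential(
                nn.Conv3d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> (batch, 64)
            )

        self.sat_encoder = conv3d_block(sat_channels)   # satellite image time series
        self.nwp_encoder = conv3d_block(nwp_channels)   # NWP time series
        self.head = nn.Sequential(                      # ~4 fully connected layers
            nn.Linear(64 + 64 + pv_len + time_feats, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, forecast_len),                # future PV / GSP yield
        )

    def forward(self, sat, nwp, pv_history, time_features):
        x = torch.cat(
            [self.sat_encoder(sat), self.nwp_encoder(nwp), pv_history, time_features],
            dim=1,
        )
        return self.head(x)

# Dummy shapes: (batch, channels, time, height, width) for the 3D-conv inputs
model = NowcastingCNNSketch()
out = model(
    torch.randn(2, 12, 6, 24, 24),   # satellite
    torch.randn(2, 10, 6, 24, 24),   # NWP
    torch.randn(2, 12),              # PV yield history
    torch.randn(2, 4),               # time variables
)
print(out.shape)  # torch.Size([2, 24])
```
The real model is trained on the satellite, NWP and on-the-ground PV data described in the sections below.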
## Intended uses & limitations
Forecasting short term PV power for different regions and nationally in the UK
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
Training data is EUMETSAT RSS imagery over the UK, on-the-ground PV data, and NWP predictions.
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
Kuray107/ls-timit-100percent-supervised-meta | 496059c09cbf74375ed59ce4303ea02ed86b8f0b | 2022-04-11T19:44:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/ls-timit-100percent-supervised-meta | 3 | null | transformers | 22,186 | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-100percent-supervised-meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-100percent-supervised-meta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0649
- Wer: 0.0253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0964 | 7.04 | 1000 | 0.0706 | 0.0342 |
| 0.0445 | 14.08 | 2000 | 0.0649 | 0.0253 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
LysandreJik/my-new-model | 84ec09ca8500f4eea798094d46832bd6fbda047b | 2022-04-11T21:24:36.000Z | [
"pytorch",
"transformers"
] | null | false | LysandreJik | null | LysandreJik/my-new-model | 3 | null | transformers | 22,187 | Entry not found |
rajiv003/ernie-finetuned-qqp | cf05cd342bcf0ad77b58b9cee03757bbd23e8e67 | 2022-04-12T11:47:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | rajiv003 | null | rajiv003/ernie-finetuned-qqp | 3 | null | transformers | 22,188 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: ernie-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9156566905763047
- name: F1
type: f1
value: 0.8860522622468757
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ernie-finetuned-qqp
This model is a fine-tuned version of [nghuyong/ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4381
- Accuracy: 0.9157
- F1: 0.8861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.2522 | 1.0 | 22741 | 0.2505 | 0.8997 | 0.8633 |
| 0.1903 | 2.0 | 45482 | 0.2645 | 0.9071 | 0.8761 |
| 0.1599 | 3.0 | 68223 | 0.2986 | 0.9115 | 0.8816 |
| 0.1214 | 4.0 | 90964 | 0.3640 | 0.9133 | 0.8828 |
| 0.0809 | 5.0 | 113705 | 0.4381 | 0.9157 | 0.8861 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Pavithra/codeparrot-ds-500sample-gpt-neo-2ep | 17fcc6414830f916a6d126e5bd70e67fe5fed850 | 2022-04-13T05:43:26.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Pavithra | null | Pavithra/codeparrot-ds-500sample-gpt-neo-2ep | 3 | null | transformers | 22,189 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-500sample-gpt-neo-2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-500sample-gpt-neo-2ep
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5483
## Model description
More information needed
## Intended uses & limitations
More information needed
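
The card gives no usage guidance; as a rough, unofficial sketch, the checkpoint can presumably be used like any causal language model for code completion. The prompt below is an arbitrary example, not taken from the training data.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Rough sketch: sample a code completion from the fine-tuned GPT-Neo checkpoint.
# Sampling settings are illustrative defaults, not the author's choices.
tokenizer = AutoTokenizer.from_pretrained("Pavithra/codeparrot-ds-500sample-gpt-neo-2ep")
model = AutoModelForCausalLM.from_pretrained("Pavithra/codeparrot-ds-500sample-gpt-neo-2ep")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```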
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.5248 | 0.19 | 1000 | 2.9757 |
| 2.5422 | 0.37 | 2000 | 2.4397 |
| 2.1642 | 0.56 | 3000 | 2.1880 |
| 1.9135 | 0.74 | 4000 | 1.9884 |
| 1.7236 | 0.93 | 5000 | 1.8470 |
| 1.5459 | 1.11 | 6000 | 1.7501 |
| 1.4363 | 1.3 | 7000 | 1.6761 |
| 1.3639 | 1.49 | 8000 | 1.6105 |
| 1.3046 | 1.67 | 9000 | 1.5667 |
| 1.273 | 1.86 | 10000 | 1.5483 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ali-issa/lebanese | 2b767da43ab5a9bcc443c13855abc743ea2962e8 | 2022-04-12T08:13:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali-issa | null | ali-issa/lebanese | 3 | null | transformers | 22,190 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-lebanese-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lebanese-epoch
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1662
- Wer: 0.8306
## Model description
More information needed
## Intended uses & limitations
More information needed
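
The section is left open in the original card. As an illustrative sketch only — assuming the repository ships the usual Wav2Vec2 processor files, and using a placeholder audio path — transcription might look roughly like this:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Illustrative sketch: transcribe one audio file with the fine-tuned CTC model.
# "example.wav" is a placeholder; XLS-R checkpoints expect 16 kHz mono audio.
processor = Wav2Vec2Processor.from_pretrained("ali-issa/lebanese")
model = Wav2Vec2ForCTC.from_pretrained("ali-issa/lebanese")

speech, sample_rate = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).mean(dim=0).numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```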
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.5946 | 2.5 | 50 | 5.0090 | 1.0 |
| 4.0559 | 5.0 | 100 | 3.2772 | 1.0 |
| 3.153 | 7.5 | 150 | 2.9716 | 1.0 |
| 2.9739 | 10.0 | 200 | 2.9512 | 1.0 |
| 2.93 | 12.5 | 250 | 2.9072 | 1.0 |
| 2.5458 | 15.0 | 300 | 1.8472 | 0.9987 |
| 1.3716 | 17.5 | 350 | 1.2279 | 0.8588 |
| 0.8123 | 20.0 | 400 | 1.1662 | 0.8306 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jegormeister/Multilingual-MiniLM-L12-H384-mmarco-finetuned | 9d3df2e0ebcb096a40d86b5c270afdfbd2cd8a4c | 2022-04-12T07:26:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jegormeister | null | jegormeister/Multilingual-MiniLM-L12-H384-mmarco-finetuned | 3 | null | transformers | 22,191 | Entry not found |
cestwc/roberta-base-emb | e0af59b8bf2461f80b19d2ceef9d44d6ef6735f2 | 2022-06-02T10:25:58.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | cestwc | null | cestwc/roberta-base-emb | 3 | null | transformers | 22,192 | Entry not found |
CenIA/bert-base-spanish-wwm-cased-finetuned-qa-sqac | 7f9a0f22def029d1a58b9b544835556506398108 | 2022-04-13T13:30:56.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-qa-sqac | 3 | null | transformers | 22,193 | Entry not found |
eagles/focus_sum | 02eaccae435844a61ddda42bffa13ad50f5595c5 | 2022-04-14T04:26:44.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | eagles | null | eagles/focus_sum | 3 | null | transformers | 22,194 | ---
tags:
- generated_from_trainer
model-index:
- name: focus_sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# focus_sum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
## Model description
More information needed
## Intended uses & limitations
More information needed
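
The card leaves this open; since the base checkpoint is an XLSum summarization model, a reasonable (unofficial) assumption is that the fine-tuned model is used the same way, as a seq2seq summarizer:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Unofficial sketch: summarize a document with the fine-tuned mT5 checkpoint.
# Generation settings below are illustrative defaults, not the authors' choices.
tokenizer = AutoTokenizer.from_pretrained("eagles/focus_sum")
model = AutoModelForSeq2SeqLM.from_pretrained("eagles/focus_sum")

document = "Replace this placeholder with the text you want to summarize."
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=84)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```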
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9644 | 3.75 | 500 | 0.6880 |
| 0.4682 | 7.52 | 1000 | 0.4350 |
| 0.4672 | 11.28 | 1500 | 0.2599 |
| 0.3439 | 15.04 | 2000 | 0.1568 |
| 0.2753 | 18.79 | 2500 | 0.1064 |
| 0.1885 | 22.55 | 3000 | 0.0737 |
| 0.2185 | 26.31 | 3500 | 0.0575 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
CenIA/albert-xxlarge-spanish-finetuned-qa-sqac | c48255be06ce61acc753df2fc80ac6f49265e87d | 2022-04-13T13:50:11.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-qa-sqac | 3 | null | transformers | 22,195 | Entry not found |
potatobunny/results-yelp | a630e70a3a5e2feb18163090de65666217d87562 | 2022-04-13T15:36:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | potatobunny | null | potatobunny/results-yelp | 3 | null | transformers | 22,196 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: results-yelp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-yelp
This model is a fine-tuned version of [textattack/bert-base-uncased-yelp-polarity](https://huggingface.co/textattack/bert-base-uncased-yelp-polarity) on a filtered and manually reviewed Yelp dataset containing restaurant reviews only.
It achieves the following results on the evaluation set:
- Loss: 0.3563
- Accuracy: 0.9302
- Precision: 0.9461
- Recall: 0.9608
- F1: 0.9534
Note: to use this tokenizer, please use the following code to load all the required files:
`tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", config=AutoConfig.from_pretrained("potatobunny/results-yelp"))`
## Model description
This model is fine-tuned on a labelled Yelp dataset in which each example pairs a restaurant review (text) with a label indicating whether its sentiment is positive (1) or negative (0).
## Intended uses & limitations
This model is intended for text classification, specifically sentiment analysis, on restaurant reviews: given a review, it determines whether the sentiment is positive or negative.
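
As an illustrative sketch (not part of the original card), scoring a single review could look roughly like this, reusing the tokenizer-loading line given in the note above; the label order (index 0 = negative, index 1 = positive) follows the card's description of the training labels.

```python
import torch
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification

# Illustrative sketch: load the tokenizer as described in the note above, load the
# fine-tuned model, and score one review. Label order (0 = negative, 1 = positive)
# is taken from the card's description of the training labels.
tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-uncased",
    config=AutoConfig.from_pretrained("potatobunny/results-yelp"),
)
model = AutoModelForSequenceClassification.from_pretrained("potatobunny/results-yelp")

inputs = tokenizer("The pasta was excellent and the service was friendly.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({"negative": probs[0].item(), "positive": probs[1].item()})
```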
## Training and evaluation data
The training and evaluation data were both obtained from the same Yelp dataset. The data was split into 70% training and 30% validation.
<!-- ## Training procedure -->
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
The training loss obtained was 0.265741667.
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
htufgg/roberta-finetuned-CPV_Spanish | 36c612bda40ba02fa1481d00df68157cba5f4fa3 | 2022-04-14T09:01:23.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | htufgg | null | htufgg/roberta-finetuned-CPV_Spanish | 3 | null | transformers | 22,197 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-finetuned-CPV_Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-CPV_Spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0422
- F1: 0.7739
- Roc Auc: 0.8704
- Accuracy: 0.7201
- Coverage Error: 11.5798
- Label Ranking Average Precision Score: 0.7742
## Model description
More information needed
## Intended uses & limitations
More information needed
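
The section is left open in the card; the reported metrics (ROC AUC, coverage error, label ranking average precision) suggest multi-label classification over CPV codes, so this unofficial sketch applies a sigmoid with an assumed 0.5 threshold. The example sentence is made up.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Unofficial sketch: multi-label prediction of CPV categories for one text.
# The sigmoid + 0.5 threshold is an assumption, as is the example sentence.
tokenizer = AutoTokenizer.from_pretrained("htufgg/roberta-finetuned-CPV_Spanish")
model = AutoModelForSequenceClassification.from_pretrained("htufgg/roberta-finetuned-CPV_Spanish")

text = "Servicios de mantenimiento y reparación de carreteras"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```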
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0579 | 1.0 | 2039 | 0.0548 | 0.6327 | 0.7485 | 0.5274 | 21.7879 | 0.5591 |
| 0.0411 | 2.0 | 4078 | 0.0441 | 0.7108 | 0.8027 | 0.6386 | 16.8647 | 0.6732 |
| 0.0294 | 3.0 | 6117 | 0.0398 | 0.7437 | 0.8295 | 0.6857 | 14.6700 | 0.7249 |
| 0.0223 | 4.0 | 8156 | 0.0389 | 0.7568 | 0.8453 | 0.7056 | 13.3552 | 0.7494 |
| 0.0163 | 5.0 | 10195 | 0.0397 | 0.7626 | 0.8569 | 0.7097 | 12.5895 | 0.7620 |
| 0.0132 | 6.0 | 12234 | 0.0395 | 0.7686 | 0.8620 | 0.7126 | 12.1926 | 0.7656 |
| 0.0095 | 7.0 | 14273 | 0.0409 | 0.7669 | 0.8694 | 0.7109 | 11.5978 | 0.7700 |
| 0.0066 | 8.0 | 16312 | 0.0415 | 0.7705 | 0.8726 | 0.7107 | 11.4252 | 0.7714 |
| 0.0055 | 9.0 | 18351 | 0.0417 | 0.7720 | 0.8689 | 0.7163 | 11.6987 | 0.7716 |
| 0.0045 | 10.0 | 20390 | 0.0422 | 0.7739 | 0.8704 | 0.7201 | 11.5798 | 0.7742 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
QuickRead/PPO-policy_v2 | 5e671dda61e723d7a208482900f866522ec8d7d6 | 2022-04-14T23:56:45.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | QuickRead | null | QuickRead/PPO-policy_v2 | 3 | null | transformers | 22,198 | Entry not found |
nepp1d0/SingleBertModel-ProtBertfinetuned-smilesBindingDB | 3a7e1669ea0948f3ae55436308b201ebe3f6339a | 2022-04-29T12:23:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | nepp1d0 | null | nepp1d0/SingleBertModel-ProtBertfinetuned-smilesBindingDB | 3 | null | transformers | 22,199 | ---
tags:
- generated_from_trainer
model-index:
- name: SingleBertModel-ProtBertfinetuned-smilesBindingDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SingleBertModel-ProtBertfinetuned-smilesBindingDB
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
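
For orientation, these settings map onto `transformers.TrainingArguments` roughly as follows; this is an illustrative reconstruction, not the authors' actual training script, and the output directory name is assumed.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="SingleBertModel-ProtBertfinetuned-smilesBindingDB",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```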
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5245 | 1.0 | 10000 | nan |
| 2.5037 | 2.0 | 20000 | nan |
| 2.4967 | 3.0 | 30000 | nan |
| 2.4983 | 4.0 | 40000 | nan |
| 2.4926 | 5.0 | 50000 | nan |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|