modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
moussaKam/tiny_bert-base_bert-score | 28875f2532114328634580eb1ccfd76e74ed877b | 2021-11-26T14:53:19.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | moussaKam | null | moussaKam/tiny_bert-base_bert-score | 2 | null | transformers | 24,500 | Entry not found |
mptrigo/run1 | af248e51b7a75a2891cf561cc9999fcdfd4df258 | 2022-01-20T10:37:49.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | mptrigo | null | mptrigo/run1 | 2 | null | transformers | 24,501 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: run1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 8.4217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-es](https://huggingface.co/Helsinki-NLP/opus-mt-es-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1740
- Bleu: 8.4217
- Gen Len: 15.9457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 250 | 4.2342 | 0.8889 | 83.4022 |
| 4.6818 | 2.0 | 500 | 3.7009 | 4.1671 | 35.587 |
| 4.6818 | 3.0 | 750 | 3.4737 | 7.6414 | 23.9674 |
| 3.4911 | 4.0 | 1000 | 3.3713 | 7.7512 | 18.6957 |
| 3.4911 | 5.0 | 1250 | 3.2689 | 8.0901 | 19.4674 |
| 3.0164 | 6.0 | 1500 | 3.2194 | 8.5708 | 25.0543 |
| 3.0164 | 7.0 | 1750 | 3.1853 | 9.5275 | 23.9239 |
| 2.6954 | 8.0 | 2000 | 3.1562 | 8.5635 | 18.9674 |
| 2.6954 | 9.0 | 2250 | 3.1564 | 8.2031 | 17.5978 |
| 2.4503 | 10.0 | 2500 | 3.1314 | 8.5638 | 18.1522 |
| 2.4503 | 11.0 | 2750 | 3.1511 | 8.8428 | 17.913 |
| 2.2554 | 12.0 | 3000 | 3.1513 | 8.1244 | 17.0 |
| 2.2554 | 13.0 | 3250 | 3.1664 | 8.0157 | 16.2717 |
| 2.1202 | 14.0 | 3500 | 3.1656 | 8.7758 | 16.6087 |
| 2.1202 | 15.0 | 3750 | 3.1550 | 8.4637 | 16.4565 |
| 2.0082 | 16.0 | 4000 | 3.1702 | 8.2488 | 15.8587 |
| 2.0082 | 17.0 | 4250 | 3.1725 | 8.609 | 16.3043 |
| 1.9274 | 18.0 | 4500 | 3.1750 | 8.4476 | 15.8043 |
| 1.9274 | 19.0 | 4750 | 3.1734 | 8.4753 | 16.5543 |
| 1.888 | 20.0 | 5000 | 3.1740 | 8.4217 | 15.9457 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
mrm8488/deberta-v3-small-finetuned-squad | 46b3f081f0213991682a70a54f08a04e9900b576 | 2021-11-21T21:14:48.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/deberta-v3-small-finetuned-squad | 2 | 1 | transformers | 24,502 | Entry not found |
mrm8488/electra-base-finetuned-squadv1 | 573e287d1cb0bb8ffc70008975154131a11aba0c | 2020-12-11T21:53:55.000Z | [
"pytorch",
"electra",
"question-answering",
"en",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/electra-base-finetuned-squadv1 | 2 | null | transformers | 24,503 | ---
language: en
---
# Electra base ⚡ + SQuAD v1 ❓
[Electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
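As a rough illustration of that replaced-token-detection objective (this snippet is not part of the original card; the corrupted sentence and token are made up), the base discriminator can be queried directly to see which tokens it flags as fake:
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

# Simulate a generator corruption: "cooked" was replaced by "flew"
sentence = "The chef flew the meal"
inputs = tokenizer(sentence, return_tensors="pt")

logits = discriminator(**inputs).logits[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, logits):
    # Positive logits mean the discriminator believes the token was replaced
    print(f"{token:>10s}  {'fake' if score > 0 else 'real'}")
```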
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'google/electra-base-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1.json' \
--predict_file '/content/dataset/dev-v1.1.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **83.03** |
| **F1** | **90.77** |
| **Size**| **+ 400 MB** |
Very good metrics for such a "small" model!
```json
{
  "exact": 83.03689687795648,
  "f1": 90.77486052446231,
  "total": 10570,
  "HasAns_exact": 83.03689687795648,
  "HasAns_f1": 90.77486052446231,
  "HasAns_total": 10570,
  "best_exact": 83.03689687795648,
  "best_exact_thresh": 0.0,
  "best_f1": 90.77486052446231,
  "best_f1_thresh": 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-base-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
{'answer': 'A new strain of flu', 'end': 19, 'score': 0.9995211430099182, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/electricidad-small-finetuned-muchocine | 0440889050df65424f766b01841b35f4f53b3468 | 2021-01-09T04:46:14.000Z | [
"pytorch",
"electra",
"text-classification",
"es",
"dataset:muchocine",
"transformers",
"sentiment",
"analysis",
"spanish"
] | text-classification | false | mrm8488 | null | mrm8488/electricidad-small-finetuned-muchocine | 2 | 2 | transformers | 24,504 | ---
language: es
datasets:
- muchocine
widget:
- text: "Una buena película, sin más."
tags:
- sentiment
- analysis
- spanish
---
# Electricidad-small fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎
[Electricidad](https://huggingface.co/mrm8488/electricidad-small-discriminator) small fine-tuned on [muchocine](https://huggingface.co/datasets/muchocine) dataset for Spanish **Sentiment Analysis** downstream task.
## Fast usage with `pipelines` 🚀
```python
# pip install -q transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer
CHKPT = 'mrm8488/electricidad-small-finetuned-muchocine'
model = AutoModelForSequenceClassification.from_pretrained(CHKPT)
tokenizer = AutoTokenizer.from_pretrained(CHKPT)
from transformers import pipeline
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
# It ranks your comments between 1 and 5 (stars)
classifier('Es una obra maestra. Brillante.')
classifier('Es una película muy buena.')
classifier('Una buena película, sin más.')
classifier('Esperaba mucho más.')
classifier('He tirado el dinero. Una basura. Vergonzoso.')
``` |
mrm8488/roberta-base-bne-finetuned-sqac | 19436e6c5157b575ce1741d58d9fc6cd349ab2c9 | 2021-10-05T15:03:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"es",
"dataset:sqac",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/roberta-base-bne-finetuned-sqac | 2 | 1 | transformers | 24,505 | ---
language: es
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sqac
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-sqac
results:
- task:
name: Question Answering
type: Question-Answering
dataset:
name: sqac
type: sqac
args:
metrics:
- name: f1
type: f1
value: 0.7903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9971 | 1.0 | 1196 | 0.8646 |
| 0.482 | 2.0 | 2392 | 0.9334 |
| 0.1652 | 3.0 | 3588 | 1.2111 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
mrm8488/squeezebert-finetuned-squadv1 | a0f93afaaf3809fbb2a4bec319567025209094e7 | 2020-12-11T21:55:22.000Z | [
"pytorch",
"squeezebert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2006.11316",
"arxiv:2004.02984",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/squeezebert-finetuned-squadv1 | 2 | null | transformers | 24,506 | ---
language: en
datasets:
- squad
---
# SqueezeBERT + SQuAD (v1.1)
[squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) fine-tuned on [SQUAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.
## Details of SqueezeBERT
This model, `squeezebert-uncased`, is pretrained for the English language with masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives.
SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/).
The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone.
More about the model [here](https://arxiv.org/abs/2004.02984)
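To make the grouped-convolution idea concrete, here is a minimal PyTorch sketch (not part of the original card; the hidden size of 768 and the choice of 4 groups are illustrative assumptions) comparing a pointwise fully-connected layer with a grouped-convolution replacement:
```python
import torch
import torch.nn as nn

hidden = 768                               # BERT-base hidden size
x = torch.randn(1, hidden, 128)            # (batch, channels, sequence length)

dense = nn.Conv1d(hidden, hidden, kernel_size=1, groups=1)    # equivalent to a pointwise FC layer
grouped = nn.Conv1d(hidden, hidden, kernel_size=1, groups=4)  # grouped convolution, SqueezeBERT-style

print(sum(p.numel() for p in dense.parameters()))    # ~590K parameters
print(sum(p.numel() for p in grouped.parameters()))  # ~148K parameters: fewer weights, less compute
print(grouped(x).shape)                              # torch.Size([1, 768, 128]) - same output shape
```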
## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python /content/transformers/examples/question-answering/run_squad.py \
--model_type bert \
--model_name_or_path squeezebert/squeezebert-uncased \
--do_eval \
--do_train \
--do_lower_case \
--train_file /content/dataset/train-v1.1.json \
--predict_file /content/dataset/dev-v1.1.json \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 15 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/output_dir \
--overwrite_output_dir \
--save_steps 2000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **76.66** |
| **F1** | **85.83** |
Model Size: **195 MB**
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/squeezebert-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'Who did identified it ?'
})
# Output: {'answer': 'scientists.', 'end': 106, 'score': 0.6988425850868225, 'start': 96}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/t5-base-finetuned-math-linear-algebra-1d | b7823a22b3b04166a2f3fd3c63d8dac3e9161250 | 2020-08-18T17:40:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-math-linear-algebra-1d | 2 | 1 | transformers | 24,507 | Entry not found |
mrm8488/t5-base-finetuned-quarel | 314ba74577d9e1f1bc37e798b11ff59a4e9d04ab | 2021-06-23T12:55:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:quarel",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-quarel | 2 | null | transformers | 24,508 | ---
language: en
datasets:
- quarel
---
# T5-base fine-tuned on QuaRel
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QuaRel](https://allenai.org/data/quarel) for **QA** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the dataset 📚
**QuaRel**: *[A Dataset and Models for Answering Questions about Qualitative Relationships](https://www.semanticscholar.org/paper/QuaRel%3A-A-Dataset-and-Models-for-Answering-about-Tafjord-Clark/51004bc6461a572e1189a0e3b32b441155d760ce)*
Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The **context** passed to the *encoder* is the `logical_form_pretty` field (example: `qrel(speed, higher, ice) -> qrel(smoothness, higher, snow) ; qrel(smoothness, higher, ice)`). The **question** is just the `question` field. The **answer** passed to the *decoder* is obtained from the `question` field using the `answer_index` field. More details about the dataset format/fields can be found [here](https://huggingface.co/nlp/viewer/?dataset=quarel).
## Metrics on validation set 📋
| Metric | Score |
|--------|-------|
|Accuracy (EM) | **67.98**|
## Model in Action 🚀
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-quarel")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-quarel")
def get_response(question, context, max_length=32):
input_text = 'question: %s context: %s' % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
return tokenizer.decode(output[0])
question = 'As the train left the station it crossed the bridge and being farther away it looked (A) larger (B) smaller'
context = 'qrel(distance, higher, Train on a bridge) -> qrel(apparentSize, higher, Train on a bridge) ; qrel(apparentSize, lower, Train on a bridge)'
get_response(question, context)
# output: 'smaller'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrp/distilbert-base-uncased-finetuned-imdb | 69859451907bd8f60fa49138f69148c694bc4cc4 | 2022-01-19T08:44:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | mrp | null | mrp/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 24,509 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mrp/marian-finetuned-kde4-en-to-fr | ba8bf152cf80a3a18245ee4b74acf2c11bb8645f | 2022-01-20T04:05:30.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | mrp | null | mrp/marian-finetuned-kde4-en-to-fr | 2 | null | transformers | 24,510 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 50.20410659441166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Bleu: 50.2041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mudes/en-large | 8cd519f7b74c0e16e57f0667d404401b036186d5 | 2021-05-20T18:36:06.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"en",
"arxiv:2102.09665",
"arxiv:2104.04630",
"transformers",
"mudes",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | mudes | null | mudes/en-large | 2 | null | transformers | 24,511 | ---
language: en
tags:
- mudes
license: apache-2.0
---
# MUDES - Multilingual Detection of Offensive Spans
We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630).
## Usage
You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed:
```bash
pip install mudes
```
Then you can use the model like this:
```python
from mudes.app.mudes_app import MUDESApp
app = MUDESApp("en-large", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```
## System Demonstration
An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out in [here](http://rgcl.wlv.ac.uk/mudes/).
## Citing & Authors
If you find this model helpful, feel free to cite our publications
```bibtex
@inproceedings{ranasinghemudes,
title={{MUDES: Multilingual Detection of Offensive Spans}},
author={Tharindu Ranasinghe and Marcos Zampieri},
booktitle={Proceedings of NAACL},
year={2021}
}
```
```bibtex
@inproceedings{ranasinghe2021semeval,
title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
booktitle={Proceedings of SemEval},
year={2021}
}
``` |
mwesner/layoutlmv2-cord | e1790bc03c38ad9cc697eba0d1b54fa280eb69e5 | 2022-02-23T19:46:50.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mwesner | null | mwesner/layoutlmv2-cord | 2 | null | transformers | 24,512 | Entry not found |
namanrana16/DialoGPT-small-House | 9ffff14c4a7c65c0aa6c1aee7a116b08ce1ed773 | 2021-11-11T08:23:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"huggingtweets"
] | text-generation | false | namanrana16 | null | namanrana16/DialoGPT-small-House | 2 | null | transformers | 24,513 | ---
tags:
- huggingtweets
widget:
- text: ""
---
# House BOT |
napsternxg/scibert_scivocab_uncased_ft_mlm_SDU21_AI | 4a1376270e4a74da1b9089be5c2a61c25eecd859 | 2021-05-20T01:10:55.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | napsternxg | null | napsternxg/scibert_scivocab_uncased_ft_mlm_SDU21_AI | 2 | null | transformers | 24,514 | scibert_scivocab_uncased_ft_mlm MLM pretrained on SDU21 Task 1 + 2
|
naram92/distilgpt2-finetuned-wikitext2 | 27a9784133b5b81ce487a232e884337fc11525c8 | 2021-10-01T21:00:11.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | naram92 | null | naram92/distilgpt2-finetuned-wikitext2 | 2 | null | transformers | 24,515 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
naram92/distilroberta-base-finetuned-wikitext2 | a836137a8032c30f19af8575dbc8ad300cb07be0 | 2021-10-04T19:49:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | naram92 | null | naram92/distilroberta-base-finetuned-wikitext2 | 2 | null | transformers | 24,516 | Entry not found |
nateraw/custom-torch-model | 501ae06b1969e4cdf22281fe4ccf387e443fff29 | 2021-07-06T08:33:17.000Z | [
"pytorch",
"transformers"
] | null | false | nateraw | null | nateraw/custom-torch-model | 2 | null | transformers | 24,517 | Entry not found |
nateraw/my-cool-timm-model | d42d91024b87800f22d62e35d41c922d37a1cb02 | 2021-11-15T19:55:45.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nateraw | null | nateraw/my-cool-timm-model | 2 | null | timm | 24,518 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for my-cool-timm-model |
nateraw/resnet152 | f1c3635599733f4f7470a578d2fc309495e477ab | 2021-04-13T10:00:38.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/resnet152 | 2 | null | transformers | 24,519 | Entry not found |
nates-test-org/cait_xs24_384 | 25b09eae1f0b081f4b9c25a061cd60e3c6d30ffc | 2021-10-29T04:31:21.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_xs24_384 | 2 | null | timm | 24,520 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_xs24_384 |
nates-test-org/cait_xxs36_224 | 0f8f7abc22c35d2b60b36f814e421a4dea3cf6b6 | 2021-10-29T04:34:40.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_xxs36_224 | 2 | null | timm | 24,521 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_xxs36_224 |
nates-test-org/coat_lite_tiny | 6a2e8fac0ed1879bc37179823919cfe859c7d361 | 2021-10-29T04:38:49.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/coat_lite_tiny | 2 | null | timm | 24,522 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for coat_lite_tiny |
nates-test-org/coat_tiny | 2bc6f50e015855cf1b19217f73d2e03847b183f6 | 2021-10-29T04:40:00.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/coat_tiny | 2 | null | timm | 24,523 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for coat_tiny |
navteca/qnli-electra-base | 807250d39ecddc9c39733c97bb0a9fc34d8f154f | 2021-03-25T15:53:55.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"arxiv:1804.07461",
"sentence-transformers",
"license:mit"
] | text-classification | false | navteca | null | navteca/qnli-electra-base | 2 | null | sentence-transformers | 24,524 | ---
language: en
license: mit
pipeline_tag: text-classification
tags:
- sentence-transformers
---
# Cross-Encoder for QNLI
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator).
## Training Data
Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('navteca/qnli-electra-base')
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
print(scores)
```
|
nazareno/bertimbau-socioambiental | c5d3266f23669acbd267ae18e05256207b51a260 | 2021-09-16T19:11:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nazareno | null | nazareno/bertimbau-socioambiental | 2 | null | transformers | 24,525 | Entry not found |
ncduy/distilbert-base-cased-distilled-squad-finetuned-squad-small | 49c474d80c6045a4a4092b994850f41dd5e01d2a | 2021-12-09T12:41:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ncduy | null | ncduy/distilbert-base-cased-distilled-squad-finetuned-squad-small | 2 | null | transformers | 24,526 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-cased-distilled-squad-finetuned-squad-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-distilled-squad-finetuned-squad-small
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nchervyakov/super-model | 6f91717b56e57bf72a0cb3694b795a8e0e345302 | 2021-05-20T01:28:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nchervyakov | null | nchervyakov/super-model | 2 | null | transformers | 24,527 | hello |
ncoop57/codeformer-java | ca3f7fa6c2e9dfc302f8febdd0437d3e8d19a83e | 2021-09-30T14:18:00.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | ncoop57 | null | ncoop57/codeformer-java | 2 | null | sentence-transformers | 24,528 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ncoop57/codeformer-java
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ncoop57/codeformer-java')
embeddings = model.encode(sentences)
print(embeddings)
```
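Since the card mentions clustering and semantic search, here is a minimal (hypothetical) sketch of using the embeddings for code search with cosine similarity; the query/snippet pairing and the example snippets are assumptions, not part of the original card:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ncoop57/codeformer-java')

query = "read a file into a string"
snippets = [
    "public String readFile(Path p) throws IOException { return Files.readString(p); }",
    "public int add(int a, int b) { return a + b; }",
]

# Embed the query and candidate snippets, then rank by cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
snippet_embs = model.encode(snippets, convert_to_tensor=True)
scores = util.cos_sim(query_emb, snippet_embs)
print(scores)  # higher score = more semantically similar
```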
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ncoop57/codeformer-java)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 14202 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
negfir/new-wikitext2 | b47668655c03e8a64848b88f8094bd126c932c4d | 2022-03-09T16:53:41.000Z | [
"pytorch",
"tensorboard",
"squeezebert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/new-wikitext2 | 2 | null | transformers | 24,529 | Entry not found |
new5558/wangchan-course | c083c8028bd365d76a91665ff5dd457bbf328445 | 2021-12-05T20:55:07.000Z | [
"pytorch",
"tf",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | new5558 | null | new5558/wangchan-course | 2 | null | transformers | 24,530 | hello
hello
|
nfliu/roberta_s2orc_books_wiki_bpe_32k | e2c0265230ec34c98f66a8b00d310282add44557 | 2021-12-08T21:56:00.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nfliu | null | nfliu/roberta_s2orc_books_wiki_bpe_32k | 2 | null | transformers | 24,531 | Entry not found |
nfliu/roberta_s2orc_bpe_32k | 2155bfba2e89ecd8d3b636d876a1ff84bbc9b2c0 | 2021-12-08T22:05:14.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nfliu | null | nfliu/roberta_s2orc_bpe_32k | 2 | null | transformers | 24,532 | Entry not found |
niclas/model_sv_4 | eff9a40a667916610754f8c51397072408a41442 | 2021-12-22T23:49:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | niclas | null | niclas/model_sv_4 | 2 | null | transformers | 24,533 | Entry not found |
niclas/models_sv_7 | 222800c14a4c8f28f63bf5de455696881b385cf1 | 2022-02-21T21:18:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | niclas | null | niclas/models_sv_7 | 2 | null | transformers | 24,534 | Entry not found |
nicoladecao/msmarco-word2vec256000-distilbert-base-uncased | b11fda7b499374f6c5423100e5f4b2f350f48c0c | 2022-02-18T11:57:55.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | nicoladecao | null | nicoladecao/msmarco-word2vec256000-distilbert-base-uncased | 2 | null | transformers | 24,535 | ---
license: mit
---
|
nielsr/canine-c | 30848eb7ceb4581a2a99e138eb756d69324bebba | 2021-06-29T11:37:15.000Z | [
"pytorch",
"canine",
"feature-extraction",
"transformers"
] | feature-extraction | false | nielsr | null | nielsr/canine-c | 2 | null | transformers | 24,536 | Entry not found |
nielsr/coref-bert-large | 26b6e0a353e83236a8cbaf9395cb97e1bdafd0e7 | 2021-01-21T10:06:48.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
] | null | false | nielsr | null | nielsr/coref-bert-large | 2 | null | transformers | 24,537 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefBERT large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefBERT did not write a model card for this model so this model card has been written by me.
## Model description
CorefBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task proposed to enhance coreferential reasoning ability. MRP utilizes a
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefBERT model as inputs.
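As an illustration of that last point, here is a minimal sketch of extracting such features with the Hugging Face API; it assumes the checkpoint loads as a standard BERT encoder via `AutoModel`, which is not verified in the original card:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nielsr/coref-bert-large")
model = AutoModel.from_pretrained("nielsr/coref-bert-large")

text = "The trophy didn't fit in the suitcase because it was too big."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Token-level features: shape (1, sequence_length, hidden_size), usable as classifier inputs
features = outputs.last_hidden_state
print(features.shape)
```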
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
nielsr/deformable-detr-single-scale | e271bd29db67d4918195b83ab33d6d86cd97719c | 2022-02-01T13:26:54.000Z | [
"pytorch",
"deformable_detr",
"transformers"
] | null | false | nielsr | null | nielsr/deformable-detr-single-scale | 2 | null | transformers | 24,538 | Entry not found |
nielsr/deformable-detr-with-box-refine-two-stage | d9f8b025bd654cc5a672499877baf396cf4b40a8 | 2022-02-01T13:18:39.000Z | [
"pytorch",
"deformable_detr",
"transformers"
] | null | false | nielsr | null | nielsr/deformable-detr-with-box-refine-two-stage | 2 | null | transformers | 24,539 | Entry not found |
nielsr/deformable-detr-with-box-refine | 1e8a6c40a4e689ea712908f4a64e3c15c9e1d868 | 2022-02-01T13:21:30.000Z | [
"pytorch",
"deformable_detr",
"transformers"
] | null | false | nielsr | null | nielsr/deformable-detr-with-box-refine | 2 | null | transformers | 24,540 | Entry not found |
nielsr/dino_vitb16 | 7d14921fb6caa80c31d2983a9186054ef85d71e3 | 2021-08-25T11:57:11.000Z | [
"pytorch",
"vit",
"feature-extraction",
"transformers"
] | feature-extraction | false | nielsr | null | nielsr/dino_vitb16 | 2 | null | transformers | 24,541 | I've converted the DINO checkpoints from the [official repo](https://github.com/facebookresearch/dino):
You can use it as follows:
```python
from transformers import ViTModel
model = ViTModel.from_pretrained("nielsr/dino_vitb16", add_pooling_layer=False)
``` |
nielsr/tapex-large | 15e33c38efaf7afb49578392b1211ef3235bae13 | 2022-05-17T07:31:39.000Z | [
"pytorch",
"tapex",
"text2text-generation",
"en",
"arxiv:2107.07653",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | nielsr | null | nielsr/tapex-large | 2 | 1 | transformers | 24,542 | ---
language: en
tags:
- tapex
license: apache-2.0
inference: false
---
TAPEX-large, pre-trained only (no downstream fine-tuning). The model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen and Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
To load it and run inference, you can do the following:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large")
# create table
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
# turn into dict
table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]}
# turn into format TAPEX expects
# define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
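# NOTE: the class below is NOT from the original card - it is a minimal sketch of such a linearizer,
# assuming the "col : ... row 1 : ... row 2 : ..." flattening used by TAPEX; copy the linked file for the exact behaviour.
class IndexedRowTableLinearize:
    def process_table(self, table_dict):
        # Flatten {"header": [...], "rows": [[...], ...]} into a single string
        text = "col : " + " | ".join(table_dict["header"])
        for i, row in enumerate(table_dict["rows"], start=1):
            text += f" row {i} : " + " | ".join(str(v) for v in row)
        return text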
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
# add query
query = "SELECT ... FROM ..."
joint_input = query + " " + linear_table
# encode
encoding = tokenizer(joint_input, return_tensors="pt")
# forward pass
outputs = model.generate(**encoding)
# decode
tokenizer.batch_decode(outputs, skip_special_tokens=True)
``` |
nikitam/mbert-resp-en-it | 87da19aa704b89530b55ae90487634b8a3ba8926 | 2021-10-25T20:32:42.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-resp-en-it | 2 | null | transformers | 24,543 | Entry not found |
nikitam/mbert-xdm-en-it | 15e9699fbeab3a64601b5f16a8f2a73ecbbbd5a1 | 2021-10-25T21:36:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-xdm-en-it | 2 | null | transformers | 24,544 | Entry not found |
nikkindev/cave | 5d8ce6b71d867b764c9e4f22c38e0d05433d8717 | 2021-06-04T12:11:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nikkindev | null | nikkindev/cave | 2 | null | transformers | 24,545 | ---
tags:
- conversational
---
Cave Johnson in town! |
ninahrostozova/xlm-roberta-base-finetuned-marc | a99540aded8711da06ad6fd0a990d98414f39bdd | 2021-10-16T11:27:29.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ninahrostozova | null | ninahrostozova/xlm-roberta-base-finetuned-marc | 2 | null | transformers | 24,546 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1698
- Mae: 0.6090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1662 | 1.0 | 333 | 1.2084 | 0.7068 |
| 1.0122 | 2.0 | 666 | 1.1698 | 0.6090 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nlokam/ada_V.6 | 2d18a3f650b0cdfd58f324a0bbbb9d7cc790af5a | 2022-01-29T18:07:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nlokam | null | nlokam/ada_V.6 | 2 | null | transformers | 24,547 | ---
tags:
- conversational
---
# Ada model |
nlokam/ada_V.7 | e0660b7a1b4a10cff0e167204f406404b96d3786 | 2022-06-11T20:15:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nlokam | null | nlokam/ada_V.7 | 2 | null | transformers | 24,548 | ---
tags:
- conversational
---
# Ada model |
nlp-en-es/bertin-large-finetuned-sqac | 3777b086656e4f769c05ab1a45edc6e643d00c0e | 2021-10-03T17:23:54.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"transformers",
"QA",
"Q&A",
"autotrain_compatible"
] | question-answering | false | nlp-en-es | null | nlp-en-es/bertin-large-finetuned-sqac | 2 | 2 | transformers | 24,549 | ---
language: es
tags:
- QA
- Q&A
datasets:
- BSC-TeMU/SQAC
---
# BERTIN (large) fine-tuned on **SQAC** for Spanish **QA** 📖❓
[BERTIN](https://huggingface.co/flax-community/bertin-roberta-large-spanish) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) for **Q&A** downstream task. |
nlp-en-es/roberta-base-bne-finetuned-sqac | 2cdaab873b4f624a8adb627a1b4dd47babda90bc | 2021-10-05T15:03:51.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:sqac",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | nlp-en-es | null | nlp-en-es/roberta-base-bne-finetuned-sqac | 2 | 1 | transformers | 24,550 | ---
language: es
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sqac
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-sqac
results:
- task:
name: Question Answering
type: Question-Answering
dataset:
name: sqac
type: sqac
args:
metrics:
- name: f1
type: f1
value: 0.7903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9971 | 1.0 | 1196 | 0.8646 |
| 0.482 | 2.0 | 2392 | 0.9334 |
| 0.1652 | 3.0 | 3588 | 1.2111 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
nlpconnect/dpr-nq-reader-roberta-base | 6917660838ccbfa2deea31fbdfbd3c71975d3e4f | 2022-01-02T09:50:08.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpconnect | null | nlpconnect/dpr-nq-reader-roberta-base | 2 | null | transformers | 24,551 | Entry not found |
nlpunibo/distilbert_base_config1 | 284eefd81a2656d5e73716382cc7d39304b36852 | 2021-02-19T14:31:23.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_base_config1 | 2 | null | transformers | 24,552 | Entry not found |
nlpunibo/distilbert_base_config2 | 957d5d798d1db96fbe63331e8114234b69924904 | 2021-02-19T14:36:05.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_base_config2 | 2 | null | transformers | 24,553 | Entry not found |
nlpunibo/distilbert_classifier2 | d198d639c6880f98f9910b533f0577f4b9059547 | 2021-02-20T15:04:51.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | nlpunibo | null | nlpunibo/distilbert_classifier2 | 2 | null | transformers | 24,554 | Entry not found |
nlpunibo/distilbert_convolutional_classifier | 7ebe541e2dd0e9b39a325cbd493492048569a2ef | 2021-03-21T14:59:17.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_convolutional_classifier | 2 | null | transformers | 24,555 | Entry not found |
nlrgroup/Alice_fine_tuned | 27ddd6e43f7630707b3a2ae990bc5b68e6aa879c | 2021-12-01T21:30:19.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
] | text-generation | false | nlrgroup | null | nlrgroup/Alice_fine_tuned | 2 | null | transformers | 24,556 | Entry not found |
nostalgebraist/nostalgebraist-autoresponder-2_7b | 80b1991dbce13af9fea669b414b5b59c0548832c | 2021-05-15T02:36:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | nostalgebraist | null | nostalgebraist/nostalgebraist-autoresponder-2_7b | 2 | null | transformers | 24,557 | |
notentered/roberta-base-finetuned-cola | 4702b3a33548f56250ae9a4f194c8ee05404b941 | 2022-02-18T10:27:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | notentered | null | notentered/roberta-base-finetuned-cola | 2 | null | transformers | 24,558 | Entry not found |
ntrnghia/mrpc_vn | a1cd84b343d47e4ef2704c5db01b1f3f456b6ba0 | 2021-05-20T02:08:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ntrnghia | null | ntrnghia/mrpc_vn | 2 | null | transformers | 24,559 | Entry not found |
nws/test_model | 2ee7c454ba400007c50375eed26c04cd612626f2 | 2021-11-02T12:48:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nws | null | nws/test_model | 2 | null | transformers | 24,560 | Entry not found |
nytestalkerq/DialoGPT-medium-joshua | 38e3ee0db9f80a839c9bb9c0cf595a08774ac6f4 | 2021-06-04T02:29:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | nytestalkerq | null | nytestalkerq/DialoGPT-medium-joshua | 2 | null | transformers | 24,561 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
nyu-mll/roberta-base-100M-2 | 33062ce4961fc447815f39b410a5a272a6a4728a | 2021-05-20T18:54:59.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-100M-2 | 2 | null | transformers | 24,562 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). For each pretraining data size, we release the 3 models with the lowest validation perplexities out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
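As a quick way to check the sizes above and try this particular checkpoint, here is a minimal sketch (not part of the original card; the masked sentence is arbitrary):
```python
from transformers import AutoConfig, pipeline

# Inspect the configuration of this BASE-size checkpoint (L=12, AH=12, HS=768, FFN=3072).
config = AutoConfig.from_pretrained("nyu-mll/roberta-base-100M-2")
print(config.num_hidden_layers, config.num_attention_heads, config.hidden_size, config.intermediate_size)

# The checkpoint is a standard RoBERTa masked-language model; "<mask>" is the mask token.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-100M-2")
print(fill_mask("The capital of France is <mask>."))
```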
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
nyu-mll/roberta-base-10M-2 | 3155af7150884a514e02fd57a579a8edcdb0154e | 2021-05-20T18:58:09.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-10M-2 | 2 | null | transformers | 24,563 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). For each pretraining data size, we release the 3 models with the lowest validation perplexities out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
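A minimal usage sketch for this particular checkpoint (not part of the original card; the masked sentence is arbitrary):
```python
from transformers import pipeline

# Standard RoBERTa masked-language modeling; "<mask>" is the mask token.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-10M-2")
print(fill_mask("The capital of France is <mask>."))
```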
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
nyu-mll/roberta-base-1B-2 | 569f553e71d3c43260f81a7b79389b7bdc96a9ca | 2021-05-20T19:04:39.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-1B-2 | 2 | null | transformers | 24,564 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). For each pretraining data size, we release the 3 models with the lowest validation perplexities out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
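A minimal usage sketch for this particular checkpoint (not part of the original card; the masked sentence is arbitrary):
```python
from transformers import pipeline

# Standard RoBERTa masked-language modeling; "<mask>" is the mask token.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-1B-2")
print(fill_mask("The quick brown fox jumps over the lazy <mask>."))
```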
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
o2poi/sst2-eda-albert | 22cfb692bd2def9a13f0918cd158e4abb4981704 | 2021-06-11T12:57:56.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | o2poi | null | o2poi/sst2-eda-albert | 2 | null | transformers | 24,565 | Entry not found |
o2poi/sst2-eda-bert-uncased | 96ee8a85f0f4be4fc83b0db2feee4e0e2e873791 | 2021-06-11T15:44:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | o2poi | null | o2poi/sst2-eda-bert-uncased | 2 | null | transformers | 24,566 | Entry not found |
o2poi/sst2-eda-roberta | 6ba62de4e8e6f7ceacb1d0af01fb6519277a4afe | 2021-06-11T13:03:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | o2poi | null | o2poi/sst2-eda-roberta | 2 | null | transformers | 24,567 | Entry not found |
obito69/DialoGPT-small-Doctorstrange | 4a47abe56d205fd5f7f6eae9fe0adc9b913786ee | 2021-09-30T16:13:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | obito69 | null | obito69/DialoGPT-small-Doctorstrange | 2 | null | transformers | 24,568 | ---
tags:
- conversational
---
# Doctor Strange DialoGPT model |
obss/mt5-small-3task-highlight-combined3 | 4c99115625b50fc17ea200e606fef7d7487f92b6 | 2021-12-03T23:49:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"transformers",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | obss | null | obss/mt5-small-3task-highlight-combined3 | 2 | null | transformers | 24,569 | ---
language: tr
datasets:
- tquad1
- tquad2
- xquad
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
pipeline_tag: text2text-generation
widget:
- text: "generate question: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi."
example_title: "Question Generation (Movie)"
- text: "generate question: Fatih Sultan Mehmet, Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da <hl> bir antlaşma yaparak <hl> Venedik'le 16 yıllık savaşa son verdi."
example_title: "Question Generation (History)"
- text: "generate question: Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da bir antlaşma yaparak <hl> Venedik'le <hl> 16 yıllık savaşa sona verdi."
example_title: "Question Generation (History 2)"
- text: "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>"
example_title: "Answer Extraction (History)"
- text: "question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
example_title: "Answer Extraction (Open Domain)"
license: cc-by-4.0
---
# mt5-small for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-highlight-combined3')
```
## Citation 📜
```
@article{akyon2021automated,
title={Automated question generation and question answering from Turkish texts using text-to-text transformers},
author={Akyon, Fatih Cagatay and Cavusoglu, Devrim and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
journal={arXiv preprint arXiv:2111.06476},
year={2021}
}
```
## Overview ✔️
**Language model:** mt5-small
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train, TQuADv2-val, XQuAD.tr
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://arxiv.org/abs/2111.06476
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-small"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "highlight"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-highlight-combined3')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# c) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
|
odinmay/zackbotmodel | 5d93e5b1b0eb3453e1503326949581784f4cfa03 | 2021-06-03T21:37:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | odinmay | null | odinmay/zackbotmodel | 2 | null | transformers | 24,570 | ---
tags:
- conversational
--- |
ohshimalab/bert-base-minpaku | acf6b651419d0ae645bb5c3bcd7bc58f635c9ff6 | 2022-02-08T05:22:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ohshimalab | null | ohshimalab/bert-base-minpaku | 2 | null | transformers | 24,571 | ---
license: mit
---
|
orendar/en_he_large | 2cd4d1135007ede8fdf1e36c9eb8e54918c2ba6e | 2022-05-08T13:14:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | orendar | null | orendar/en_he_large | 2 | null | transformers | 24,572 | Entry not found |
osama7/t5-summarization-multinews | 826dc38918d1702fb1ebe78739fca9cdcbbaa09d | 2022-01-30T20:42:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | osama7 | null | osama7/t5-summarization-multinews | 2 | null | transformers | 24,573 | This is a t5-base model trained on the multi_news dataset for abstraction summarization |
osanseviero/asr-with-transformers-wav2vec2 | 65955e59542a7515b8ff85af136930a380e57c5b | 2021-11-04T15:38:38.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"superb",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | osanseviero | null | osanseviero/asr-with-transformers-wav2vec2 | 2 | null | superb | 24,574 | ---
benchmark: superb
library_name: superb
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- superb
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# Fork of Wav2Vec2-Base-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# define function to read in sound file
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding="longest").input_values # Batch size 2
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
input_values = tokenizer(batch["speech"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 | |
osanseviero/distilbert-base-nli-wkpooling | 29db279c53b58c2ce96e572c6a5956f60e3d5c81 | 2021-05-04T12:35:09.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | osanseviero | null | osanseviero/distilbert-base-nli-wkpooling | 2 | null | transformers | 24,575 | Entry not found |
osanseviero/flair-ner-english | eafe20407ee6a0abc614d9d6eda4c46489b05ff5 | 2021-05-19T14:44:12.000Z | [
"pytorch",
"en",
"dataset:conll2003",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | osanseviero | null | osanseviero/flair-ner-english | 2 | null | flair | 24,576 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
widget:
- text: "George Washington went to Washington"
---
## English NER in Flair (default model) |
osanseviero/my_new_model | 5b6825a74420339c7c014e91ef4c0a284c703f75 | 2021-06-07T14:27:42.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers"
] | feature-extraction | false | osanseviero | null | osanseviero/my_new_model | 2 | null | sentence-transformers | 24,577 | ---
tags:
- sentence-transformers
- feature-extraction
---
# Name of Model
<!--- Describe your model here -->
## Model Description
The model consists of the following layers:
(0) Base Transformer Type: RobertaModel
(1) mean Pooling
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence"]
model = SentenceTransformer('model_name')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('model_name')
model = AutoModel.from_pretrained('model_name')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training Procedure
<!--- Describe how your model was trained -->
## Evaluation Results
<!--- Describe how your model was evaluated -->
## Citing & Authors
<!--- Describe where people can find more information -->
|
osunlp/ReasonBERT-RoBERTa-base | 6e0e8be71a12b7b017a84933718d924b0204954a | 2022-01-23T07:49:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | osunlp | null | osunlp/ReasonBERT-RoBERTa-base | 2 | null | transformers | 24,578 | Entry not found |
owen99630/catexp2 | 2858b787e87a38c972661ff9a7bfed2af75cef2c | 2021-10-26T04:58:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | owen99630 | null | owen99630/catexp2 | 2 | null | transformers | 24,579 | {0: 'Anorexia',
1: 'Anxiety',
2: 'Bullying',
3: 'Care',
4: 'Creativity',
5: 'Culture',
6: 'Depression',
7: 'Friends',
8: 'Getting help',
9: 'Happiness',
10: 'Helping others',
11: 'Helping yourself',
12: 'Hope',
13: 'Learning',
14: 'Life Issues',
15: 'Mental Health',
16: 'Mental Health Matters',
17: 'Mental health awareness',
18: 'PTSD',
19: 'Positivity',
20: 'Resilience',
21: 'Self-care',
22: 'Sharing',
23: 'Support',
24: 'University'} |
owen99630/experience | d32d2af427e3ada77505d7b8906a963c6ec4dc7b | 2021-09-28T12:19:46.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | owen99630 | null | owen99630/experience | 2 | null | transformers | 24,580 | Entry not found |
p208p2002/gpt2-drcd-qg-hl | d053f48e819a18cf829f44a2185a2e20387d0edf | 2021-05-23T10:52:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | p208p2002 | null | p208p2002/gpt2-drcd-qg-hl | 2 | null | transformers | 24,581 | ## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModelForCausalLM,
)
tokenizer = BertTokenizerFast.from_pretrained('p208p2002/gpt2-drcd-qg-hl')
model = AutoModelForCausalLM.from_pretrained('p208p2002/gpt2-drcd-qg-hl')
```
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
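Continuing from the loading snippet above, a rough generation sketch for this input format (not from the original card: the decoding settings and the way the question is read off the continuation are assumptions; the highlighted context is the example shown in the next section):
```python
# Context with the answer span wrapped in [HL]; the model continues it with a question.
context = "哈利·波特是英國作家[HL]羅琳[HL]撰寫的七部幻想小說系列。"
input_ids = tokenizer(context, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 32,  # leave room for the generated question
    num_beams=3,
    early_stopping=True,
)

# Treat everything generated after the input tokens as the question.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```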
### Input Example
```
哈利·波特是英國作家[HL]羅琳[HL]撰寫的七部幻想小說系列。
```
> 誰撰寫哈利·波特? |
pablouribe/bertstem-copus-administration | 54fd227efd6e7e91c8217a5f8fa6acc2b09d7f00 | 2021-11-19T21:23:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/bertstem-copus-administration | 2 | null | transformers | 24,582 | Entry not found |
pablouribe/bertstem-copus-guiding | d1ab0e077a0113934c7ec77937bf08571e0c3594 | 2021-11-30T15:10:04.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/bertstem-copus-guiding | 2 | null | transformers | 24,583 | Entry not found |
pablouribe/bertstem-copus-overfitted | 381881573fb36ab137a81e3a65fb33ee6eb7b514 | 2022-01-18T18:51:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/bertstem-copus-overfitted | 2 | null | transformers | 24,584 | Entry not found |
pablouribe/bertstem-copus-presenting | ca76e55f740efd93bb735d653143d0f84bfb0229 | 2021-11-22T21:54:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/bertstem-copus-presenting | 2 | null | transformers | 24,585 | Entry not found |
paola-md/recipes_italian | df0acc61b51b5cf34d43f214f02e90593ae4dd90 | 2022-01-31T23:33:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | paola-md | null | paola-md/recipes_italian | 2 | null | transformers | 24,586 | Entry not found |
parthshukla/quotes_v1 | a363978d5a21eee1212586885c1501b03f035102 | 2021-07-16T07:09:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | parthshukla | null | parthshukla/quotes_v1 | 2 | null | transformers | 24,587 | Entry not found |
patrickvonplaten/big-bird-base-trivia-qa | fdb7d5354aa128f0ca260db2dd760da82f58ce45 | 2021-03-04T12:13:47.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | patrickvonplaten | null | patrickvonplaten/big-bird-base-trivia-qa | 2 | null | transformers | 24,588 | Entry not found |
patrickvonplaten/bigbird-roberta-base-original-attn | 12642ec53e9b2dc69e7b5f379a0ae726e22c5642 | 2021-03-02T16:11:07.000Z | [
"pytorch",
"big_bird",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | patrickvonplaten | null | patrickvonplaten/bigbird-roberta-base-original-attn | 2 | null | transformers | 24,589 | Entry not found |
patrickvonplaten/hello_2b_3 | 9c30eec9a9df12049c82ee036c9fb8f37708a265 | 2021-11-04T15:11:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/hello_2b_3 | 2 | null | transformers | 24,590 | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: hello_2b_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hello_2b_3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5615
- Wer: 0.9808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6389 | 0.92 | 100 | 3.6218 | 1.0 |
| 1.6676 | 1.85 | 200 | 3.2655 | 1.0 |
| 0.3067 | 2.77 | 300 | 3.2273 | 1.0 |
| 0.1924 | 3.7 | 400 | 3.0238 | 0.9999 |
| 0.1777 | 4.63 | 500 | 2.1606 | 0.9991 |
| 0.1481 | 5.55 | 600 | 1.8742 | 0.9982 |
| 0.1128 | 6.48 | 700 | 2.0114 | 0.9994 |
| 0.1806 | 7.4 | 800 | 1.9032 | 0.9984 |
| 0.0399 | 8.33 | 900 | 2.0556 | 0.9996 |
| 0.0729 | 9.26 | 1000 | 2.0515 | 0.9987 |
| 0.0847 | 10.18 | 1100 | 2.2121 | 0.9995 |
| 0.0777 | 11.11 | 1200 | 1.7002 | 0.9923 |
| 0.0476 | 12.04 | 1300 | 1.5262 | 0.9792 |
| 0.0518 | 12.96 | 1400 | 1.5990 | 0.9832 |
| 0.071 | 13.88 | 1500 | 1.6326 | 0.9875 |
| 0.0333 | 14.81 | 1600 | 1.5955 | 0.9870 |
| 0.0369 | 15.74 | 1700 | 1.5577 | 0.9832 |
| 0.0689 | 16.66 | 1800 | 1.5415 | 0.9839 |
| 0.0227 | 17.59 | 1900 | 1.5450 | 0.9878 |
| 0.0472 | 18.51 | 2000 | 1.5642 | 0.9846 |
| 0.0214 | 19.44 | 2100 | 1.6103 | 0.9846 |
| 0.0289 | 20.37 | 2200 | 1.6467 | 0.9898 |
| 0.0182 | 21.29 | 2300 | 1.5268 | 0.9780 |
| 0.0439 | 22.22 | 2400 | 1.6001 | 0.9818 |
| 0.06 | 23.15 | 2500 | 1.5481 | 0.9813 |
| 0.0351 | 24.07 | 2600 | 1.5672 | 0.9820 |
| 0.0198 | 24.99 | 2700 | 1.6303 | 0.9856 |
| 0.0328 | 25.92 | 2800 | 1.5958 | 0.9831 |
| 0.0245 | 26.85 | 2900 | 1.5745 | 0.9809 |
| 0.0885 | 27.77 | 3000 | 1.5455 | 0.9809 |
| 0.0224 | 28.7 | 3100 | 1.5378 | 0.9824 |
| 0.0223 | 29.63 | 3200 | 1.5642 | 0.9810 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
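A minimal inference sketch (not part of the auto-generated card; `sample.wav` is a placeholder for a 16 kHz mono Turkish recording, and given the ~0.98 WER above the output is only useful as a smoke test):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; decoding the audio file requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/hello_2b_3")
print(asr("sample.wav")["text"])
```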
|
patrickvonplaten/norwegian-roberta-base | 83b0a6d5e9a8b513d68680a1c053ac8c4afb1ced | 2021-05-19T10:12:21.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | patrickvonplaten | null | patrickvonplaten/norwegian-roberta-base | 2 | null | transformers | 24,591 | ## Roberta-Base
This repo trains [roberta-base](https://huggingface.co/roberta-base) from scratch on the [Norwegian training subset of Oscar](https://oscar-corpus.com/) containing roughly 4.7 GB of data according to [this](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) example.
Training is done on a TPUv3-8 in Flax. More statistics on the training run can be found on [TensorBoard.dev](https://tensorboard.dev/experiment/GdYmdak2TWeVz0DDRYOrrg).
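A minimal sketch for trying the resulting checkpoint as a masked-language model (not part of the original notes; the Norwegian example sentence is arbitrary):
```python
from transformers import pipeline

# RoBERTa-style checkpoints use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="patrickvonplaten/norwegian-roberta-base")
print(fill_mask("Oslo er hovedstaden i <mask>."))
```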
|
patrickvonplaten/sew-small-100k-timit | bbec12ddceb80cd347df30f4c496962a8930f041 | 2021-10-27T10:44:41.000Z | [
"pytorch",
"tensorboard",
"sew",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sew-small-100k-timit | 2 | null | transformers | 24,592 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-small-100k-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-small-100k-timit
This model is a fine-tuned version of [asapp/sew-small-100k](https://huggingface.co/asapp/sew-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4926
- Wer: 0.2988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.071 | 0.69 | 100 | 3.0262 | 1.0 |
| 2.9304 | 1.38 | 200 | 2.9297 | 1.0 |
| 2.8823 | 2.07 | 300 | 2.8367 | 1.0 |
| 1.5668 | 2.76 | 400 | 1.2310 | 0.8807 |
| 0.7422 | 3.45 | 500 | 0.7080 | 0.5957 |
| 0.4121 | 4.14 | 600 | 0.5829 | 0.5073 |
| 0.3981 | 4.83 | 700 | 0.5153 | 0.4461 |
| 0.5038 | 5.52 | 800 | 0.4908 | 0.4151 |
| 0.2899 | 6.21 | 900 | 0.5122 | 0.4111 |
| 0.2198 | 6.9 | 1000 | 0.4908 | 0.3803 |
| 0.2129 | 7.59 | 1100 | 0.4668 | 0.3789 |
| 0.3007 | 8.28 | 1200 | 0.4788 | 0.3562 |
| 0.2264 | 8.97 | 1300 | 0.5113 | 0.3635 |
| 0.1536 | 9.66 | 1400 | 0.4950 | 0.3441 |
| 0.1206 | 10.34 | 1500 | 0.5062 | 0.3421 |
| 0.2021 | 11.03 | 1600 | 0.4900 | 0.3283 |
| 0.1458 | 11.72 | 1700 | 0.5019 | 0.3307 |
| 0.1151 | 12.41 | 1800 | 0.4989 | 0.3270 |
| 0.0985 | 13.1 | 1900 | 0.4925 | 0.3173 |
| 0.1412 | 13.79 | 2000 | 0.4868 | 0.3125 |
| 0.1579 | 14.48 | 2100 | 0.4983 | 0.3147 |
| 0.1043 | 15.17 | 2200 | 0.4914 | 0.3091 |
| 0.0773 | 15.86 | 2300 | 0.4858 | 0.3102 |
| 0.1327 | 16.55 | 2400 | 0.5084 | 0.3064 |
| 0.1281 | 17.24 | 2500 | 0.5017 | 0.3025 |
| 0.0845 | 17.93 | 2600 | 0.5001 | 0.3012 |
| 0.0717 | 18.62 | 2700 | 0.4894 | 0.3004 |
| 0.0835 | 19.31 | 2800 | 0.4963 | 0.2998 |
| 0.1181 | 20.0 | 2900 | 0.4926 | 0.2988 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
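A minimal inference sketch (not part of the auto-generated card; `sample.wav` stands in for any 16 kHz English recording):
```python
from transformers import pipeline

# CTC decoding with the fine-tuned SEW checkpoint; decoding the audio file requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/sew-small-100k-timit")
print(asr("sample.wav")["text"])
```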
|
patrickvonplaten/unispeech-sat-base-plus-timit-ft | 67b7c36ab967f831ca5987da5402622c843ba760 | 2021-10-21T10:05:15.000Z | [
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/unispeech-sat-base-plus-timit-ft | 2 | null | transformers | 24,593 | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-sat-base-plus-timit-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-base-plus-timit-ft
This model is a fine-tuned version of [microsoft/unispeech-sat-base-plus](https://huggingface.co/microsoft/unispeech-sat-base-plus) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6549
- Wer: 0.4051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3838 | 0.69 | 100 | 3.2528 | 1.0 |
| 2.9608 | 1.38 | 200 | 2.9682 | 1.0 |
| 2.9574 | 2.07 | 300 | 2.9346 | 1.0 |
| 2.8555 | 2.76 | 400 | 2.7612 | 1.0 |
| 1.7418 | 3.45 | 500 | 1.5732 | 0.9857 |
| 0.9606 | 4.14 | 600 | 1.0014 | 0.7052 |
| 0.8334 | 4.83 | 700 | 0.7691 | 0.6161 |
| 0.852 | 5.52 | 800 | 0.7169 | 0.5997 |
| 0.5707 | 6.21 | 900 | 0.6821 | 0.5527 |
| 0.4235 | 6.9 | 1000 | 0.6078 | 0.5140 |
| 0.4357 | 7.59 | 1100 | 0.5927 | 0.4982 |
| 0.5004 | 8.28 | 1200 | 0.5814 | 0.4826 |
| 0.3757 | 8.97 | 1300 | 0.5951 | 0.4643 |
| 0.2579 | 9.66 | 1400 | 0.5990 | 0.4581 |
| 0.2087 | 10.34 | 1500 | 0.5864 | 0.4488 |
| 0.3155 | 11.03 | 1600 | 0.5836 | 0.4464 |
| 0.2701 | 11.72 | 1700 | 0.6045 | 0.4348 |
| 0.172 | 12.41 | 1800 | 0.6494 | 0.4344 |
| 0.1529 | 13.1 | 1900 | 0.5915 | 0.4241 |
| 0.2411 | 13.79 | 2000 | 0.6156 | 0.4246 |
| 0.2348 | 14.48 | 2100 | 0.6363 | 0.4206 |
| 0.1429 | 15.17 | 2200 | 0.6394 | 0.4161 |
| 0.1151 | 15.86 | 2300 | 0.6186 | 0.4167 |
| 0.1723 | 16.55 | 2400 | 0.6498 | 0.4124 |
| 0.1997 | 17.24 | 2500 | 0.6541 | 0.4076 |
| 0.1297 | 17.93 | 2600 | 0.6546 | 0.4117 |
| 0.101 | 18.62 | 2700 | 0.6471 | 0.4075 |
| 0.1272 | 19.31 | 2800 | 0.6586 | 0.4065 |
| 0.1901 | 20.0 | 2900 | 0.6549 | 0.4051 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
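A minimal inference sketch (not part of the auto-generated card; `sample.wav` stands in for any 16 kHz English recording):
```python
from transformers import pipeline

# Transcribe with the fine-tuned UniSpeech-SAT checkpoint; decoding the audio file requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/unispeech-sat-base-plus-timit-ft")
print(asr("sample.wav")["text"])
```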
|
patrickvonplaten/unispeech-sat-base-plus-timit | 9414135d94c74123c07d699aec13dfa2d3a9f4ab | 2021-10-20T19:43:27.000Z | [
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/unispeech-sat-base-plus-timit | 2 | null | transformers | 24,594 | Entry not found |
patrickvonplaten/wav2vec2-common_voice-ab-demo | 5e09e51d330d5e2766be106c0e5e98ddd75bab67 | 2021-09-22T13:57:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"transformers",
"speech-recognition",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-common_voice-ab-demo | 2 | null | transformers | 24,595 | ---
language:
- ab
license: apache-2.0
tags:
- speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-common_voice-ab-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-ab-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 15.1812
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
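A minimal inference sketch (not part of the auto-generated card; `sample.wav` is a placeholder for a 16 kHz Abkhaz recording — note that with a WER of 1.0 this demo checkpoint is not expected to produce usable transcripts):
```python
from transformers import pipeline

# Smoke test only: load the demo checkpoint and run it on one file (requires ffmpeg).
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-common_voice-ab-demo")
print(asr("sample.wav")["text"])
```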
|
patrickvonplaten/xls-r-300m-tr-phoneme | fa174f09430cb61281cc8c0d9a948b0fc81b526f | 2021-12-21T11:13:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"mozilla-foundation/common_voice_3_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/xls-r-300m-tr-phoneme | 2 | null | transformers | 24,596 | ---
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_3_0
- generated_from_trainer
model-index:
- name: xls-r-300m-tr-phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-tr-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the mozilla-foundation/common_voice_3_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4378
- Wer: 0.09936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000075
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
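A minimal inference sketch (not part of the auto-generated card; `sample.wav` stands in for a 16 kHz Turkish recording, and since this is a phoneme model the output is a phoneme sequence rather than orthographic text):
```python
from transformers import pipeline

# Phoneme-level CTC transcription with the fine-tuned checkpoint (audio decoding requires ffmpeg).
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/xls-r-300m-tr-phoneme")
print(asr("sample.wav")["text"])
```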
|
patrickvonplaten/xprophetnet-decoder-clm-large-uncased | f03909560fba63319d11e1581b05a0396b1d1bc8 | 2020-10-21T10:25:04.000Z | [
"pytorch",
"xlm-prophetnet",
"text-generation",
"transformers"
] | text-generation | false | patrickvonplaten | null | patrickvonplaten/xprophetnet-decoder-clm-large-uncased | 2 | null | transformers | 24,597 | Entry not found |
pelican/3cls_equal_len | 0736236910e284ebdc3bc983bbb21b56914d1f27 | 2021-12-08T17:37:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | pelican | null | pelican/3cls_equal_len | 2 | null | transformers | 24,598 | Entry not found |
pelican/test_model | 1c5fc482c3e343397d661dd39d932843bd2f666b | 2021-12-07T16:01:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | pelican | null | pelican/test_model | 2 | null | transformers | 24,599 | Entry not found |