modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mrm8488/xlm-multi-finetuned-xquadv1 | 5bd796ffd38b4fabb39ee949e2d1516e3268d551 | 2020-12-11T21:56:48.000Z | [
"pytorch",
"xlm",
"question-answering",
"multilingual",
"arxiv:1901.07291",
"arxiv:1910.11856",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/xlm-multi-finetuned-xquadv1 | 3 | null | transformers | 21,600 | ---
language: multilingual
thumbnail:
---
# [XLM](https://github.com/facebookresearch/XLM/) (multilingual version) fine-tuned for multilingual Q&A
Released by `Facebook` together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, and fine-tuned on [XQuAD](https://github.com/deepmind/xquad) for the multilingual (`11 different languages`) **Q&A** downstream task.
## Details of the language model ('xlm-mlm-100-1280')
[Language model](https://github.com/facebookresearch/XLM/#ii-cross-lingual-language-model-pretraining-xlm)
| Languages |
| --------- |
| 100 |
It includes the following languages:
<details>
en-es-fr-de-zh-ru-pt-it-ar-ja-id-tr-nl-pl-simple-fa-vi-sv-ko-he-ro-no-hi-uk-cs-fi-hu-th-da-ca-el-bg-sr-ms-bn-hr-sl-zh_yue-az-sk-eo-ta-sh-lt-et-ml-la-bs-sq-arz-af-ka-mr-eu-tl-ang-gl-nn-ur-kk-be-hy-te-lv-mk-zh_classical-als-is-wuu-my-sco-mn-ceb-ast-cy-kn-br-an-gu-bar-uz-lb-ne-si-war-jv-ga-zh_min_nan-oc-ku-sw-nds-ckb-ia-yi-fy-scn-gan-tt-am
</details>
## Details of the downstream task (multilingual Q&A) - Dataset
Deepmind [XQuAD](https://github.com/deepmind/xquad)
Languages covered:
- Arabic: `ar`
- German: `de`
- Greek: `el`
- English: `en`
- Spanish: `es`
- Hindi: `hi`
- Russian: `ru`
- Thai: `th`
- Turkish: `tr`
- Vietnamese: `vi`
- Chinese: `zh`
As the dataset is based on SQuAD v1.1, there are no unanswerable questions in the data. We chose this
setting so that models can focus on cross-lingual transfer.
We show the average number of tokens per paragraph, question, and answer for each language in the
table below. The statistics were obtained using [Jieba](https://github.com/fxsjy/jieba) for Chinese
and the [Moses tokenizer](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl)
for the other languages.
| | en | es | de | el | ru | tr | ar | vi | th | zh | hi |
| --------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Paragraph | 142.4 | 160.7 | 139.5 | 149.6 | 133.9 | 126.5 | 128.2 | 191.2 | 158.7 | 147.6 | 232.4 |
| Question | 11.5 | 13.4 | 11.0 | 11.7 | 10.0 | 9.8 | 10.7 | 14.8 | 11.5 | 10.5 | 18.7 |
| Answer | 3.1 | 3.6 | 3.0 | 3.3 | 3.1 | 3.1 | 3.1 | 4.5 | 4.1 | 3.5 | 5.6 |
Citation:
<details>
```bibtex
@article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
}
```
</details>
As XQuAD is just an evaluation dataset, I used data augmentation techniques (scraping, neural machine translation, etc.) to obtain more samples and split the dataset into a train and a test set. The test set was created so that it contains the same number of samples for each language. Finally, I got:
| Dataset | # samples |
| ----------- | --------- |
| XQUAD train | 50 K |
| XQUAD test | 8 K |
## Model training
The model was trained on a Tesla P100 GPU with 25GB of RAM.
The script used for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/distillation/run_squad_w_distillation.py).
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/xlm-multi-finetuned-xquadv1",
tokenizer="mrm8488/xlm-multi-finetuned-xquadv1"
)
# English
qa_pipeline({
'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
'question': "Who has been working hard for hugginface/transformers lately?"
})
#Output: {'answer': 'Manuel', 'end': 6, 'score': 8.531880747878265e-05, 'start': 0}
# Russian
qa_pipeline({
'context': "Мануэль Ромеро в последнее время почти не работал в репозитории hugginface / transformers",
'question': "Кто в последнее время усердно работал над обнимашками / трансформерами?"
})
#Output: {'answer': 'работал в репозитории hugginface /','end': 76, 'score': 0.00012340750456964894, 'start': 42}
```
Try it on a Colab (*Do not forget to change the model and tokenizer path in the Colab if necessary*):
<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Try_mrm8488_xquad_finetuned_uncased_model.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mtr0930/i-manual_tokenizer_updated | 2e8eb39d31ad4f0860dcd1477ec39df6fa625c0a | 2021-09-14T04:30:22.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mtr0930 | null | mtr0930/i-manual_tokenizer_updated | 3 | null | transformers | 21,601 | Entry not found |
mudes/multilingual-large | 1ba1e0c9c6f4dfd605e098adae798f3de283a544 | 2021-04-15T22:36:53.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mudes | null | mudes/multilingual-large | 3 | null | transformers | 21,602 | # MUDES - **Mu**ltilingual **De**tection of Offensive **S**pans
We provide state-of-the-art models to detect toxic spans in text. We have evaluated our models on the Toxic Spans Detection task at SemEval-2021 (Task 5).
## Usage
You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed:
```bash
pip install mudes
```
Then you can use the model like this:
```python
from mudes.app.mudes_app import MUDESApp
app = MUDESApp("multilingual-large", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```
## System Demonstration
An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out [here](http://rgcl.wlv.ac.uk/mudes/).
## Citing & Authors
If you find this model helpful, feel free to cite our publications:
```bibtex
@inproceedings{ranasinghemudes,
title={{MUDES: Multilingual Detection of Offensive Spans}},
author={Tharindu Ranasinghe and Marcos Zampieri},
booktitle={Proceedings of NAACL},
year={2021}
}
```
```bibtex
@inproceedings{ranasinghe2021semeval,
title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
booktitle={Proceedings of SemEval},
year={2021}
}
``` |
muhtasham/autonlp-Doctor_DE-24595546 | 265fa87972bc182858a0cd41aa7e0fc90ae3c527 | 2021-10-22T12:23:10.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"dataset:muhtasham/autonlp-data-Doctor_DE",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | muhtasham | null | muhtasham/autonlp-Doctor_DE-24595546 | 3 | null | transformers | 21,603 | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 210.5957437893554
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595546
- CO2 Emissions (in grams): 210.5957437893554
## Validation Metrics
- Loss: 0.3092539310455322
- MSE: 0.30925390124320984
- MAE: 0.25015318393707275
- R2: 0.841926941198094
- RMSE: 0.5561060309410095
- Explained Variance: 0.8427215218544006
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595546
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595546", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
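# Added sketch (not part of the original card): the problem type above is
# single-column regression, so the model emits one logit, which is the
# predicted score.
predicted_score = outputs.logits.squeeze(-1).item()
print(predicted_score)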
``` |
muhtasham/autonlp-Doctor_DE-24595548 | 7fc1502296e62552e4e56b6afdeb349f1eacda19 | 2021-10-22T11:58:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"de",
"dataset:muhtasham/autonlp-data-Doctor_DE",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | muhtasham | null | muhtasham/autonlp-Doctor_DE-24595548 | 3 | null | transformers | 21,604 | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 183.88911013564527
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595548
- CO2 Emissions (in grams): 183.88911013564527
## Validation Metrics
- Loss: 0.3050823509693146
- MSE: 0.3050823509693146
- MAE: 0.2664000689983368
- R2: 0.844059188176304
- RMSE: 0.5523425936698914
- Explained Variance: 0.8472161293029785
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595548
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595548", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595548", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
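# Added sketch (not part of the original card): as this is single-column
# regression, the lone logit is the predicted score.
predicted_score = outputs.logits.item()
print(predicted_score)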
``` |
mujeensung/bert-base-cased_mnli_bc | f1e00c39c417cf04fd28e1609f2f249ad34d2403 | 2022-02-13T05:08:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mujeensung | null | mujeensung/bert-base-cased_mnli_bc | 3 | null | transformers | 21,605 | Entry not found |
mwesner/bart-mlm | 1c2c68aebcef8ec85c18a909355a04153a4f5830 | 2021-09-05T13:13:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | false | mwesner | null | mwesner/bart-mlm | 3 | null | transformers | 21,606 | ---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bart-mlm
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm
This model is a fine-tuned version of [mwesner/bart-mlm](https://huggingface.co/mwesner/bart-mlm) on the CNN/Dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.5202 | 1.0 | 15237 | 7.5964 |
| 7.5151 | 2.0 | 30474 | 7.5400 |
| 7.5157 | 3.0 | 45711 | 7.5351 |
| 7.5172 | 4.0 | 60948 | 7.5317 |
| 7.5108 | 5.0 | 76185 | 7.5338 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.3
|
narabzad/saved | 857ec00a183a039578682ef8bbd589c1af66ac88 | 2021-05-20T01:15:05.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | narabzad | null | narabzad/saved | 3 | null | transformers | 21,607 | Entry not found |
nateraw/timm-resnet18-beans-test-2 | e207e8e77091146b481b7e6eeb151576391a76c4 | 2021-09-04T01:13:21.000Z | [
"pytorch",
"tensorboard",
"dataset:beans",
"timm",
"image-classification",
"generated_from_trainer"
] | image-classification | false | nateraw | null | nateraw/timm-resnet18-beans-test-2 | 3 | null | timm | 21,608 | ---
tags:
- image-classification
- timm
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model_index:
- name: timm-resnet18-beans-test-2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metric:
name: Accuracy
type: accuracy
value: 0.5789473684210527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# timm-resnet18-beans-test-2
This model is a fine-tuned version of [resnet18](https://huggingface.co/resnet18) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3225
- Accuracy: 0.5789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2601 | 0.02 | 5 | 2.8349 | 0.5113 |
| 1.8184 | 0.04 | 10 | 1.3225 | 0.5789 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
nateraw/timm-resnet18-imagenette-160px-5-epochs | 0ac3af8c59a37990192449b8b683eea167dc0276 | 2021-09-27T01:16:41.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nateraw | null | nateraw/timm-resnet18-imagenette-160px-5-epochs | 3 | null | timm | 21,609 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for timm-resnet18-imagenette-160px-5-epochs |
nateraw/timm-resnet50-beans-copy | 54304a3286aed5cd892ea2b482f702ca74e6aedf | 2021-10-08T03:16:00.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nateraw | null | nateraw/timm-resnet50-beans-copy | 3 | null | timm | 21,610 | ---
tags:
- timm
- image-classification
library_name: timm
---
|
nates-test-org/cait_xxs24_384 | 794b8fb12c1088ad5406505dc5c336dfad0ae2fa | 2021-10-29T04:33:48.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_xxs24_384 | 3 | null | timm | 21,611 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_xxs24_384 |
nates-test-org/coat_lite_small | 866d04ad7b799ecaf5a98852f905d9c02d201e8e | 2021-10-29T04:37:38.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/coat_lite_small | 3 | null | timm | 21,612 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for coat_lite_small |
nates-test-org/coat_mini | 338e8cff574b10d49bbc1d66222689fd3b7fef88 | 2021-10-29T04:39:18.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/coat_mini | 3 | null | timm | 21,613 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for coat_mini |
navteca/ms-marco-electra-base | c3ca93e7e3e4634e0931cc1a64e5477f2e70e3a7 | 2021-03-10T14:28:14.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | navteca | null | navteca/ms-marco-electra-base | 3 | null | transformers | 21,614 | # Cross-Encoder for MS Marco
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator).
## Training Data
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) dataset. The model predicts a score between 0 and 1: given a question and a paragraph, can the question be answered by the paragraph?
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('navteca/ms-marco-electra-base')
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
print(scores)
```
|
navteca/multi-qa-mpnet-base-cos-v1 | abae445a601aa8fb2a3f8dc0a7c7fcddc70d9617 | 2022-02-09T14:55:14.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"en",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:mit"
] | sentence-similarity | false | navteca | null | navteca/multi-qa-mpnet-base-cos-v1 | 3 | null | sentence-transformers | 21,615 | ---
language: en
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- sentence-transformers
---
# Multi QA MPNet base model for Semantic Search
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources.
This model uses [`mpnet-base`](https://huggingface.co/microsoft/mpnet-base).
## Training Data
We use the concatenation from multiple datasets to fine-tune this model. In total we have about 215M (question, answer) pairs. The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using Mean-pooling, cosine-similarity as similarity function, and a scale of 20.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
## Technical Details
The following are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product, cosine-similarity, or euclidean distance |
Note: This model produces normalized embeddings of length 1, so dot-product and cosine-similarity are equivalent; dot-product is preferred as it is faster. Euclidean distance gives the same ranking as dot-product and can also be used.
## Usage and Performance
The trained model can be used like this:
```python
from sentence_transformers import SentenceTransformer, util
question = "That is a happy person"
contexts = [
"That is a happy dog",
"That is a very happy person",
"Today is a sunny day"
]
# Load the model
model = SentenceTransformer('navteca/multi-qa-mpnet-base-cos-v1')
# Encode question and contexts
question_emb = model.encode(question)
contexts_emb = model.encode(contexts)
# Compute dot score between question and all contexts embeddings
result = util.dot_score(question_emb, contexts_emb)[0].cpu().tolist()
print(result)
#[
# 0.60806852579116820,
# 0.94949364662170410,
# 0.29836517572402954
#]
```
|
ncduy/MiniLM-L12-H384-uncased-finetuned-squad | 49b621f5b5f6606b8e48ce5b98d2d3b343574b82 | 2021-12-09T14:45:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ncduy | null | ncduy/MiniLM-L12-H384-uncased-finetuned-squad | 3 | null | transformers | 21,616 | Entry not found |
ncduy/xlm-roberta-base-squad2-distilled-finetuned-chaii-small | 302a6781857866fdca6edac907d21e5fc076b2c0 | 2021-12-09T14:02:23.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | ncduy | null | ncduy/xlm-roberta-base-squad2-distilled-finetuned-chaii-small | 3 | null | transformers | 21,617 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-squad2-distilled-finetuned-chaii-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-squad2-distilled-finetuned-chaii-small
This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2-distilled](https://huggingface.co/deepset/xlm-roberta-base-squad2-distilled) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
neibyr/MiniRoberta_oscar_hindi_tamil | 3da77384379804bed886baa6ef02eccb3c6d7992 | 2021-10-26T17:53:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | neibyr | null | neibyr/MiniRoberta_oscar_hindi_tamil | 3 | null | transformers | 21,618 | Entry not found |
neuralspace-reverie/indic-transformers-hi-xlmroberta | 75fcf55840bd7aa35c0e0d8ee9a57fa8641971af | 2020-12-11T21:57:29.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"XLMRoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-hi-xlmroberta | 3 | 1 | transformers | 21,619 | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- XLMRoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi XLMRoBERTa
## Model description
This is an XLM-RoBERTa language model pre-trained on ~3 GB of monolingual training corpus. The pre-training data was mostly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-xlmroberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-xlmroberta')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, hence the use of the `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
nguyenthanhasia/BERTLaw | 6b4e0cf9eba5c6298fa63e89f345b35a420c3013 | 2021-05-20T01:48:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | nguyenthanhasia | null | nguyenthanhasia/BERTLaw | 3 | null | transformers | 21,620 | Entry not found |
nielsr/tapex-large-finetuned-wikisql | f0b3100d1a9260b13658af1c441d2d363bece913 | 2022-01-13T14:40:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wikisql",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | table-question-answering | false | nielsr | null | nielsr/tapex-large-finetuned-wikisql | 3 | null | transformers | 21,621 | ---
language: en
tags:
- tapex
- table-question-answering
license: apache-2.0
datasets:
- wikisql
inference: false
---
TAPEX-large model fine-tuned on WikiSQL. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
To load it and run inference, you can do the following:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-wikisql")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-wikisql")
# create table
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
# turn into dict
table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]}
# turn into format TAPEX expects
# define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
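# NOTE: IndexedRowTableLinearize lives in the linked repo, not in transformers.
# Minimal stand-in sketch (assumption: TAPEX flattens tables as
# "col : h1 | h2 row 1 : v1 | v2 ..."); see the linked file for the exact logic.
class IndexedRowTableLinearize:
    def process_table(self, table_dict):
        parts = ["col : " + " | ".join(table_dict["header"])]
        for idx, row in enumerate(table_dict["rows"], start=1):
            parts.append(f"row {idx} : " + " | ".join(str(c) for c in row))
        return " ".join(parts)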
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
# add question
question = "how many movies does George Clooney have?"
joint_input = question + " " + linear_table
# encode
encoding = tokenizer(joint_input, return_tensors="pt")
# forward pass
outputs = model.generate(**encoding)
# decode
tokenizer.batch_decode(outputs, skip_special_tokens=True)
``` |
niepan/bert_funting_test_ai10 | 18f75c7fcb9aab517a1d374d98c28db784bc14ea | 2021-09-03T13:49:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | niepan | null | niepan/bert_funting_test_ai10 | 3 | null | transformers | 21,622 | Entry not found |
nikhil6041/wav2vec2-large-xlsr-hindi-commonvoice | 7582a4d0858c18dcf2fb5e1ba46e7ae8ac4b40a7 | 2021-11-07T09:54:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | nikhil6041 | null | nikhil6041/wav2vec2-large-xlsr-hindi-commonvoice | 3 | null | transformers | 21,623 | Entry not found |
nikhil6041/wav2vec2-large-xlsr-hindi_commonvoice | 8e680bb31fc9b69cc3e8c8ea628efe5f95921f96 | 2021-11-07T06:23:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nikhil6041 | null | nikhil6041/wav2vec2-large-xlsr-hindi_commonvoice | 3 | null | transformers | 21,624 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi_commonvoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi_commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5947
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 24.0069 | 4.0 | 20 | 40.3956 | 1.0 |
| 18.1097 | 8.0 | 40 | 15.3603 | 1.0 |
| 7.1344 | 12.0 | 60 | 5.2695 | 1.0 |
| 4.0032 | 16.0 | 80 | 3.7403 | 1.0 |
| 3.4894 | 20.0 | 100 | 3.5724 | 1.0 |
| 3.458 | 24.0 | 120 | 3.6164 | 1.0 |
| 3.4412 | 28.0 | 140 | 3.5947 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nlpunibo/albert | c447d6b562487449cf9f3436255c8ed028347702 | 2021-02-19T14:13:04.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/albert | 3 | null | transformers | 21,625 | Entry not found |
noah-ai/mt5-base-question-generation-vi | c71d95f87f123f99867b59778f55594be1f80c10 | 2021-07-31T01:23:40.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | noah-ai | null | noah-ai/mt5-base-question-generation-vi | 3 | 1 | transformers | 21,626 | ## Model description
This model is a sequence-to-sequence question generator that takes an answer and a context as input and generates a question as output. It is based on Google's pre-trained [mT5-base](https://github.com/google-research/multilingual-t5) model.
## Training data
The model was fine-tuned on [XQuAD](https://github.com/deepmind/xquad)
## Example usage
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer
import torch
model = MT5ForConditionalGeneration.from_pretrained("noah-ai/mt5-base-question-generation-vi")
tokenizer = AutoTokenizer.from_pretrained("noah-ai/mt5-base-question-generation-vi")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# Content used to create a set of questions
context = '''Thành phố Hồ Chí Minh (còn gọi là Sài Gòn) tên gọi cũ trước 1975 là Sài Gòn hay Sài Gòn-Gia Định là thành phố lớn nhất ở Việt Nam về dân số và quy mô đô thị hóa. Đây còn là trung tâm kinh tế, chính trị, văn hóa và giáo dục tại Việt Nam. Thành phố Hồ Chí Minh là thành phố trực thuộc trung ương thuộc loại đô thị đặc biệt của Việt Nam cùng với thủ đô Hà Nội.Nằm trong vùng chuyển tiếp giữa Đông Nam Bộ và Tây Nam Bộ, thành phố này hiện có 16 quận, 1 thành phố và 5 huyện, tổng diện tích 2.061 km². Theo kết quả điều tra dân số chính thức vào thời điểm ngày một tháng 4 năm 2009 thì dân số thành phố là 7.162.864 người (chiếm 8,34% dân số Việt Nam), mật độ dân số trung bình 3.419 người/km². Đến năm 2019, dân số thành phố tăng lên 8.993.082 người và cũng là nơi có mật độ dân số cao nhất Việt Nam. Tuy nhiên, nếu tính những người cư trú không đăng ký hộ khẩu thì dân số thực tế của thành phố này năm 2018 là gần 14 triệu người.'''
encoding = tokenizer.encode_plus(context, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
output = model.generate(input_ids=input_ids, attention_mask=attention_masks, max_length=256)
question = tokenizer.decode(output[0], skip_special_tokens=True,clean_up_tokenization_spaces=True)
question
#question: Thành phố hồ chí minh có bao nhiêu quận?
```
> Created by [Duong Thanh Nguyen](https://www.facebook.com/thanhnguyen.dev) |
nonamenlp/thai_new_gen_from_kw | 9c50174da3114b9ce92583463edbf6d4ec99c0c3 | 2021-06-26T16:46:09.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | nonamenlp | null | nonamenlp/thai_new_gen_from_kw | 3 | null | transformers | 21,627 | # Generate News in Thai language by keywords.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "nonamenlp/news_gen"
TOKENIZER_NAME = "nonamenlp/news_gen"
trained_model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)
tokenizer = T5Tokenizer.from_pretrained(TOKENIZER_NAME)
```
|
notentered/roberta-large-finetuned-cola | 96293078a8ef2fd8fe94ef6e67e06a0305de2f7a | 2022-02-18T10:23:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | notentered | null | notentered/roberta-large-finetuned-cola | 3 | null | transformers | 21,628 | Entry not found |
nurkayevaa/autonlp-bert-covid-407910458 | 9fa87133a5a4130a71f57198d614d089bba58efd | 2021-12-11T05:29:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:nurkayevaa/autonlp-data-bert-covid",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | nurkayevaa | null | nurkayevaa/autonlp-bert-covid-407910458 | 3 | null | transformers | 21,629 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- nurkayevaa/autonlp-data-bert-covid
co2_eq_emissions: 9.72797586719897
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 407910458
- CO2 Emissions (in grams): 9.72797586719897
## Validation Metrics
- Loss: 0.20907048881053925
- Accuracy: 0.9119825708061002
- Precision: 0.8912721893491125
- Recall: 0.9563492063492064
- AUC: 0.9698454873092555
- F1: 0.9226646248085759
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/nurkayevaa/autonlp-bert-covid-407910458
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nurkayevaa/autonlp-bert-covid-407910458", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nurkayevaa/autonlp-bert-covid-407910458", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
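# Added sketch (not part of the original card): convert the binary-classification
# logits to class probabilities.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
print(probs)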
``` |
nyu-mll/roberta-base-100M-3 | e847da26de4c9e0db63247d7e50c0aac6bd910ed | 2021-05-20T18:56:02.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-100M-3 | 3 | null | transformers | 21,630 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). For each pretraining data size, we release the 3 models with the lowest perplexities out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
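These checkpoints load like any standard RoBERTa model. A minimal masked-LM sketch for this checkpoint (the repo id is this card's; any sibling checkpoint from the table above can be substituted):
```python
from transformers import pipeline

# Load this card's checkpoint and fill a masked token
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-100M-3")
print(fill_mask("The capital of France is <mask>."))
```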
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
obss/mt5-small-3task-both-tquad2 | d3e3c8704c995a58a0a6de1cb5f536e16c9c8aa4 | 2021-12-04T00:10:42.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"transformers",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | obss | null | obss/mt5-small-3task-both-tquad2 | 3 | null | transformers | 21,631 | ---
language: tr
datasets:
- tquad1
- tquad2
- xquad
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
pipeline_tag: text2text-generation
widget:
- text: "answer: film ve TV haklarını context: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi."
example_title: "Question Generation (Movie)"
- text: "answer: bir antlaşma yaparak context: Fatih Sultan Mehmet, Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da <hl> bir antlaşma yaparak <hl> Venedik'le 16 yıllık savaşa son verdi."
example_title: "Question Generation (History)"
- text: "answer: Venedik'le context: Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da bir antlaşma yaparak <hl> Venedik'le <hl> 16 yıllık savaşa sona verdi."
example_title: "Question Generation (History 2)"
- text: "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>"
example_title: "Answer Extraction (History)"
- text: "question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
example_title: "Answer Extraction (Open Domain)"
license: cc-by-4.0
---
# mt5-small for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-both-tquad2', qg_format='both')
```
## Citation 📜
```bibtex
@article{akyon2021automated,
title={Automated question generation and question answering from Turkish texts using text-to-text transformers},
author={Akyon, Fatih Cagatay and Cavusoglu, Devrim and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
journal={arXiv preprint arXiv:2111.06476},
year={2021}
}
```
## Overview ✔️
**Language model:** mt5-small
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://arxiv.org/abs/2111.06476
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-small"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "both"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-both-tquad2', qg_format='both')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# c) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
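For reference, here is a minimal sketch of querying the checkpoint directly with `transformers` instead of the project's `GenerationAPI`, using the highlighted-answer input format shown in the widget examples above (the exact prompts built by `GenerationAPI` may differ):
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model = MT5ForConditionalGeneration.from_pretrained("obss/mt5-small-3task-both-tquad2")
tokenizer = AutoTokenizer.from_pretrained("obss/mt5-small-3task-both-tquad2")

# Question generation: the answer span is wrapped in <hl> tokens inside the context
text = ("answer: bir antlaşma yaparak context: Fatih Sultan Mehmet, Cenevizlilerin önemli "
        "üslerinden Amasra'yı aldı. 1479'da <hl> bir antlaşma yaparak <hl> Venedik'le 16 "
        "yıllık savaşa son verdi.")
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```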
|
ontocord/wav2vec2-large-xlsr-vietnamese | e32a063462d12803e438ce9961707ade0dc9d368 | 2021-03-28T23:57:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ontocord | null | ontocord/wav2vec2-large-xlsr-vietnamese | 3 | null | transformers | 21,632 | ---
language: vi
datasets:
- common_voice
- FOSD: https://data.mendeley.com/datasets/k9sxg2twv4/4
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Vietnamese by Ontocord
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 42.403315
---
# Ontocord/Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice) and [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ontocord/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("ontocord/wav2vec2-large-xlsr-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ontocord/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("ontocord/wav2vec2-large-xlsr-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\\\+\@\ǀ\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# you may also want to use the decode_string from https://huggingface.co/Nhut/wav2vec2-large-xlsr-vietnamese
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.403315
## Training
The Common Voice train, validation, and FPT datasets were used for training.
The script used for training can be found here # TODO
|
orendar/longformer-prize | c4a52bbe0729b137c51c870d882675b56dde6bad | 2021-12-19T20:11:10.000Z | [
"pytorch",
"longformer",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | orendar | null | orendar/longformer-prize | 3 | null | transformers | 21,633 | Entry not found |
osanseviero/test_adapters | d6b2f9b4eada7d07955ac1f00044dac4e84dfc21 | 2021-06-16T20:03:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | osanseviero | null | osanseviero/test_adapters | 3 | null | transformers | 21,634 | Entry not found |
oseibrefo/distilbert-base-uncased-finetuned-cola | 24e6cae9f034fc220c94d809480eb9b7ab6abf41 | 2021-12-19T19:40:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | oseibrefo | null | oseibrefo/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 21,635 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5497693861041112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7595
- Matthews Correlation: 0.5498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5275 | 1.0 | 535 | 0.5411 | 0.4254 |
| 0.3498 | 2.0 | 1070 | 0.4973 | 0.5183 |
| 0.2377 | 3.0 | 1605 | 0.6180 | 0.5079 |
| 0.175 | 4.0 | 2140 | 0.7595 | 0.5498 |
| 0.1322 | 5.0 | 2675 | 0.8412 | 0.5370 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
pablouribe/bertstem-copus | 6bb804171e01b415f8b14ed0fad0d50cd61b0390 | 2022-01-20T21:01:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/bertstem-copus | 3 | null | transformers | 21,636 | Entry not found |
pablouribe/beto-copus | 6da16e23e51063e5f7824aca06e30ca2205dbf25 | 2022-01-21T17:16:20.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/beto-copus | 3 | null | transformers | 21,637 | Entry not found |
pakupoko/bizlin-distil-model | 4c2b022349eb3ecc4f65303113172f41f6d8efb4 | 2020-11-29T07:58:15.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | pakupoko | null | pakupoko/bizlin-distil-model | 3 | null | transformers | 21,638 | Entry not found |
panashe/autonlp-eo-590516680 | 44e48ed99f782e91db482e2f0955573361cc9527 | 2022-02-23T11:29:10.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:panashe/autonlp-data-eo",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | panashe | null | panashe/autonlp-eo-590516680 | 3 | null | transformers | 21,639 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- panashe/autonlp-data-eo
co2_eq_emissions: 2.3709499644854883
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 590516680
- CO2 Emissions (in grams): 2.3709499644854883
## Validation Metrics
- Loss: 0.6466107964515686
- Accuracy: 0.6608695652173913
- Precision: 0.6515151515151515
- Recall: 0.7288135593220338
- AUC: 0.6334745762711864
- F1: 0.688
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/panashe/autonlp-eo-590516680
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("panashe/autonlp-eo-590516680", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("panashe/autonlp-eo-590516680", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
patrickvonplaten/bigbird-roberta-large | a6fe5266ed1cbb52fb55703bcf647b63eb717352 | 2021-03-22T12:56:14.000Z | [
"pytorch",
"big_bird",
"pretraining",
"transformers"
] | null | false | patrickvonplaten | null | patrickvonplaten/bigbird-roberta-large | 3 | null | transformers | 21,640 | Entry not found |
patrickvonplaten/hubert-librispeech-clean-100h-demo-dist | 004baa9ccdbe25720f47ce52f7a0fc52219300a6 | 2021-12-20T12:53:35.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"transformers",
"speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/hubert-librispeech-clean-100h-demo-dist | 3 | 1 | transformers | 21,641 | ---
license: apache-2.0
tags:
- speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: hubert-librispeech-clean-100h-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-librispeech-clean-100h-demo-dist
This model is a fine-tuned version of [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0984
- Wer: 0.0883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9031 | 0.11 | 100 | 2.9220 | 1.0 |
| 2.6437 | 0.22 | 200 | 2.6268 | 1.0 |
| 0.3934 | 0.34 | 300 | 0.4860 | 0.4182 |
| 0.3531 | 0.45 | 400 | 0.3088 | 0.2894 |
| 0.2255 | 0.56 | 500 | 0.2568 | 0.2426 |
| 0.3379 | 0.67 | 600 | 0.2073 | 0.2011 |
| 0.2419 | 0.78 | 700 | 0.1849 | 0.1838 |
| 0.2128 | 0.9 | 800 | 0.1662 | 0.1690 |
| 0.1341 | 1.01 | 900 | 0.1600 | 0.1541 |
| 0.0946 | 1.12 | 1000 | 0.1431 | 0.1404 |
| 0.1643 | 1.23 | 1100 | 0.1373 | 0.1304 |
| 0.0663 | 1.35 | 1200 | 0.1293 | 0.1307 |
| 0.162 | 1.46 | 1300 | 0.1247 | 0.1266 |
| 0.1433 | 1.57 | 1400 | 0.1246 | 0.1262 |
| 0.1581 | 1.68 | 1500 | 0.1219 | 0.1154 |
| 0.1036 | 1.79 | 1600 | 0.1127 | 0.1081 |
| 0.1352 | 1.91 | 1700 | 0.1087 | 0.1040 |
| 0.0471 | 2.02 | 1800 | 0.1085 | 0.1005 |
| 0.0945 | 2.13 | 1900 | 0.1066 | 0.0973 |
| 0.0843 | 2.24 | 2000 | 0.1102 | 0.0964 |
| 0.0774 | 2.35 | 2100 | 0.1079 | 0.0940 |
| 0.0952 | 2.47 | 2200 | 0.1056 | 0.0927 |
| 0.0635 | 2.58 | 2300 | 0.1026 | 0.0920 |
| 0.0665 | 2.69 | 2400 | 0.1012 | 0.0905 |
| 0.034 | 2.8 | 2500 | 0.1009 | 0.0900 |
| 0.0251 | 2.91 | 2600 | 0.0993 | 0.0883 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/prophetnet-large-uncased_old | 48043c600ad68bb47add183d448537a2dd174556 | 2020-10-16T12:37:59.000Z | [
"pytorch",
"prophetnet",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | patrickvonplaten | null | patrickvonplaten/prophetnet-large-uncased_old | 3 | null | transformers | 21,642 | Entry not found |
patrickvonplaten/rag-tiny-random | c7879b142047d145218337f323d826c53c11ef9a | 2020-09-18T08:34:42.000Z | [
"pytorch",
"rag",
"transformers"
] | null | false | patrickvonplaten | null | patrickvonplaten/rag-tiny-random | 3 | null | transformers | 21,643 | Entry not found |
patrickvonplaten/realm-open-qa | b2b7d8d2e1c0ab86a1db6cc1cc2544a55d2bf9e2 | 2022-01-03T12:12:47.000Z | [
"pytorch",
"realm",
"transformers"
] | null | false | patrickvonplaten | null | patrickvonplaten/realm-open-qa | 3 | null | transformers | 21,644 | Entry not found |
patrickvonplaten/sew-d-tiny-100k-demo-colab | 166296d6da76c357a0eed8822944a2530e63d53e | 2021-10-20T12:15:23.000Z | [
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sew-d-tiny-100k-demo-colab | 3 | null | transformers | 21,645 | Entry not found |
patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist | 14589df69be2e6daa385dc12e46513b23d269e9b | 2021-12-20T12:53:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist | 3 | null | transformers | 21,646 | ---
license: apache-2.0
tags:
- speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: wav2vec2-librispeech-clean-100h-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-librispeech-clean-100h-demo-dist
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0572
- Wer: 0.0417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.399 | 0.11 | 100 | 3.6153 | 1.0 |
| 2.8892 | 0.22 | 200 | 2.8963 | 1.0 |
| 2.8284 | 0.34 | 300 | 2.8574 | 1.0 |
| 0.7347 | 0.45 | 400 | 0.6158 | 0.4850 |
| 0.1138 | 0.56 | 500 | 0.2038 | 0.1560 |
| 0.248 | 0.67 | 600 | 0.1274 | 0.1024 |
| 0.2586 | 0.78 | 700 | 0.1108 | 0.0876 |
| 0.0733 | 0.9 | 800 | 0.0936 | 0.0762 |
| 0.044 | 1.01 | 900 | 0.0834 | 0.0662 |
| 0.0393 | 1.12 | 1000 | 0.0792 | 0.0622 |
| 0.0941 | 1.23 | 1100 | 0.0769 | 0.0627 |
| 0.036 | 1.35 | 1200 | 0.0731 | 0.0603 |
| 0.0768 | 1.46 | 1300 | 0.0713 | 0.0559 |
| 0.0518 | 1.57 | 1400 | 0.0686 | 0.0537 |
| 0.0815 | 1.68 | 1500 | 0.0639 | 0.0515 |
| 0.0603 | 1.79 | 1600 | 0.0636 | 0.0500 |
| 0.056 | 1.91 | 1700 | 0.0609 | 0.0480 |
| 0.0265 | 2.02 | 1800 | 0.0621 | 0.0465 |
| 0.0496 | 2.13 | 1900 | 0.0607 | 0.0449 |
| 0.0436 | 2.24 | 2000 | 0.0591 | 0.0446 |
| 0.0421 | 2.35 | 2100 | 0.0590 | 0.0428 |
| 0.0641 | 2.47 | 2200 | 0.0603 | 0.0443 |
| 0.0466 | 2.58 | 2300 | 0.0580 | 0.0429 |
| 0.0132 | 2.69 | 2400 | 0.0574 | 0.0423 |
| 0.0073 | 2.8 | 2500 | 0.0586 | 0.0417 |
| 0.0021 | 2.91 | 2600 | 0.0574 | 0.0412 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/xls-r-300m-it-phoneme | 4033f9914ee133308ed931069d2daea013507ef7 | 2021-12-21T11:15:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"mozilla-foundation/common_voice_3_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/xls-r-300m-it-phoneme | 3 | null | transformers | 21,647 | ---
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_3_0
- generated_from_trainer
model-index:
- name: xls-r-300m-it-phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-it-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the mozilla-foundation/common_voice_3_0 - IT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3899
- Wer: 0.0770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000075
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/xprophetnet-large-wiki100-cased-xglue-qg_old | 489a1dc41197a7386d89990dc47e749ad5113684 | 2020-10-16T13:18:43.000Z | [
"pytorch",
"xlm-prophetnet",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | patrickvonplaten | null | patrickvonplaten/xprophetnet-large-wiki100-cased-xglue-qg_old | 3 | null | transformers | 21,648 | Entry not found |
pcuenq/wav2vec2-large-xlsr-53-es | 9d57df54d6ec7d0cdde69eb9abc089cb7433438b | 2021-03-28T19:06:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pcuenq | null | pcuenq/wav2vec2-large-xlsr-53-es | 3 | null | transformers | 21,649 | ---
language: es
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Spanish by pcuenq
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: 10.50
---
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "es", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Spanish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model.to("cuda")
## Text pre-processing
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)
def remove_special_characters(batch):
batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
return batch
def replace_diacritics(batch):
sentence = batch["sentence"]
sentence = re.sub('ì', 'í', sentence)
sentence = re.sub('ù', 'ú', sentence)
sentence = re.sub('ò', 'ó', sentence)
sentence = re.sub('à', 'á', sentence)
batch["sentence"] = sentence
return batch
def replace_additional(batch):
sentence = batch["sentence"]
sentence = re.sub('ã', 'a', sentence) # Portuguese, as in São Paulo
sentence = re.sub('ō', 'o', sentence) # Japanese
sentence = re.sub('ê', 'e', sentence) # Português
batch["sentence"] = sentence
return batch
## Audio pre-processing
# I tried to perform the resampling using a `torchaudio` `Resampler` transform,
# but found that the process deadlocked when using multiple processes.
# Perhaps my torchaudio is using the wrong sox library under the hood, I'm not sure.
# Fortunately, `librosa` seems to work fine, so that's what I'll use for now.
import librosa
def speech_file_to_array_fn(batch):
speech_array, sample_rate = torchaudio.load(batch["path"])
batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
return batch
# One-pass mapping function
# Text transformation and audio resampling
def cv_prepare(batch):
batch = remove_special_characters(batch)
batch = replace_diacritics(batch)
batch = replace_additional(batch)
batch = speech_file_to_array_fn(batch)
return batch
# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
# WER Metric computation
# `wer.compute` crashes in my computer with more than ~10000 samples.
# Until I confirm in a different one, I created a "chunked" version of the computation.
# It gives the same results as `wer.compute` for smaller datasets.
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
#print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 10.50 %
## Text processing
The Common Voice `es` dataset contains many characters that don't belong to the Spanish language, even after discarding separators and punctuation. I replaced some of them and discarded most of the extraneous characters.
I decided to keep all the Spanish diacritics. This was a difficult decision. Sometimes diacritics are added purely for orthographic reasons and don't alter the meaning of a word. In other cases, however, they carry meaning, as they disambiguate between different senses. A better WER score would surely have been achieved using just the non-accented characters, and the resulting text would still be understood by Spanish speakers. Nevertheless, I think keeping them is "more correct".
All the rules I applied are shown in the evaluation script.
## Training
The Common Voice `train` and `validation` datasets were used for training.
For dataset handling reasons, I initially split `train`+`validation` in 10% splits so I could see progress earlier and react if needed.
* I trained for 30 epochs on the first split only, using values similar to the ones proposed by Patrick in his demo notebook. I used a batch size of 24 with 2 gradient accumulation steps. This gave a WER of about 16.3% on the full test set.
* I then trained the resulting model on the 9 remaining splits, for 3 epochs each, but with a faster warmup of 75 steps.
* Next, I trained 3 epochs on each of the 10 splits using a smaller learning rate of `1e-4`. A warmup of 75 steps was used in this case too. The final model had a WER of about 11.7%.
* By this time we had already figured out the reason for the initial delay in training time, and I decided to use the full dataset for training. However, in my tests I had seen that varying the learning rate seemed to work well, so I wanted to replicate that. I selected a cosine schedule with hard restarts, a reference learning rate of `3e-5` and 10 epochs. I configured the cosine schedule to have 10 cycles too, and used no warmup. This produced a WER of ~10.5%. A sketch of this scheduler configuration is shown below.
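The scheduler described above can be built with `transformers` roughly as follows (the dummy parameters, optimizer instance and `num_training_steps` are placeholders, not the values from the actual run):
```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup
# Placeholder parameters/optimizer, just to illustrate the scheduler wiring.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=3e-5)  # reference learning rate
num_training_steps = 10_000  # depends on dataset size, batch size and number of epochs
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,           # no warmup
    num_training_steps=num_training_steps,
    num_cycles=10,                # 10 cosine cycles with hard restarts
)
```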
## Other things I tried
* Starting from the same fine-tuned model, I compared a constant lr of 1e-4 against a linear schedule with warmup. The linear schedule worked better (11.85 vs 12.72 WER%).
* I tried to use a Spanish model to improve a Basque one. I transformed the text to make the orthography more similar to the target language, but the Basque model did not improve.
* Label smoothing did not work.
## Issues and other technical challenges
I had previously used the `transformers` library as an end user, just to try Bert on some tasks, but this is the first time I have needed to look into the code.
* The `Datasets` abstraction is great because, being based on memory-mapped files, it allows arbitrarily-sized datasets to be processed. However, it is important to understand its limitations and trade-offs. I found caching convenient, but disk usage explodes fast. I keep the datasets for my current projects in a 1 TB, fast SSD disk, and a couple of times I ran out of space. I had to understand how cache files are stored and learn when it's best to disable caching and manually save when you need to. I found that data exploration is better suited for smaller datasets or sampled ones, but actual processing is most efficient when you have identified the transformations you need and apply them in a single `map` operation.
* There was a noticeable delay before training started. Fortunately, we found the reason why, discussed it in Slack and the forums and created a workaround.
* The WER metric crashed on large datasets. I evaluated on a small sample (also, it's faster) and wrote an accumulative version of wer that runs on fixed memory. I'd like to verify whether this change makes sense to be used inside the training loop.
* `torchaudio` deadlocks when using multiple processes. `librosa` works fine. To be investigated.
* When using `num_proc` inside a notebook, I could not see progress bars. This is surely some permissions issue on my computer. I still need to track it down.
|
pediberto/autonlp-testing-504313966 | 5229617f98e2cdf2547a799c2dae2165c476ce5f | 2022-01-15T15:02:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:pediberto/autonlp-data-testing",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | pediberto | null | pediberto/autonlp-testing-504313966 | 3 | null | transformers | 21,650 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- pediberto/autonlp-data-testing
co2_eq_emissions: 12.994518654810642
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 504313966
- CO2 Emissions (in grams): 12.994518654810642
## Validation Metrics
- Loss: 0.19673296809196472
- Accuracy: 0.9398032027783138
- Precision: 0.9133115705476967
- Recall: 0.9718255499807025
- AUC: 0.985316873222122
- F1: 0.9416604338070308
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/pediberto/autonlp-testing-504313966
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pediberto/autonlp-testing-504313966", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pediberto/autonlp-testing-504313966", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
pedropei/live-demo-question-intimacy | 3626e45b657e4642c1fb5624df721f5e130510e2 | 2021-05-20T19:23:45.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | pedropei | null | pedropei/live-demo-question-intimacy | 3 | 1 | transformers | 21,651 | Entry not found |
pere/xls-test | ae9e562d93b635b54b66f981cc84679f3fdf9ea4 | 2022-01-22T18:40:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | pere | null | pere/xls-test | 3 | null | transformers | 21,652 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
persiannlp/mt5-small-parsinlu-arc-comqa-obqa-multiple-choice | 2e826cafd1b94956590889ca46708a6877c6b0f0 | 2021-09-23T16:20:31.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:commonsenseqa",
"dataset:arc",
"dataset:openbookqa",
"transformers",
"multiple-choice",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-arc-comqa-obqa-multiple-choice | 3 | null | transformers | 21,653 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- commonsenseqa
- arc
- openbookqa
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
petabyte/unang_mang_bert | 7c17f5123b05ceea24611857bd7004f0ce99353f | 2021-09-22T09:33:40.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"Tagalog",
"dataset:OSCAR tl",
"transformers",
"Mang Bert",
"license:apache-2.0"
] | feature-extraction | false | petabyte | null | petabyte/unang_mang_bert | 3 | null | transformers | 21,654 | ---
language:
- Tagalog
thumbnail:
tags:
- Tagalog
- Mang Bert
license: apache-2.0
datasets:
- OSCAR tl
---
# Mang Bert
## Model description
Fine-tuned RoBERTa model trained with `RobertaForMaskedLM` on Tagalog text from the OSCAR `tl` corpus.
## Training data
458,206 text samples from the OSCAR `tl` corpus.
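## Usage
A minimal sketch of masked-token prediction with this checkpoint (the Tagalog example sentence is illustrative only):
```python
from transformers import pipeline
# Masked-token prediction with the fine-tuned checkpoint.
fill_mask = pipeline("fill-mask", model="petabyte/unang_mang_bert")
# Build the prompt around the tokenizer's own mask token so the snippet
# does not assume a particular mask string.
prompt = f"Magandang {fill_mask.tokenizer.mask_token} sa inyong lahat."
print(fill_mask(prompt))
```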
|
phailyoor/distilbert-base-uncased-finetuned-yahd-twval-hptune | d8025c17e9f302ffab1ff02f6e7d22399cd857c6 | 2021-11-15T02:50:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | phailyoor | null | phailyoor/distilbert-base-uncased-finetuned-yahd-twval-hptune | 3 | null | transformers | 21,655 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd-twval-hptune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-twval-hptune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3727
- Accuracy: 0.2039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1638 | 1.0 | 10106 | 2.1944 | 0.3646 |
| 1.7982 | 2.0 | 20212 | 2.6390 | 0.3333 |
| 1.3279 | 3.0 | 30318 | 3.1526 | 0.3095 |
| 0.8637 | 4.0 | 40424 | 4.8368 | 0.2470 |
| 0.5727 | 5.0 | 50530 | 6.3727 | 0.2039 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
phantomcoder1996/wav2vec2-large-xls-r-300m-arabic-colab | dd730bc65aa3893965f7bb4718c8bb69348dc5b4 | 2022-03-23T18:30:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | phantomcoder1996 | null | phantomcoder1996/wav2vec2-large-xls-r-300m-arabic-colab | 3 | null | transformers | 21,656 | ---
language:
- ar
thumbnail: wav2vec2-large-xls-r fine tuned on common voice data for Modern Standard
Arabic
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- WER
model-index:
- name: wav2vec2-large-xls-r-300m-arabic-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ar
metrics:
- name: Test WER
type: wer
value: 64.38
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 96.15
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ar
metrics:
- name: Test WER
type: wer
value: 94.96
---
|
philschmid/bert-mini-sst2-distilled | d1e81c8d1d7c053acdc47e58e40a504063df4b3a | 2022-01-31T23:34:03.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | philschmid | null | philschmid/bert-mini-sst2-distilled | 3 | null | transformers | 21,657 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-mini-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.856651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mini-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1792
- Accuracy: 0.8567
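As a quick sanity check, the distilled checkpoint can be loaded with the text-classification pipeline (a minimal sketch; the example sentence is illustrative and the label names depend on the checkpoint's config):
```python
from transformers import pipeline
# SST-2-style sentiment classification with the distilled student model.
classifier = pipeline("text-classification", model="philschmid/bert-mini-sst2-distilled")
print(classifier("This movie was surprisingly good."))
```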
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00021185586235152412
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1552 | 1.0 | 66 | 1.4847 | 0.8349 |
| 0.8451 | 2.0 | 132 | 1.3495 | 0.8624 |
| 0.5864 | 3.0 | 198 | 1.2257 | 0.8532 |
| 0.4553 | 4.0 | 264 | 1.2571 | 0.8544 |
| 0.3708 | 5.0 | 330 | 1.2132 | 0.8658 |
| 0.3086 | 6.0 | 396 | 1.2370 | 0.8589 |
| 0.2701 | 7.0 | 462 | 1.1900 | 0.8635 |
| 0.246 | 8.0 | 528 | 1.1792 | 0.8567 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
phongdtd/wavLM-VLSP-vi-large | 723c8e2187ed9d780a559b729ce36d75c31d2b0c | 2022-02-22T04:34:39.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | phongdtd | null | phongdtd/wavLM-VLSP-vi-large | 3 | null | transformers | 21,658 | Entry not found |
pietrotrope/hate_trained | 8c315a247448d609504de2db079203bd46c90eec | 2021-12-11T01:00:50.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | pietrotrope | null | pietrotrope/hate_trained | 3 | null | transformers | 21,659 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: hate_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7730369969869401
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9661
- F1: 0.7730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.303025140957233e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4767 | 1.0 | 2250 | 0.5334 | 0.7717 |
| 0.4342 | 2.0 | 4500 | 0.7633 | 0.7627 |
| 0.3813 | 3.0 | 6750 | 0.9452 | 0.7614 |
| 0.3118 | 4.0 | 9000 | 0.9661 | 0.7730 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
pinecone/bert-reader-squad2 | cec3d9f6cca1a41b9c04f4789b47338e6096525a | 2022-01-17T15:59:37.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | pinecone | null | pinecone/bert-reader-squad2 | 3 | null | transformers | 21,660 | Entry not found |
piotr-rybak/poleval2021-task4-herbert-large-encoder | 0339799f1ea39db2e7249c891a32ba3c6180da97 | 2021-09-23T17:34:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | piotr-rybak | null | piotr-rybak/poleval2021-task4-herbert-large-encoder | 3 | null | sentence-transformers | 21,661 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# poleval2021-task4-herbert-large-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('piotr-rybak/poleval2021-task4-herbert-large-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=piotr-rybak/poleval2021-task4-herbert-large-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6098 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3049,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 1024, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pparasurama/racBERT-race-pretrained | cc908f1bf37079726bf3072394b37c3b824c17ff | 2021-11-09T20:32:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pparasurama | null | pparasurama/racBERT-race-pretrained | 3 | null | transformers | 21,662 | Entry not found |
prajjwal1/bert_small | 48ff64f8ec69a4ea05847ee8aa173b1546db660b | 2021-10-05T18:00:33.000Z | [
"pytorch",
"arxiv:2110.01518",
"transformers"
] | null | false | prajjwal1 | null | prajjwal1/bert_small | 3 | null | transformers | 21,663 | If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). |
prajjwal1/ctrl_discovery_11 | 9de249f827da86598664eb8677b6a4531b59eb7f | 2021-05-16T17:09:21.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_11 | 3 | null | transformers | 21,664 | Entry not found |
prajjwal1/ctrl_discovery_2 | 293ff2f17e3df9404f1a8045b1d7d276e2b7b510 | 2021-03-05T16:07:16.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_2 | 3 | null | transformers | 21,665 | Entry not found |
prajjwal1/roberta_hellaswag | fc420a61ff9972ac4e077d0bc0b3bf14ef9402a4 | 2021-05-28T22:28:13.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"dataset:hellaswag",
"transformers",
"commonsense-reasoning",
"sentence-completion"
] | multiple-choice | false | prajjwal1 | null | prajjwal1/roberta_hellaswag | 3 | null | transformers | 21,666 | ---
tags:
- pytorch
- commonsense-reasoning
- sentence-completion
datasets:
- hellaswag
---
`RoBERTa` trained on the HellaSwag dataset (`MultipleChoiceModel`). HellaSwag uses a multiple-choice question format.
It achieves around 74.99% accuracy.
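A minimal inference sketch for that format (the context and candidate endings below are made up; each ending is scored against the shared context and the highest-scoring one is selected):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/roberta_hellaswag")
model = AutoModelForMultipleChoice.from_pretrained("prajjwal1/roberta_hellaswag")
context = "She poured the batter into the pan and"
endings = [
    "placed it in the oven.",
    "threw the pan out of the window.",
    "started reading a newspaper.",
    "painted the kitchen walls.",
]
# Pair the shared context with every candidate ending.
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
# Multiple-choice models expect inputs of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print("Predicted ending:", endings[logits.argmax(dim=-1).item()])
```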
[@prajjwal_1](https://twitter.com/prajjwal_1/)
|
pranav1015/distilbert-base-uncased-finetuned-cola | 1342bc169f5b9eb82c636663c0797e14f6d57af1 | 2021-07-30T05:27:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | pranav1015 | null | pranav1015/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 21,667 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.520875943143754
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8486
- Matthews Correlation: 0.5209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5265 | 1.0 | 535 | 0.5479 | 0.4049 |
| 0.3571 | 2.0 | 1070 | 0.5002 | 0.5164 |
| 0.2432 | 3.0 | 1605 | 0.6242 | 0.5091 |
| 0.173 | 4.0 | 2140 | 0.7559 | 0.5120 |
| 0.1352 | 5.0 | 2675 | 0.8486 | 0.5209 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
prao/bert-base-cased-tweet-sentiment | b68b5b70a0bc4b232ddb24f2e11a09e18d550297 | 2021-12-23T01:31:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | prao | null | prao/bert-base-cased-tweet-sentiment | 3 | null | transformers | 21,668 | Entry not found |
princeton-nlp/datamux-mnli-2 | 1987f1c772f731a6e71f3de42682d01ac57edf48 | 2022-02-16T16:52:11.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-mnli-2 | 3 | null | transformers | 21,669 | Entry not found |
princeton-nlp/datamux-mnli-5 | 63c1abbe976c9f62e50c328e4fbac89258ad39b1 | 2022-02-16T16:53:13.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-mnli-5 | 3 | null | transformers | 21,670 | Entry not found |
princeton-nlp/datamux-retrieval-2 | e46e1f118262660d7a866b308f18a3cdf7770fc1 | 2022-02-18T03:50:01.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-retrieval-2 | 3 | null | transformers | 21,671 | Entry not found |
princeton-nlp/datamux-retrieval-20 | 994fa2b028dd509bcc5cf95cfe7b8c1c5c145777 | 2022-02-18T03:54:46.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-retrieval-20 | 3 | null | transformers | 21,672 | Entry not found |
princeton-nlp/densephrases-multi-query-tqa | 7041d91e17b3afbc1a490b06e26e65a5b9297782 | 2021-09-20T21:42:29.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-tqa | 3 | null | transformers | 21,673 | Entry not found |
pritamdeka/PubMedBert-abstract-cord19 | 6aff6f5d1f93d8b3b742cc416897fb3755841805 | 2022-02-03T23:18:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:pritamdeka/cord-19-abstract",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | pritamdeka | null | pritamdeka/PubMedBert-abstract-cord19 | 3 | null | transformers | 21,674 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pritamdeka/cord-19-abstract
model-index:
- name: PubMedBert-abstract-cord19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-abstract-cord19
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [pritamdeka/cord-19-abstract](https://huggingface.co/datasets/pritamdeka/cord-19-abstract) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3005
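A minimal fill-mask sketch with this checkpoint (the sentence is illustrative, and it assumes the standard BERT `[MASK]` token inherited from the PubMedBERT base model):
```python
from transformers import pipeline
# Masked-token prediction on biomedical text with the CORD-19-adapted checkpoint.
fill_mask = pipeline("fill-mask", model="pritamdeka/PubMedBert-abstract-cord19")
print(fill_mask("The patients were treated with [MASK] to reduce inflammation."))
```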
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.3774 | 0.15 | 5000 | 1.3212 |
| 1.3937 | 0.29 | 10000 | 1.4059 |
| 1.6812 | 0.44 | 15000 | 1.6174 |
| 1.4712 | 0.59 | 20000 | 1.4383 |
| 1.4293 | 0.73 | 25000 | 1.4356 |
| 1.4155 | 0.88 | 30000 | 1.4283 |
| 1.3963 | 1.03 | 35000 | 1.4135 |
| 1.3718 | 1.18 | 40000 | 1.3948 |
| 1.369 | 1.32 | 45000 | 1.3961 |
| 1.354 | 1.47 | 50000 | 1.3788 |
| 1.3399 | 1.62 | 55000 | 1.3866 |
| 1.3289 | 1.76 | 60000 | 1.3630 |
| 1.3155 | 1.91 | 65000 | 1.3609 |
| 1.2976 | 2.06 | 70000 | 1.3489 |
| 1.2783 | 2.2 | 75000 | 1.3333 |
| 1.2696 | 2.35 | 80000 | 1.3260 |
| 1.2607 | 2.5 | 85000 | 1.3232 |
| 1.2547 | 2.64 | 90000 | 1.3034 |
| 1.2495 | 2.79 | 95000 | 1.3035 |
| 1.2404 | 2.94 | 100000 | 1.3029 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pritoms/gpt-neo-125M-finetuned-pgt | a56ca653cb0e4562ed28aa8e65d882db02010d09 | 2021-09-07T08:20:52.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/gpt-neo-125M-finetuned-pgt | 3 | null | transformers | 21,675 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt-neo-125M-finetuned-pgt
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-finetuned-pgt
This model is a fine-tuned version of [pritoms/gpt-neo-125M-finetuned-pgt](https://huggingface.co/pritoms/gpt-neo-125M-finetuned-pgt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6026
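A minimal generation sketch with this checkpoint (the prompt and decoding settings are illustrative):
```python
from transformers import pipeline
# Causal text generation with the fine-tuned GPT-Neo 125M checkpoint.
generator = pipeline("text-generation", model="pritoms/gpt-neo-125M-finetuned-pgt")
print(generator("The experiment showed that", max_length=60, do_sample=True)[0]["generated_text"])
```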
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 1.5947 |
| No log | 2.0 | 52 | 1.5963 |
| No log | 3.0 | 78 | 1.6026 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ptro/model1_test | e97b6d53ea83d738a93c2b59d529eef417f65def | 2021-11-30T15:25:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index"
] | text-classification | false | ptro | null | ptro/model1_test | 3 | 1 | transformers | 21,676 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: model1_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model1_test
This model is a fine-tuned version of [DaNLP/da-bert-hatespeech-detection](https://huggingface.co/DaNLP/da-bert-hatespeech-detection) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1816
- Accuracy: 0.9667
- F1: 0.3548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 150 | 0.1128 | 0.9667 | 0.2 |
| No log | 2.0 | 300 | 0.1666 | 0.9684 | 0.2963 |
| No log | 3.0 | 450 | 0.1816 | 0.9667 | 0.3548 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
puri/puri-thai-albert-cased-v1 | 2adc97b14a2f4a5dd93ab41dfaa68ca18369f7f3 | 2020-11-15T06:43:20.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | puri | null | puri/puri-thai-albert-cased-v1 | 3 | null | transformers | 21,677 | Entry not found |
pzelasko/longformer-swda-nolower | dd4c38e58ab07291979f8f0b279e57da9d372f78 | 2022-02-13T01:42:35.000Z | [
"pytorch",
"longformer",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | pzelasko | null | pzelasko/longformer-swda-nolower | 3 | null | transformers | 21,678 | Entry not found |
qarib/bert-base-qarib_far | b81ea4c27361557e9b798e23e5ec3a12b2d3bf50 | 2021-04-04T07:59:36.000Z | [
"pytorch",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"dataset:Farasa",
"arxiv:2102.10684",
"transformers",
"tf",
"QARiB",
"qarib"
] | null | false | qarib | null | qarib/bert-base-qarib_far | 3 | null | transformers | 21,679 | ---
language: ar
tags:
- pytorch
- tf
- QARiB
- qarib
datasets:
- arabic_billion_words
- open_subtitles
- twitter
- Farasa
metrics:
- f1
widget:
- text: "و+قام ال+مدير [MASK]"
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB Farasa
The QCRI Arabic and Dialectal BERT (QARiB) model was trained on a collection of ~420 million tweets and ~180 million sentences of text.
The tweets were collected through the Twitter API using the language filter `lang:ar`. The text data is a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
QARiB is the Arabic word for "boat".
## Model and Parameters:
- Data size: 14B tokens
- Vocabulary: 64k
- Iterations: 10M
- Number of Layers: 12
## Training QARiB
See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> fill_mask = pipeline("fill-mask", model="qarib/bert-base-qarib_far")
>>> fill_mask("و+قام ال+مدير [MASK]")
>>> fill_mask("و+قام+ت ال+مدير+ة [MASK]")
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
```
## Evaluations:
## Model Weights and Vocab Download
From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
quangtran199hust/layoutlmv2_roige | 7d4b05529c11bef740546cc4c965e11e8b10a88f | 2021-10-28T07:32:00.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | quangtran199hust | null | quangtran199hust/layoutlmv2_roige | 3 | null | transformers | 21,680 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2_roige
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2_roige
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
racai/distilbert-base-romanian-uncased | 687c97b3f415326343cf90008020f8a4abc1ed62 | 2021-12-24T17:36:39.000Z | [
"pytorch",
"tf",
"jax",
"distilbert",
"ro",
"dataset:oscar",
"dataset:wikipedia",
"arxiv:2112.12650",
"transformers",
"license:mit"
] | null | false | racai | null | racai/distilbert-base-romanian-uncased | 3 | null | transformers | 21,681 | ---
language: ro
license: mit
datasets:
- oscar
- wikipedia
---
# Romanian DistilBERT
This repository contains the uncased Romanian DistilBERT (named Distil-RoBERT-base in the paper). The teacher model used for distillation is: [readerbench/RoBERT-base](https://huggingface.co/readerbench/RoBERT-base).
The model was introduced in [this paper](https://arxiv.org/abs/2112.12650). The adjacent code can be found
[here](https://github.com/racai-ai/Romanian-DistilBERT).
## Usage
```python
from transformers import AutoTokenizer, AutoModel
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")
# tokenize a test sentence
input_ids = tokenizer.encode("aceasta este o propoziție de test.", add_special_tokens=True, return_tensors="pt")
# run the tokens trough the model
outputs = model(input_ids)
print(outputs)
```
## Model Size
It is 35% smaller than its teacher `RoBERT-base`.
| Model | Size (MB) | Params (Millions) |
|--------------------------------|:---------:|:----------------:|
| RoBERT-base | 441 | 114 |
| distilbert-base-romanian-uncased | 282 | 72 |
## Evaluation
We evaluated the model in comparison with RoBERT-base on the following Romanian tasks:
- **UPOS**: Universal Part of Speech (F1-macro)
- **XPOS**: Extended Part of Speech (F1-macro)
- **NER**: Named Entity Recognition (F1-macro)
- **SAPN**: Sentiment Analysis - Positive vs Negative (Accuracy)
- **SAR**: Sentiment Analysis - Rating (F1-macro)
- **DI**: Dialect identification (F1-macro)
- **STS**: Semantic Textual Similarity (Pearson)
| Model | UPOS | XPOS | NER | SAPN | SAR | DI | STS |
|--------------------------------|:----:|:----:|:---:|:----:|:---:|:--:|:---:|
| RoBERT-base | 98.02 | 97.15 | 85.14 | 98.30 | 79.40 | 96.07 | 81.18 |
| distilbert-base-romanian-uncased | 97.12 | 95.79 | 83.11 | 98.01 | 79.58 | 96.11 | 79.80 |
### BibTeX entry and citation info
```bibtex
@article{avram2021distilling,
title={Distilling the Knowledge of Romanian BERTs Using Multiple Teachers},
author={Andrei-Marius Avram and Darius Catrina and Dumitru-Clementin Cercel and Mihai Dascălu and Traian Rebedea and Vasile Păiş and Dan Tufiş},
journal={ArXiv},
year={2021},
volume={abs/2112.12650}
}
``` |
ramonzaca/roberto | e531d40361b0b99ae2419e9bb61656fcc1440210 | 2021-05-20T19:51:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ramonzaca | null | ramonzaca/roberto | 3 | null | transformers | 21,682 | Entry not found |
ran/c10 | 012966eeef9e2432fdad94dd4e914ed9dc73eea6 | 2021-05-20T03:54:23.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ran | null | ran/c10 | 3 | null | transformers | 21,683 | Entry not found |
ran/y7 | f2745160cf9083fb512ed3f01cec88a9ba3e74d2 | 2021-05-20T03:58:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ran | null | ran/y7 | 3 | null | transformers | 21,684 | Entry not found |
rbawden/diacritic_restoration_fr | a8e9fd2902f840b149763860f00aa60ba6cd436a | 2022-04-15T14:56:21.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rbawden | null | rbawden/diacritic_restoration_fr | 3 | null | transformers | 21,685 | Entry not found |
redadmiral/headline-test | 168569e060eb0b293972111f2b81f6f490a78074 | 2021-12-29T01:43:08.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"de",
"dataset:redadmiral/autonlp-data-Headline-Generator",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | redadmiral | null | redadmiral/headline-test | 3 | null | transformers | 21,686 | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- redadmiral/autonlp-data-Headline-Generator
co2_eq_emissions: 651.3545590912366
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 453611714
- CO2 Emissions (in grams): 651.3545590912366
## Validation Metrics
- Loss: nan
- Rouge1: 2.8187
- Rouge2: 0.5508
- RougeL: 2.7396
- RougeLsum: 2.7446
- Gen Len: 9.7507
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/redadmiral/autonlp-Headline-Generator-453611714
``` |
redwoodresearch/classifier-18aug-train | 00ae35aa99ac3a56cb5a5f94cab4b7720a1614eb | 2021-09-21T17:03:24.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers"
] | text-classification | false | redwoodresearch | null | redwoodresearch/classifier-18aug-train | 3 | null | transformers | 21,687 | Entry not found |
reichenbach/wav2vec2-large-xls-r-300m-as | 9ac11d0b1de7a3fc2267b03e1ce400ca5ec272a3 | 2022-03-24T11:58:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"as",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | reichenbach | null | reichenbach/wav2vec2-large-xls-r-300m-as | 3 | null | transformers | 21,688 | ---
license: apache-2.0
language:
- as
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-as
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8318
- Wer: 0.5174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.882 | 25.0 | 400 | 1.2290 | 0.8182 |
| 0.8275 | 50.0 | 800 | 0.6835 | 0.5398 |
| 0.337 | 75.0 | 1200 | 0.7789 | 0.5107 |
| 0.2113 | 100.0 | 1600 | 0.8318 | 0.5174 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
### Test Evaluation
Common Voice Assamese Test Set (v7.0)
- WER: 0.7224
- CER: 0.2882 |
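For reference, a minimal transcription sketch with this checkpoint (assuming a public model, a 16 kHz mono recording, and the standard Wav2Vec2 CTC head; the audio path is a placeholder):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "reichenbach/wav2vec2-large-xls-r-300m-as"  # id taken from this card
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path -- any 16 kHz Assamese recording.
speech, _ = librosa.load("sample_assamese.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```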
researchaccount/sa_sub1 | d939c605f04dc890974b907f6ce59c10170501b7 | 2021-05-20T04:20:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
] | text-classification | false | researchaccount | null | researchaccount/sa_sub1 | 3 | null | transformers | 21,689 | ---
language: en
widget:
- text: "USER USER USER USER لاحول ولاقوه الا بالله 💔 💔 💔 💔 HASH TAG متي يصدر قرار العشرين ! ! ! ! ! !"
---
Sub 1 |
researchaccount/sa_sub5 | bbb440eb95413b26de062501baf81b420acf91e4 | 2021-05-20T04:26:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | researchaccount | null | researchaccount/sa_sub5 | 3 | null | transformers | 21,690 | Entry not found |
rexxar96/autonlp-roberta-large-finetuned-467612250 | 11859f8153ddbff57790a73e42f51bb31a10e34e | 2022-01-03T14:24:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:rexxar96/autonlp-data-roberta-large-finetuned",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | rexxar96 | null | rexxar96/autonlp-roberta-large-finetuned-467612250 | 3 | null | transformers | 21,691 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- rexxar96/autonlp-data-roberta-large-finetuned
co2_eq_emissions: 73.72876780772296
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 467612250
- CO2 Emissions (in grams): 73.72876780772296
## Validation Metrics
- Loss: 0.18261319398880005
- Accuracy: 0.9541659567217584
- Precision: 0.9530625832223701
- Recall: 0.9572049481778669
- AUC: 0.9901737875196123
- F1: 0.9551292743953294
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rexxar96/autonlp-roberta-large-finetuned-467612250
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rexxar96/autonlp-roberta-large-finetuned-467612250", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rexxar96/autonlp-roberta-large-finetuned-467612250", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
ricardo-filho/sbertimbau-large-quora-multitask | 51f9d657bcab712c6cf9ee8def4d790bb7fe0041 | 2021-08-18T06:02:15.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/sbertimbau-large-quora-multitask | 3 | null | sentence-transformers | 21,692 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8605 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11553 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
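Taken together, the two dataloaders and losses above correspond to a multitask `fit()` call along the following lines. This is a sketch only: the base checkpoint name and the tiny in-line datasets are assumptions standing in for the actual BERTimbau-large model and the Quora-style training pairs.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint (BERTimbau large); the card does not name it explicitly.
model = SentenceTransformer("neuralmind/bert-large-portuguese-cased")

# Toy stand-ins for the two training sets: paraphrase pairs for the ranking loss,
# labelled duplicate/non-duplicate pairs for the contrastive loss.
ranking_examples = [
    InputExample(texts=["Qual a capital do Brasil?", "Qual é a capital do Brasil?"]),
    InputExample(texts=["Como aprender Python?", "Qual a melhor forma de aprender Python?"]),
]
contrastive_examples = [
    InputExample(texts=["Qual a capital do Brasil?", "Qual é a capital do Brasil?"], label=1),
    InputExample(texts=["Qual a capital do Brasil?", "Quem descobriu o Brasil?"], label=0),
]

ranking_loader = DataLoader(ranking_examples, shuffle=True, batch_size=24)
contrastive_loader = DataLoader(contrastive_examples, shuffle=True, batch_size=24)

ranking_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
contrastive_loss = losses.OnlineContrastiveLoss(model)

# One (dataloader, loss) pair per objective -> multitask training, as described above.
model.fit(
    train_objectives=[(ranking_loader, ranking_loss), (contrastive_loader, contrastive_loss)],
    epochs=10,
    warmup_steps=1000,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```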
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
rizvandwiki/seq_classifier_model | 60b80889b139715306c16b73bd1f974bb371c8a8 | 2021-08-12T02:51:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | rizvandwiki | null | rizvandwiki/seq_classifier_model | 3 | null | transformers | 21,693 | Entry not found |
robkayinto/xlm-roberta-base-finetuned-panx-de | 5cb9fdcc9e79b531f99ea59059cf3874c9618b0b | 2022-07-13T17:10:32.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | robkayinto | null | robkayinto/xlm-roberta-base-finetuned-panx-de | 3 | null | transformers | 21,694 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
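For reference, a minimal sketch of running the resulting checkpoint as a German NER tagger (this assumes the checkpoint is public; the example sentence is only an illustration):
```python
from transformers import pipeline

# Assumed Hub id, taken from this card.
ner = pipeline(
    "token-classification",
    model="robkayinto/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

for entity in ner("Angela Merkel besuchte das Siemens-Werk in München."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```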
|
rohansingh/autonlp-Fake-news-detection-system-29906863 | f4d20a7e87e68b448a47f328246eb5909b825789 | 2021-11-06T12:24:22.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"hi",
"dataset:rohansingh/autonlp-data-Fake-news-detection-system",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | rohansingh | null | rohansingh/autonlp-Fake-news-detection-system-29906863 | 3 | null | transformers | 21,695 | ---
tags: autonlp
language: hi
widget:
- text: "I love AutoNLP 🤗"
datasets:
- rohansingh/autonlp-data-Fake-news-detection-system
co2_eq_emissions: 3.8624397961432106
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 29906863
- CO2 Emissions (in grams): 3.8624397961432106
## Validation Metrics
- Loss: 0.2536192238330841
- Accuracy: 0.9084807809640024
- Precision: 0.9421172886519421
- Recall: 0.9435545385202135
- AUC: 0.9517288050454876
- F1: 0.9428353658536586
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rohansingh/autonlp-Fake-news-detection-system-29906863
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rohansingh/autonlp-Fake-news-detection-system-29906863", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rohansingh/autonlp-Fake-news-detection-system-29906863", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
roschmid/my-first-model | f0e1814f332624453d3ccace8e5ba87821750078 | 2022-02-23T11:01:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | roschmid | null | roschmid/my-first-model | 3 | null | transformers | 21,696 | Entry not found |
rossanez/opus-mt-finetuned-en-es | 969e23ce8b52e24c08635b64a997355e1f0ce465 | 2021-11-29T22:50:12.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:opus_books",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/opus-mt-finetuned-en-es | 3 | null | transformers | 21,697 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: opus-mt-finetuned-en-es
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
args: en-es
metrics:
- name: Bleu
type: bleu
value: 21.5636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-finetuned-en-es
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9813
- Bleu: 21.5636
- Gen Len: 30.0992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.09 | 1.0 | 4382 | 1.9813 | 21.5636 | 30.0992 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
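For reference, a minimal inference sketch with the fine-tuned checkpoint (this assumes the model is public; the English sentence is only an illustration):
```python
from transformers import pipeline

# Assumed Hub id, taken from this card; the fine-tune translates English to Spanish.
translator = pipeline("translation_en_to_es", model="rossanez/opus-mt-finetuned-en-es")

result = translator("The book was lying open on the table.", max_length=64)
print(result[0]["translation_text"])
```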
|
rossanez/t5-small-finetuned-de-en-256-wd-01 | 2f58291f30a074af50decac70d7dab855ac8fb13 | 2021-12-01T00:48:47.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-256-wd-01 | 3 | null | transformers | 21,698 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256-wd-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-wd-01
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1202 | 7.5964 | 17.3996 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
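For reference, a minimal inference sketch with the fine-tuned checkpoint (this assumes the model is public; the task prefix follows the usual T5 convention and is an assumption about how the model was fine-tuned):
```python
from transformers import pipeline

# Assumed Hub id, taken from this card.
translator = pipeline("text2text-generation", model="rossanez/t5-small-finetuned-de-en-256-wd-01")

prompt = "translate German to English: Das Haus ist wunderbar."
print(translator(prompt, max_length=64)[0]["generated_text"])
```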
|
ruiqi-zhong/roberta-base-meta-tuning-test | 42f2321a589bf95d6d173a36285f709c5d28bcb7 | 2021-09-15T02:40:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ruiqi-zhong | null | ruiqi-zhong/roberta-base-meta-tuning-test | 3 | null | transformers | 21,699 | Entry not found |