modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sultan/BioM-ALBERT-xxlarge-PMC | 047499f199be4e57c5dd131a355914131d9c9669 | 2021-10-12T21:24:20.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sultan | null | sultan/BioM-ALBERT-xxlarge-PMC | 0 | 1 | transformers | 36,100 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance
of biomedical language models recently
has been a subject for investigation. In
this paper, we empirically study biomedical
domain adaptation with large transformer models
using different design choices. We evaluate
the performance of our pretrained models
against other existing biomedical language
models in the literature. Our results show that
we achieve state-of-the-art results on several
biomedical domain tasks despite using similar
or less computational cost compared to other
models in the literature. Our findings highlight
the significant effect of design choices on
improving the performance of biomedical language
models.
# Model Description
This model was pre-trained on PMC full-text articles for a further 64K steps with a batch size of 8192, initializing its weights from our BioM-ALBERT-xxlarge model. Thus, the total number of training steps for this model is 264K + 64K = 328K. The model is very large due to its hidden size (4096). To help researchers with limited resources fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPUs, which are provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb). In this example, we achieve a micro F1 score of 80.74 on the ChemProt task with BioM-ALBERT-xxlarge. Fine-tuning takes 43 minutes for 5 epochs.
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We have also updated the repo with a couple of examples on how to fine-tune LMs on text classification and question answering tasks such as ChemProt, SQuAD, and BioASQ.
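For a quick local test outside the notebooks, here is a minimal sketch using the standard Hugging Face `transformers` fill-mask API (the example sentence is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Load the checkpoint; the xxlarge model requires substantial memory.
tokenizer = AutoTokenizer.from_pretrained("sultan/BioM-ALBERT-xxlarge-PMC")
model = AutoModelForMaskedLM.from_pretrained("sultan/BioM-ALBERT-xxlarge-PMC")

# Query the fill-mask head on a biomedical sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Aspirin is a widely used [MASK] drug."))
```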
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support of the TensorFlow Research Cloud (TFRC) team, who granted us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
summaria/qa-qg-t5 | 6f728ef29967afed215928834452016a1d3205a7 | 2021-07-08T03:33:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | summaria | null | summaria/qa-qg-t5 | 0 | null | transformers | 36,101 | Entry not found |
summaria/qa-t5 | d49e0508c1a9feb1e5c7d3cc182714d72398a97d | 2021-07-08T05:27:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | summaria | null | summaria/qa-t5 | 0 | null | transformers | 36,102 | Entry not found |
sunhao666/chi-sina | 616f37a556fef0821cbff3788c3d340c2842c759 | 2021-06-04T06:43:10.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | sunhao666 | null | sunhao666/chi-sina | 0 | null | transformers | 36,103 | Entry not found |
sunitha/Roberta_Custom_Squad_DS | 214beb56a4bdc41df96f2721e7795a3026a128a4 | 2022-02-17T18:00:36.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/Roberta_Custom_Squad_DS | 0 | null | transformers | 36,104 | Entry not found |
sunitha/Trial_3_Results | 7c2b76614298a13fb97f964c7cbfee9d6b15b21c | 2022-02-05T19:27:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/Trial_3_Results | 0 | null | transformers | 36,105 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Trial_3_Results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial_3_Results
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
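For reference, a hedged sketch of the equivalent `transformers.TrainingArguments` (the `output_dir` is illustrative; data preprocessing and the `Trainer` call are omitted):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="Trial_3_Results",   # illustrative output directory
    learning_rate=2e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                      # "Native AMP" mixed precision
)
```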
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
sunitha/config_distilbert_model | b581e9479015875f7f498d74862461c4df792bb4 | 2022-02-16T05:56:14.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sunitha | null | sunitha/config_distilbert_model | 0 | null | transformers | 36,106 | Entry not found |
supah-hakah/distilgpt2-finetuned-wikitext2 | df74e52f9e1092fbc170241c9f84810120df218c | 2021-08-19T12:59:37.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-generation | false | supah-hakah | null | supah-hakah/distilgpt2-finetuned-wikitext2 | 0 | null | transformers | 36,107 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: distilgpt2-finetuned-wikitext2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7598 | 1.0 | 2334 | 3.6654 |
| 3.6321 | 2.0 | 4668 | 3.6453 |
| 3.6076 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
superb-test-user/distilbert-base-uncased-finetuned-squad-d5716d28 | 58b7b06afd1d8d562b4ab12f3f10ff268d7c579a | 2021-09-30T18:04:02.000Z | [
"pytorch",
"en",
"dataset:squad",
"arxiv:1910.01108",
"question-answering",
"license:apache-2.0"
] | question-answering | false | superb-test-user | null | superb-test-user/distilbert-base-uncased-finetuned-squad-d5716d28 | 0 | null | null | 36,108 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
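A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` question-answering pipeline (the question and context strings are illustrative):
```python
from transformers import pipeline

# Hedged sketch: load the distilled checkpoint for extractive QA.
qa = pipeline(
    "question-answering",
    model="superb-test-user/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="Which model acted as the teacher?",
    context="A DistilBERT student was distilled from a BERT model fine-tuned on SQuAD v1.1.",
)
print(result["answer"], result["score"])
```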
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
suwani/distilbert-base-uncased-finetuned-ner | fc55273ae479a03be76a0e00edbe41ddce1b76b1 | 2021-09-29T08:22:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | suwani | null | suwani/distilbert-base-uncased-finetuned-ner | 0 | null | transformers | 36,109 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2787
- Precision: 0.6403
- Recall: 0.6929
- F1: 0.6655
- Accuracy: 0.9100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3360 | 0.5596 | 0.5992 | 0.5788 | 0.8956 |
| 0.4686 | 2.0 | 576 | 0.2901 | 0.6061 | 0.7231 | 0.6594 | 0.9063 |
| 0.4686 | 3.0 | 864 | 0.2787 | 0.6403 | 0.6929 | 0.6655 | 0.9100 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
sv/gpt2-finetuned-nft-shakes | bd8bf83cea2742e6423364a5cf6279821fa51e69 | 2021-09-06T16:59:11.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | sv | null | sv/gpt2-finetuned-nft-shakes | 0 | null | transformers | 36,110 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt2-finetuned-nft-shakes
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-nft-shakes
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 306 | 3.9679 |
| 4.2957 | 2.0 | 612 | 3.7979 |
| 4.2957 | 3.0 | 918 | 3.7566 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
svanhvit/XLMR-ENIS-finetuned-conll_ner | a6026b5240de8a1ad1b905b3b877151f62096642 | 2021-10-08T15:14:21.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | svanhvit | null | svanhvit/XLMR-ENIS-finetuned-conll_ner | 0 | null | transformers | 36,111 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-conll_ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8754622097322882
- name: Recall
type: recall
value: 0.8425622775800712
- name: F1
type: f1
value: 0.8586972290729725
- name: Accuracy
type: accuracy
value: 0.9860744627305035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-conll_ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Precision: 0.8755
- Recall: 0.8426
- F1: 0.8587
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0493 | 1.0 | 2904 | 0.0673 | 0.8588 | 0.8114 | 0.8344 | 0.9841 |
| 0.0277 | 2.0 | 5808 | 0.0620 | 0.8735 | 0.8275 | 0.8499 | 0.9855 |
| 0.0159 | 3.0 | 8712 | 0.0713 | 0.8755 | 0.8426 | 0.8587 | 0.9861 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
svanhvit/XLMR-ENIS-finetuned-ner-finetuned-conll_ner | c33f6f4678e06f2d0765b397cab676e6a7b73fdc | 2021-10-08T13:38:38.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | svanhvit | null | svanhvit/XLMR-ENIS-finetuned-ner-finetuned-conll_ner | 0 | null | transformers | 36,112 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner-finetuned-conll_ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8720365189221028
- name: Recall
type: recall
value: 0.8429893238434164
- name: F1
type: f1
value: 0.8572669368847712
- name: Accuracy
type: accuracy
value: 0.9857922913838598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner-finetuned-conll_ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS-finetuned-ner](https://huggingface.co/vesteinn/XLMR-ENIS-finetuned-ner) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0770
- Precision: 0.8720
- Recall: 0.8430
- F1: 0.8573
- Accuracy: 0.9858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0461 | 1.0 | 2904 | 0.0647 | 0.8588 | 0.8107 | 0.8341 | 0.9842 |
| 0.0244 | 2.0 | 5808 | 0.0704 | 0.8691 | 0.8296 | 0.8489 | 0.9849 |
| 0.0132 | 3.0 | 8712 | 0.0770 | 0.8720 | 0.8430 | 0.8573 | 0.9858 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
sven-nm/roberta_classics_ner | 5ad6c9015b146d1bbf281b1eb41c260ca739b945 | 2022-03-18T10:14:20.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"transformers",
"classics",
"citation mining",
"autotrain_compatible"
] | token-classification | false | sven-nm | null | sven-nm/roberta_classics_ner | 0 | null | transformers | 36,113 | ---
language:
- en
tags:
- classics
- citation mining
widget:
- text: "Homer's Iliad opens with an invocation to the muse (1. 1)."
---
### Model and entities
`roberta_classics_ner` is a domain-specific RoBERTa-based model for named entity recognition in Classical Studies. It recognises bibliographical entities, such as:
| id | label | description | Example |
| --- | ------------- | ------------------------------------------- | --------------------- |
| 0 | 'O' | Out of entity | |
| 1 | 'B-AAUTHOR' | Ancient authors | *Herodotus* |
| 2 | 'I-AAUTHOR' | | |
| 3 | 'B-AWORK' | The title of an ancient work | *Symposium*, *Aeneid* |
| 4 | 'I-AWORK' | | |
| 5 | 'B-REFAUWORK' | A structured reference to an ancient work | *Homer, Il.* |
| 6 | 'I-REFAUWORK' | | |
| 7 | 'B-REFSCOPE' | The scope of a reference | *II.1.993a30–b11* |
| 8 | 'I-REFSCOPE' | | |
| 9 | 'B-FRAGREF' | A reference to fragmentary texts or scholia | *Frag. 19. West* |
| 10 | 'I-FRAGREF' | | |
### Example
```
B-AAUTHOR B-AWORK B-REFSCOPE
Homer 's Iliad opens with an invocation to the muse ( 1. 1).
```
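A minimal usage sketch with the standard `transformers` token-classification pipeline (the sentence is the widget example above; `aggregation_strategy="simple"` is assumed to merge B-/I- pieces into whole entities):
```python
from transformers import pipeline

# Hedged sketch: run the citation-mining tagger over a sentence.
ner = pipeline(
    "token-classification",
    model="sven-nm/roberta_classics_ner",
    aggregation_strategy="simple",  # group B-/I- tokens into entity spans
)
print(ner("Homer's Iliad opens with an invocation to the muse (1. 1)."))
```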
### Dataset
`roberta_classics_ner` was fine-tuned and evaluated on `EpiBau`, a dataset which has not been released publicly yet. It is composed of four volumes of [Structures of Epic Poetry](https://www.epische-bauformen.uni-rostock.de/), a compendium on the narrative patterns and structural elements in ancient epic.
Entity counts of the `EpiBau` dataset are the following:
| | train-set | dev-set | test-set |
| -------------- | --------- | ------- | -------- |
| word count | 712462 | 125729 | 122324 |
| AAUTHOR | 4436 | 1368 | 1511 |
| AWORK | 3145 | 780 | 670 |
| REFAUWORK | 5102 | 988 | 1209 |
| REFSCOPE | 14768 | 3193 | 2847 |
| FRAGREF | 266 | 29 | 33 |
| total entities | 13822 | 1415 | 2419 |
### Results
The model was developed in the context of experiments reported [here](http://infoscience.epfl.ch/record/291236?&ln=en). Trained and tested on `EpiBau` with an 85-15 split, the model yields a general F1 score of **.82** (micro-average). Detailed scores are displayed below. Evaluation was performed with the [CLEF-HIPE-scorer](https://github.com/impresso/CLEF-HIPE-2020-scorer) in strict mode.
| metric | AAUTHOR | AWORK | REFSCOPE | REFAUWORK |
| --------- | ------- | ----- | -------- | --------- |
| F1 | .819 | .796 | .863 | .756 |
| Precision | .842 | .818 | .860 | .755 |
| Recall | .797 | .766 | .866 | .756 |
Questions, remarks, help, or contributions? Get in touch [here](https://github.com/AjaxMultiCommentary); we'll be happy to chat!
|
swapnil165/DialoGPT-small-Rick | a9af2357ee48f435019f9395daed3a5ec187b498 | 2021-10-12T02:33:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | swapnil165 | null | swapnil165/DialoGPT-small-Rick | 0 | null | transformers | 36,114 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
swcrazyfan/KingJamesify-T5-base-lm-adapt | 9408ad17b96775c762a963a76fde26b43a712e1e | 2022-02-21T04:33:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | swcrazyfan | null | swcrazyfan/KingJamesify-T5-base-lm-adapt | 0 | null | transformers | 36,115 | ---
license: apache-2.0
---
|
swcrazyfan/KingJamesify-T5-large | d2076e140acada81673e207666376436576d0f93 | 2022-03-02T10:53:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | swcrazyfan | null | swcrazyfan/KingJamesify-T5-large | 0 | null | transformers | 36,116 | ---
license: apache-2.0
---
This model was fine-tuned to “translate” any English text into 17th-century style English.
The name comes from the dataset used for fine-tuning: modern Bible text as the input and the famous King James Bible as the output.
To test, prepend "kingify: " to anything you want to translate.
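A minimal sketch with the standard `transformers` text2text-generation pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

# Hedged sketch: "kingify" a modern English sentence.
kingify = pipeline("text2text-generation", model="swcrazyfan/KingJamesify-T5-large")
output = kingify("kingify: Do not be afraid to speak the truth.", max_length=64)
print(output[0]["generated_text"])
```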
Generally, it does a good job with phrases, concepts, and vocabulary that appear in the Bible. Otherwise, the model will likely just modify the grammar and surrounding words while leaving any word without a known 17th-century equivalent unchanged. |
swcrazyfan/TB-125M | 458bb8f18ac1f475c81e8c8e81203995dc845f98 | 2021-07-03T03:37:21.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TB-125M | 0 | null | transformers | 36,117 | Entry not found |
swcrazyfan/TE-v3-3K | 3675a9c478bf64d9046e2d3baf89558ef0d0e9e6 | 2021-05-28T06:38:28.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TE-v3-3K | 0 | null | transformers | 36,118 | Entry not found |
swcrazyfan/TE-v3-8K | eaf75b2d5973501dce9f6ca38613d68617dfb09a | 2021-05-28T12:26:43.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TE-v3-8K | 0 | null | transformers | 36,119 | Entry not found |
swcrazyfan/TEFL-2.7B-10K | 2067c796d6e9e7b7296c66c2a9c55647b5ea32cd | 2021-06-10T03:25:02.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TEFL-2.7B-10K | 0 | null | transformers | 36,120 | Entry not found |
swcrazyfan/TEFL-2.7B-15K | 0a1ed9d8dbe4db525f27102833bb5ae687756f49 | 2021-06-10T09:20:21.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TEFL-2.7B-15K | 0 | null | transformers | 36,121 | Entry not found |
swcrazyfan/TEFL-2.7B-4K | 12c27deb7942c456789c80f1567e35a743329dc6 | 2021-06-04T15:58:19.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/TEFL-2.7B-4K | 0 | null | transformers | 36,122 | Entry not found |
swcrazyfan/gpt-neo-1.3B-TBL | 1777406e4594978eb2b5807649002b4534bd58ea | 2021-05-21T05:43:27.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | swcrazyfan | null | swcrazyfan/gpt-neo-1.3B-TBL | 0 | null | transformers | 36,123 | Entry not found |
sybk/highkick-soonjae-v2 | e40379c7be0cdbf13b63563bb7fc4c436b85628c | 2021-05-31T04:23:02.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | false | sybk | null | sybk/highkick-soonjae-v2 | 0 | null | transformers | 36,124 | Entry not found |
sybk/highkick-soonjae | 7a5e8be132f5c14f8ed0102a44abf7bcda9c0ae6 | 2021-05-23T14:38:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | sybk | null | sybk/highkick-soonjae | 0 | null | transformers | 36,125 | Entry not found |
sybk/hk-backward | 7c02e09ef972133e5b055f7c6a575563415d77d2 | 2021-05-23T14:41:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | sybk | null | sybk/hk-backward | 0 | null | transformers | 36,126 | Entry not found |
sybk/hk_backward_v2 | 47d1a99334519c2223fd710c19d13ff60fa0e8e3 | 2021-05-31T04:17:16.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | false | sybk | null | sybk/hk_backward_v2 | 0 | null | transformers | 36,127 | Entry not found |
tabo/checkpoint-500-finetuned-squad | 65a5245195d9ac4df0f8d21976ba6e37a0128d1d | 2021-12-14T09:40:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | tabo | null | tabo/checkpoint-500-finetuned-squad | 0 | null | transformers | 36,128 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: checkpoint-500-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint-500-finetuned-squad
This model was trained from scratch on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tadejmagajna/flair-sl-pos | ba815bf66da987b021803b26d4245c0012bfba8e | 2022-01-05T15:07:06.000Z | [
"pytorch",
"sl",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | tadejmagajna | null | tadejmagajna/flair-sl-pos | 0 | null | flair | 36,129 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: sl
widget:
- text: "Danes je lep dan."
---
## Slovene Part-of-speech (PoS) Tagging for Flair
This is a Slovene part-of-speech (PoS) tagger trained on the [Slovenian UD Treebank](https://github.com/UniversalDependencies/UD_Slovenian-SSJ) using the Flair NLP framework.
The tagger is trained using a combination of forward Slovene contextual string embeddings, backward Slovene contextual string embeddings and classic Slovene FastText embeddings.
F-score (micro): **94.96**
The model is trained on a large number (500+) of different tags, which are described at [https://universaldependencies.org/tagset-conversion/sl-multext-uposf.html](https://universaldependencies.org/tagset-conversion/sl-multext-uposf.html).
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("tadejmagajna/flair-sl-pos")
# make example sentence
sentence = Sentence("Danes je lep dan.")
# predict PoS tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted PoS spans
print('The following PoS tags are found:')
# iterate over parts of speech and print
for tag in sentence.get_spans('pos'):
print(tag)
```
This prints out the following output:
```
Sentence: "Danes je lep dan ." [− Tokens: 5 − Token-Labels: "Danes <Rgp> je <Va-r3s-n> lep <Agpmsnn> dan <Ncmsn> . <Z>"]
The following PoS tags are found:
Span [1]: "Danes" [− Labels: Rgp (1.0)]
Span [2]: "je" [− Labels: Va-r3s-n (1.0)]
Span [3]: "lep" [− Labels: Agpmsnn (0.9999)]
Span [4]: "dan" [− Labels: Ncmsn (1.0)]
Span [5]: "." [− Labels: Z (1.0)]
```
---
### Training: Script to train this model
The following standard Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import UD_SLOVENIAN
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = UD_SLOVENIAN()
# 2. what tag do we want to predict?
tag_type = 'pos'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize embeddings
embedding_types = [
WordEmbeddings('sl'),
FlairEmbeddings('sl-forward'),
FlairEmbeddings('sl-backward'),
]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
# 7. start training
trainer.train('resources/taggers/pos-slovene',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/). |
tal-yifat/injury-report-test | b4eb74dd2fb31972315092083fd96e3c73936d77 | 2022-01-18T16:24:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | tal-yifat | null | tal-yifat/injury-report-test | 0 | null | transformers | 36,130 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: injury-report-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# injury-report-test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8158 | 1.0 | 6633 | 1.7368 |
| 1.6984 | 2.0 | 13266 | 1.6198 |
| 1.6209 | 3.0 | 19899 | 1.5800 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tanmayplanet32/english-model | 0c97244e7c9c9dcc99c1ae63773f15fb9621788b | 2021-08-18T16:48:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | tanmayplanet32 | null | tanmayplanet32/english-model | 0 | null | transformers | 36,131 |
# Wav2vec2-Large-English
Fine-tuned [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on English using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
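A minimal transcription sketch with the standard `transformers` automatic-speech-recognition pipeline (`speech.wav` is a placeholder for a 16 kHz audio file):
```python
from transformers import pipeline

# Hedged sketch: transcribe a local 16 kHz English audio file.
asr = pipeline("automatic-speech-recognition", model="tanmayplanet32/english-model")
print(asr("speech.wav")["text"])  # "speech.wav" is a placeholder path
```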
|
tareknaous/bart-daily-dialog | 764f80cb4d63a591099aeda84cc0083324316341 | 2022-02-21T08:51:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/bart-daily-dialog | 0 | null | transformers | 36,132 | Entry not found |
tau/splinter-large | 3d409d83a89d3e4989743e450001275891ceb22c | 2021-08-17T14:18:58.000Z | [
"pytorch",
"splinter",
"question-answering",
"en",
"arxiv:2108.05857",
"transformers",
"SplinterModel",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | tau | null | tau/splinter-large | 0 | null | transformers | 36,133 | ---
language: en
tags:
- splinter
- SplinterModel
license: apache-2.0
---
# Splinter large model
Splinter-large is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive.
Note (1): This model **doesn't** contain the pretrained weights for the QASS layer (see paper for details), and therefore the QASS layer is randomly initialized upon loading it. For the model **with** those weights, see [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass).
Note (2): Splinter-large was trained after the paper was released, so its results are not reported there. However, this model outperforms the base model by large margins. For example, on SQuAD, the model is able to reach 80% F1 given only 128 examples, whereas the base model obtains only ~73%. See the results for Splinter-large in the Appendix of [this paper](https://arxiv.org/pdf/2108.05857.pdf).
## Model description
Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions).
## Intended uses & limitations
The prime use for this model is few-shot extractive QA.
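A minimal loading sketch, assuming the standard `transformers` Auto classes resolve the Splinter architecture (remember that the QASS layer in this checkpoint is randomly initialized, so the model is intended to be fine-tuned on a few QA examples before use):
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Hedged sketch: load Splinter-large for extractive QA fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("tau/splinter-large")
model = AutoModelForQuestionAnswering.from_pretrained("tau/splinter-large")
```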
## Pretraining
The model was pretrained on a v3-32 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details.
### BibTeX entry and citation info
```bibtex
@inproceedings{ram-etal-2021-shot,
title = "Few-Shot Question Answering by Pretraining Span Selection",
author = "Ram, Ori and
Kirstain, Yuval and
Berant, Jonathan and
Globerson, Amir and
Levy, Omer",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.239",
doi = "10.18653/v1/2021.acl-long.239",
pages = "3066--3079",
}
``` |
teacookies/autonlp-more_fine_tune_24465520-26265897 | bb032bc40272a9143a0edb970a50360c9223a6f1 | 2021-10-25T09:21:10.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265897 | 0 | null | transformers | 36,134 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 81.7509252560808
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265897
- CO2 Emissions (in grams): 81.7509252560808
## Validation Metrics
- Loss: 0.5754176378250122
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265897
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265897", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265897", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265898 | 32151360ca95c771a61d4fd9477ba2aa19a793f7 | 2021-10-25T09:22:22.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265898 | 0 | null | transformers | 36,135 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 82.78379967029494
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265898
- CO2 Emissions (in grams): 82.78379967029494
## Validation Metrics
- Loss: 0.5732079148292542
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265898
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265898", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265899 | fe0b555762c07b69219b3549715004a36b78e6e6 | 2021-10-25T09:51:18.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265899 | 0 | null | transformers | 36,136 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 124.66009281731397
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265899
- CO2 Emissions (in grams): 124.66009281731397
## Validation Metrics
- Loss: 0.7011443972587585
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265899
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265899", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265899", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265900 | 3b4ddab0b5121464a518e434431a421f1a8806ac | 2021-10-25T09:51:20.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265900 | 0 | null | transformers | 36,137 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 123.16270720220912
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265900
- CO2 Emissions (in grams): 123.16270720220912
## Validation Metrics
- Loss: 0.6387976408004761
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265900
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265900", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265900", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265901 | 3a12985355f7301a14a69160049b9d31cb631d66 | 2021-10-25T09:21:03.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265901 | 0 | null | transformers | 36,138 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 80.04360178242067
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265901
- CO2 Emissions (in grams): 80.04360178242067
## Validation Metrics
- Loss: 0.5551259517669678
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265901
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265901", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265901", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265902 | f638248b2085ae4122ffd68dc0e59cbd29b27e75 | 2021-10-25T09:22:00.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265902 | 0 | null | transformers | 36,139 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 83.78453848505326
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265902
- CO2 Emissions (in grams): 83.78453848505326
## Validation Metrics
- Loss: 0.5470030903816223
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265902
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265902", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265905 | a54e425c9ccdfbda6bb5538c930afa79a40f7f95 | 2021-10-25T09:32:48.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265905 | 0 | null | transformers | 36,140 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 103.35758036182682
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265905
- CO2 Emissions (in grams): 103.35758036182682
## Validation Metrics
- Loss: 0.5223112106323242
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265905
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265905", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265905", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265906 | bb232483ee29b78f2de8f5022bfece3173c3cd60 | 2021-10-25T09:22:17.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265906 | 0 | null | transformers | 36,141 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 83.00580438705762
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265906
- CO2 Emissions (in grams): 83.00580438705762
## Validation Metrics
- Loss: 0.5259918570518494
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265906
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265906", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265906", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265907 | 72cf012f02f4371b2bfb2cf479fedc2b0f7bc744 | 2021-10-25T09:35:36.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265907 | 0 | null | transformers | 36,142 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 103.5636883689371
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265907
- CO2 Emissions (in grams): 103.5636883689371
## Validation Metrics
- Loss: 0.6072460412979126
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265907
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265907", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265910 | 20b2e9f562f62d0737fc496bda40cdf69c1611c1 | 2021-10-25T09:21:45.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265910 | 0 | null | transformers | 36,143 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 77.64468929470678
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265910
- CO2 Emissions (in grams): 77.64468929470678
## Validation Metrics
- Loss: 5.950643062591553
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265910
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265910", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265910", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-more_fine_tune_24465520-26265911 | a65ed505282e289f26dad288537b36fff15b83ba | 2021-10-25T09:35:36.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-more_fine_tune_24465520",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-more_fine_tune_24465520-26265911 | 0 | null | transformers | 36,144 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-more_fine_tune_24465520
co2_eq_emissions: 97.58591836686978
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 26265911
- CO2 Emissions (in grams): 97.58591836686978
## Validation Metrics
- Loss: 6.2383246421813965
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-more_fine_tune_24465520-26265911
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-more_fine_tune_24465520-26265911", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465514 | d1619e096997b1f9d7e6f501ffc07289853c7931 | 2021-10-22T08:10:51.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465514 | 0 | null | transformers | 36,145 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 54.44076291568145
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465514
- CO2 Emissions (in grams): 54.44076291568145
## Validation Metrics
- Loss: 0.5786784887313843
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465514
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465514", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465514", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465515 | 695f210a49806aba360209a83d88c02c0546889c | 2021-10-22T08:11:45.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465515 | 0 | null | transformers | 36,146 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 56.45146749922553
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465515
- CO2 Emissions (in grams): 56.45146749922553
## Validation Metrics
- Loss: 0.5932255387306213
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465515
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465515", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465517 | 023cd2eb233fae9a0f0d32d2fdd03b50d99152db | 2021-10-22T08:13:41.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465517 | 0 | null | transformers | 36,147 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 54.75747617143382
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465517
- CO2 Emissions (in grams): 54.75747617143382
## Validation Metrics
- Loss: 0.6653227806091309
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465517
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465517", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465517", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465518 | 0461b9c8468eadc480518ed7f1cb4eb6d522c8bd | 2021-10-22T08:04:33.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465518 | 0 | null | transformers | 36,148 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 45.268576304018616
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465518
- CO2 Emissions (in grams): 45.268576304018616
## Validation Metrics
- Loss: 0.5742421746253967
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465518
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465518", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465518", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465520 | a309de3e4935a8eb401dd43c7e0534ff77120127 | 2021-10-22T08:13:49.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465520 | 0 | null | transformers | 36,149 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 57.56554511511173
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465520
- CO2 Emissions (in grams): 57.56554511511173
## Validation Metrics
- Loss: 0.6455457806587219
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465520
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465520", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465520", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465522 | 0f956f97426bf72a6fbf2d5f2cf7d93d39b62600 | 2021-10-22T08:05:40.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465522 | 0 | null | transformers | 36,150 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 44.450538076574766
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465522
- CO2 Emissions (in grams): 44.450538076574766
## Validation Metrics
- Loss: 0.5572742223739624
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465522
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465522", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465522", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teacookies/autonlp-roberta-base-squad2-24465524 | d28e4b6e2353ebcf3c5b3e77e61c70a4bfd94117 | 2021-10-22T08:14:00.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465524 | 0 | null | transformers | 36,151 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 58.51753681929935
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465524
- CO2 Emissions (in grams): 58.51753681929935
## Validation Metrics
- Loss: 0.5759999752044678
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465524
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465524", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
teleportHQ/predicto_tsx | 6986d6fc1571598e64c3f37a4e16bc9df864db05 | 2021-05-23T13:05:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | teleportHQ | null | teleportHQ/predicto_tsx | 0 | null | transformers | 36,152 | predicto css model
|
tennessejoyce/titlewave-t5-small | 2f07d369f98429e80bb53886855ec49a93819466 | 2021-03-09T04:03:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tennessejoyce | null | tennessejoyce/titlewave-t5-small | 0 | 1 | transformers | 36,153 | # Titlewave: t5-small
This is one of two models used in the Titlewave project. See https://github.com/tennessejoyce/TitleWave for more information.
This model was fine-tuned on a dataset of Stack Overflow posts, with a ConditionalGeneration head that summarizes the body of a question in order to suggest a title.
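For reference, here is a minimal usage sketch. It assumes the standard Hugging Face `summarization` pipeline and an illustrative question body; neither comes from the original card, and the checkpoint may also work with the generic `text2text-generation` pipeline.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint (summarization-style ConditionalGeneration head)
title_generator = pipeline("summarization", model="tennessejoyce/titlewave-t5-small")

body = ("I keep getting a KeyError when I look up a value in a Python dict, "
        "even though I'm sure the key exists. What am I doing wrong?")
print(title_generator(body, max_length=32)[0]["summary_text"])
```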
|
terri1102/wav2vec2-base-timit-demo-colab | 63fb562fb3947297c466236feeaab4a47d9ac6cf | 2021-10-29T20:57:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | terri1102 | null | terri1102/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 36,154 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4275
- Wer: 0.3380
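As a rough illustration of how such a checkpoint is typically used for inference, here is a hedged sketch; the file name `example.wav`, the resampling step, and the assumption that the processor files were saved alongside the model are not from this card.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("terri1102/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("terri1102/wav2vec2-base-timit-demo-colab")

# Load a clip and resample to the 16 kHz mono audio the model expects
speech, sample_rate = torchaudio.load("example.wav")
speech = torchaudio.transforms.Resample(sample_rate, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```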
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.707 | 4.0 | 500 | 2.1164 | 1.0081 |
| 0.9098 | 8.0 | 1000 | 0.4767 | 0.4694 |
| 0.304 | 12.0 | 1500 | 0.4063 | 0.4007 |
| 0.1754 | 16.0 | 2000 | 0.4179 | 0.3640 |
| 0.1355 | 20.0 | 2500 | 0.4223 | 0.3585 |
| 0.1166 | 24.0 | 3000 | 0.4286 | 0.3457 |
| 0.0835 | 28.0 | 3500 | 0.4275 | 0.3380 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
testorg2/larger_fork | e04d38a7d68c60a7a95390045400a555127ab033 | 2021-11-02T09:42:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"multilingual",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | testorg2 | null | testorg2/larger_fork | 0 | null | sentence-transformers | 36,155 | ---
pipeline_tag: sentence-similarity
language: multilingual
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
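For instance, the embeddings can be scored against each other for semantic search. The snippet below is an illustrative extension of the example above, using sentence-transformers' `util.cos_sim` helper (available in recent versions); the query and candidate documents are made up.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

query_emb = model.encode("How do I bake bread?", convert_to_tensor=True)
doc_emb = model.encode(["Instructions for baking bread", "Manual de reparación de coches"],
                       convert_to_tensor=True)

# Cosine similarity between the query and each candidate document
print(util.cos_sim(query_emb, doc_emb))
```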
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
thesamuelpena/Dialog-medium-Sonic | 07d41f5fc7bd2356b81cd5080f4a76b8f6943c23 | 2021-11-14T06:21:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | thesamuelpena | null | thesamuelpena/Dialog-medium-Sonic | 0 | null | transformers | 36,156 | ---
tags:
- conversational
---
# Sonic DialoGPT Model
thingsu/koDPR_question | ae8cfb1aa3da47c61e607d404d622df3a4d8f8fa | 2021-05-24T02:47:00.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | thingsu | null | thingsu/koDPR_question | 0 | 3 | transformers | 36,157 | Fine-tuned the kykim/bert-kor-base model as a dense passage retrieval context encoder on the KLUE dataset.
This link shows the experiment results: https://wandb.ai/thingsu/DenseRetrieval
Corpus: Korean Wikipedia Corpus
Training Strategy:
- Pretrained Model: kykim/bert-kor-base
- Inverse Cloze Task: 16 epochs, on KorQuAD v1.0 and the KLUE MRC dataset
- In-batch Negatives: 12 epochs, on the KLUE MRC dataset, with random sampling among the Sparse Retrieval (TF-IDF) top 100 passages per query
You need to use the Korean Wikipedia corpus.
<pre>
<code>
from transformers import AutoTokenizer, BertPreTrainedModel, BertModel

class BertEncoder(BertPreTrainedModel):
    def __init__(self, config):
        super(BertEncoder, self).__init__(config)
        self.bert = BertModel(config)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask, token_type_ids)
        pooled_output = outputs[1]
        return pooled_output
model_name = 'kykim/bert-kor-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
q_encoder = BertEncoder.from_pretrained("thingsu/koDPR_question")
p_encoder = BertEncoder.from_pretrained("thingsu/koDPR_context")
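
# --- Illustrative continuation (not part of the original card) ---
# Score a query against a passage with the two encoders. The example
# sentences, truncation lengths, and dot-product scoring are assumptions.
import torch

question = "세종대왕은 언제 태어났나요?"
passage = "세종은 1397년에 태어난 조선의 제4대 국왕이다."

q_inputs = tokenizer(question, return_tensors="pt", truncation=True, max_length=64)
p_inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    q_emb = q_encoder(**q_inputs)   # (1, hidden_size)
    p_emb = p_encoder(**p_inputs)   # (1, hidden_size)

score = torch.matmul(q_emb, p_emb.T)  # higher score = more relevant passage
print(score.item())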
</code>
</pre> |
thorduragust/IceBERT-finetuned-ner | 2b5c72ce3fbd3dfbd9baf2aa00181373eed43e30 | 2021-10-05T16:36:22.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | thorduragust | null | thorduragust/IceBERT-finetuned-ner | 0 | null | transformers | 36,158 | ---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8948412698412699
- name: Recall
type: recall
value: 0.86222965706775
- name: F1
type: f1
value: 0.878232824195217
- name: Accuracy
type: accuracy
value: 0.9851596438314519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0787
- Precision: 0.8948
- Recall: 0.8622
- F1: 0.8782
- Accuracy: 0.9852
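As a quick illustration of intended use, here is a hedged sketch of running the checkpoint through the token-classification pipeline; the Icelandic example sentence is not from this card, and you may want to pass an `aggregation_strategy` (available in recent transformers releases) to group sub-word tokens into whole entities.
```python
from transformers import pipeline

ner = pipeline("token-classification", model="thorduragust/IceBERT-finetuned-ner")
print(ner("Halldór Laxness fæddist í Reykjavík árið 1902."))
```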
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0526 | 1.0 | 2904 | 0.0746 | 0.8802 | 0.8539 | 0.8668 | 0.9836 |
| 0.0264 | 2.0 | 5808 | 0.0711 | 0.8777 | 0.8594 | 0.8684 | 0.9843 |
| 0.0161 | 3.0 | 8712 | 0.0787 | 0.8948 | 0.8622 | 0.8782 | 0.9852 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
threem/mysquadv2_8Jan22-finetuned-squad | 335fb8b9bb2da2f2c256c960bf5445ae5c79a224 | 2022-01-08T21:02:48.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | threem | null | threem/mysquadv2_8Jan22-finetuned-squad | 0 | null | transformers | 36,159 | Entry not found |
tiennvcs/bert-base-uncased-finetuned-infovqa | cf6ab4e7f56e3b93a2a91b782f153faa2d49270a | 2021-10-23T00:21:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/bert-base-uncased-finetuned-infovqa | 0 | null | transformers | 36,160 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-infovqa
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8276
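For reference, a minimal question-answering sketch is shown below. The question and context are illustrative plain text; in the original InfoVQA setting the context would typically be OCR output extracted from an infographic.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tiennvcs/bert-base-uncased-finetuned-infovqa")

result = qa(
    question="What was the total revenue in 2020?",
    context="Total revenue in 2020 was 4.2 million USD, up 12% from 2019.",
)
print(result["answer"], result["score"])
```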
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2765 | 0.23 | 1000 | 3.0678 |
| 2.9987 | 0.46 | 2000 | 2.9525 |
| 2.826 | 0.69 | 3000 | 2.7870 |
| 2.7084 | 0.93 | 4000 | 2.7051 |
| 2.1286 | 1.16 | 5000 | 2.9286 |
| 2.0009 | 1.39 | 6000 | 3.1037 |
| 2.0323 | 1.62 | 7000 | 2.8567 |
| 1.9905 | 1.85 | 8000 | 2.8276 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.8.0+cu101
- Datasets 1.11.0
- Tokenizers 0.10.3
|
tiennvcs/distilbert-base-uncased-finetuned-infovqa | 87d87c9534e45a152889f633979597abf2c14d89 | 2021-10-21T11:37:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/distilbert-base-uncased-finetuned-infovqa | 0 | null | transformers | 36,161 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.02 | 100 | 4.7706 |
| No log | 0.05 | 200 | 4.4399 |
| No log | 0.07 | 300 | 3.8175 |
| No log | 0.09 | 400 | 3.8306 |
| 3.3071 | 0.12 | 500 | 3.6480 |
| 3.3071 | 0.14 | 600 | 3.6451 |
| 3.3071 | 0.16 | 700 | 3.4974 |
| 3.3071 | 0.19 | 800 | 3.4686 |
| 3.3071 | 0.21 | 900 | 3.4703 |
| 3.5336 | 0.23 | 1000 | 3.3165 |
| 3.5336 | 0.25 | 1100 | 3.3634 |
| 3.5336 | 0.28 | 1200 | 3.3466 |
| 3.5336 | 0.3 | 1300 | 3.3411 |
| 3.5336 | 0.32 | 1400 | 3.2456 |
| 3.3593 | 0.35 | 1500 | 3.3257 |
| 3.3593 | 0.37 | 1600 | 3.2941 |
| 3.3593 | 0.39 | 1700 | 3.2581 |
| 3.3593 | 0.42 | 1800 | 3.1680 |
| 3.3593 | 0.44 | 1900 | 3.2077 |
| 3.2436 | 0.46 | 2000 | 3.2422 |
| 3.2436 | 0.49 | 2100 | 3.2529 |
| 3.2436 | 0.51 | 2200 | 3.2681 |
| 3.2436 | 0.53 | 2300 | 3.1055 |
| 3.2436 | 0.56 | 2400 | 3.0174 |
| 3.093 | 0.58 | 2500 | 3.0608 |
| 3.093 | 0.6 | 2600 | 3.0200 |
| 3.093 | 0.63 | 2700 | 2.9884 |
| 3.093 | 0.65 | 2800 | 3.0041 |
| 3.093 | 0.67 | 2900 | 2.9700 |
| 3.0087 | 0.69 | 3000 | 3.0993 |
| 3.0087 | 0.72 | 3100 | 3.0499 |
| 3.0087 | 0.74 | 3200 | 2.9317 |
| 3.0087 | 0.76 | 3300 | 3.0817 |
| 3.0087 | 0.79 | 3400 | 3.0035 |
| 2.9694 | 0.81 | 3500 | 3.0850 |
| 2.9694 | 0.83 | 3600 | 2.9948 |
| 2.9694 | 0.86 | 3700 | 2.9874 |
| 2.9694 | 0.88 | 3800 | 2.9202 |
| 2.9694 | 0.9 | 3900 | 2.9322 |
| 2.8277 | 0.93 | 4000 | 2.9195 |
| 2.8277 | 0.95 | 4100 | 2.8638 |
| 2.8277 | 0.97 | 4200 | 2.8809 |
| 2.8277 | 1.0 | 4300 | 2.8872 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
timslams666/DialoGPT-small-rick | 36fd2b23a143133cd7e5cab48ac420a80a2f2687 | 2021-10-07T14:33:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | timslams666 | null | timslams666/DialoGPT-small-rick | 0 | null | transformers | 36,162 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
tingtingyuli/wav2vec2-base-timit-demo-colab | c3f7ac2753409bbb66f10c33fc63e02f486c9a89 | 2021-12-21T22:26:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tingtingyuli | null | tingtingyuli/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 36,163 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4371
- Wer: 0.3402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6515 | 4.0 | 500 | 1.9481 | 0.9825 |
| 0.8007 | 8.0 | 1000 | 0.4364 | 0.4424 |
| 0.2559 | 12.0 | 1500 | 0.4188 | 0.3848 |
| 0.1483 | 16.0 | 2000 | 0.4466 | 0.3524 |
| 0.1151 | 20.0 | 2500 | 0.4492 | 0.3519 |
| 0.0971 | 24.0 | 3000 | 0.4568 | 0.3453 |
| 0.0765 | 28.0 | 3500 | 0.4371 | 0.3402 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
tknmsn/hiro | 5e5fe8a1e31b1024d51b3e68cf0e63ae919b6014 | 2022-02-08T08:23:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | tknmsn | null | tknmsn/hiro | 0 | null | transformers | 36,164 | ---
license: mit
---
|
tli8hf/robertabase-crf-conll2012 | 80bae49f499b8d3816e2d6b2703146ddb64cfc38 | 2021-05-20T22:31:59.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | tli8hf | null | tli8hf/robertabase-crf-conll2012 | 0 | 1 | transformers | 36,165 | Entry not found |
tli8hf/robertabase_snli | 0028eca2f222b8bc7b8d61853ddb1db6e943dd7c | 2020-11-04T05:42:29.000Z | [
"pytorch",
"transformerfornli",
"transformers"
] | null | false | tli8hf | null | tli8hf/robertabase_snli | 0 | null | transformers | 36,166 | Entry not found |
tli8hf/unqover-bert-base-uncased-squad | cd14480340a8d9e2b097ffce060ad9a334dbc943 | 2021-05-20T07:54:17.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-bert-base-uncased-squad | 0 | null | transformers | 36,167 | Entry not found |
tli8hf/unqover-bert-large-uncased-newsqa | cbfe03a219f6721e4eca85b23b67e0668e346024 | 2021-05-20T07:56:02.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-bert-large-uncased-newsqa | 0 | null | transformers | 36,168 | Entry not found |
tli8hf/unqover-distilbert-base-uncased-newsqa | 3aacdcb349218a7c63828e8ff7c65b56a2f52ed3 | 2020-10-19T22:41:55.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-distilbert-base-uncased-newsqa | 0 | null | transformers | 36,169 | Entry not found |
tli8hf/unqover-roberta-base-newsqa | cdd1a598ff34e18e22ec252431056528430a7399 | 2021-05-20T22:33:16.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-roberta-base-newsqa | 0 | null | transformers | 36,170 | Entry not found |
tli8hf/unqover-roberta-base-squad | 6cd2c99694171feb4e5f4b730d8b7e99f2846dee | 2021-05-20T22:34:19.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tli8hf | null | tli8hf/unqover-roberta-base-squad | 0 | null | transformers | 36,171 | Entry not found |
tlkh/code-byt5-large | dbf7ce17fc348f0b6f835a5816a2a59fa3485c5b | 2021-12-01T14:00:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tlkh | null | tlkh/code-byt5-large | 0 | null | transformers | 36,172 | Entry not found |
tlkh/program-synthesis-gpt-neo-1.3b | 50026849cfe13d5c2544471f2f6748501b16cbb7 | 2021-09-28T06:55:47.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | tlkh | null | tlkh/program-synthesis-gpt-neo-1.3b | 0 | null | transformers | 36,173 | Entry not found |
tlkh/t5_3B_fp16_untuned | 95a914516f02292649a910e54297861c0a7dbc99 | 2021-11-04T17:26:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tlkh | null | tlkh/t5_3B_fp16_untuned | 0 | null | transformers | 36,174 | Entry not found |
tlkh/t5_large_fp16_untuned | 7ed1f270fd8424de205141d2dfdf036074c02130 | 2021-11-04T14:07:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tlkh | null | tlkh/t5_large_fp16_untuned | 0 | null | transformers | 36,175 | Entry not found |
tmagajna/test | 674999ce57135b76dd75591f8f6f8e10ae96d9b0 | 2022-01-07T11:57:41.000Z | [
"pytorch",
"flair",
"token-classification"
] | token-classification | false | tmagajna | null | tmagajna/test | 0 | null | flair | 36,176 | ---
tags:
- flair
- token-classification
widget:
- text: "does this work"
---
## Test model |
tmills/clinical_tempeval_roberta-base | 2e194f49dc064fbabfc900590175090a7067e398 | 2022-03-24T03:34:16.000Z | [
"pytorch",
"cnlpt",
"transformers"
] | null | false | tmills | null | tmills/clinical_tempeval_roberta-base | 0 | null | transformers | 36,177 | Entry not found |
tngo/DialoGPT-small-HankHill | 9b0ab3a8cd5d3d0d17318c8e75c344e91ea99d25 | 2021-12-08T08:37:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | tngo | null | tngo/DialoGPT-small-HankHill | 0 | null | transformers | 36,178 | ---
tags:
- conversational
---
# Hank Hill ChatBot
This is an instance of microsoft/DialoGPT-small fine-tuned on a TV show character, Hank Hill from King of the Hill. The data comes from a CSV file of character lines from the first five seasons of the show. A portion of the data was updated to accurately reflect Hank's famous pronunciation of the word "what" as "hwhat".
## Issues
Occasionally the chatbot responds with just multiple '!' characters. It also frequently responds with "I'm not your buddy, pal" to uncomfortable or strange prompts. A fix for these known issues is still in progress.
Chat with the model:
```Python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("tngo/DialoGPT-small-HankHill")
model = AutoModelWithLMHead.from_pretrained("tngo/DialoGPT-small-HankHill")

# Let's chat for 4 lines
for step in range(4):
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    print("Hank Hill Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
tobiaslee/bert-6L-768H | 930fca29dc47c73f493584ed4f2fc22fe5aa1953 | 2021-05-20T08:00:41.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | tobiaslee | null | tobiaslee/bert-6L-768H | 0 | null | transformers | 36,179 | Entry not found |
tobiaslee/roberta-large-defteval-t6-st2 | 0af2060ce51896d14ae673562ffd7cef873b2c27 | 2021-06-27T08:16:59.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tobiaslee | null | tobiaslee/roberta-large-defteval-t6-st2 | 0 | null | transformers | 36,180 | Entry not found |
toiletwater/DialoGPT-medium-ironman | b1b2eca6f242dd97cf4eb812fb3a34fabbd04cf5 | 2021-11-27T03:00:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | toiletwater | null | toiletwater/DialoGPT-medium-ironman | 0 | null | transformers | 36,181 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
tom1804/hp_new | 39ffc04c2c446387376d97b1957f73ec672d9ec8 | 2021-06-20T15:38:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | tom1804 | null | tom1804/hp_new | 0 | null | transformers | 36,182 | ---
tags:
- conversational
---
# My Awesome Model |
tomascerejo12/DialoGPT-small-Rick | 081837a655b533c6d67bdf4ff98ba039601c7d30 | 2021-08-26T22:08:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | tomascerejo12 | null | tomascerejo12/DialoGPT-small-Rick | 0 | null | transformers | 36,183 | ---
tags:
- conversational
---
# Rick DialoGPT Model
tomato/electra-Question-answer | 0f100ca54d1922611ec1ff50a1a371a23bcac9e5 | 2021-06-03T18:52:15.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | tomato | null | tomato/electra-Question-answer | 0 | null | transformers | 36,184 | Entry not found |
tonoadisorn/wangchanberta-ner | c2bdaf73fd3886f87b6fc7d58adb42d7ffc8aa82 | 2022-02-15T07:04:11.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tonoadisorn | null | tonoadisorn/wangchanberta-ner | 0 | null | transformers | 36,185 | Entry not found |
tonyalves/wav2vec2-300m-teste4 | d5b303e79c01d50f6778b3bd202b972155de1bbf | 2022-01-09T22:57:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tonyalves | null | tonyalves/wav2vec2-300m-teste4 | 0 | null | transformers | 36,186 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-300m-teste4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-300m-teste4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3276
- Wer: 0.3489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.0237 | 0.49 | 100 | 4.2075 | 0.9792 |
| 3.313 | 0.98 | 200 | 3.0232 | 0.9792 |
| 2.9469 | 1.47 | 300 | 2.7591 | 0.9792 |
| 1.4217 | 1.96 | 400 | 0.8397 | 0.6219 |
| 0.5598 | 2.45 | 500 | 0.6085 | 0.5087 |
| 0.4507 | 2.94 | 600 | 0.4512 | 0.4317 |
| 0.2775 | 3.43 | 700 | 0.3839 | 0.3751 |
| 0.2047 | 3.92 | 800 | 0.3276 | 0.3489 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tpri/DialoGPT-small-pa | 93471bc777e03bc5312c8460bb5719fc04264ea6 | 2022-01-18T04:09:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | tpri | null | tpri/DialoGPT-small-pa | 0 | null | transformers | 36,187 | ---
tags:
- conversational
---
# Parry Bot DialoGPT Model
trangdieu/roberta-large-retrained-2-epochs | 1b8f99085c06be7f7d43fa0f91914055b7b14bc7 | 2021-06-12T19:45:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | trangdieu | null | trangdieu/roberta-large-retrained-2-epochs | 0 | null | transformers | 36,188 | Entry not found |
trig/DialoGPT-small-harrypotter | a2bd94778a33984e9084e75bf76b829ca23386d4 | 2021-08-28T17:27:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | trig | null | trig/DialoGPT-small-harrypotter | 0 | null | transformers | 36,189 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
trig/sokka-chatbot-test | f12e574232aec91178bafa5d614353b9acabb64b | 2021-08-28T18:58:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | trig | null | trig/sokka-chatbot-test | 0 | null | transformers | 36,190 | ---
tags:
- conversational
---
# Chatbot test with Sokka from ATLA
trisongz/biobert_large_cased | 153aeff7de5a41c0cf3ca597c5e3c3bb2f7d1280 | 2020-04-29T21:35:30.000Z | [
"pytorch",
"transformers"
] | null | false | trisongz | null | trisongz/biobert_large_cased | 0 | null | transformers | 36,191 | Entry not found |
trueto/medalbert-base-chinese | 9469a48b321e6739193f347eb46a721bb426b1a0 | 2021-03-26T05:29:51.000Z | [
"pytorch",
"albert",
"transformers"
] | null | false | trueto | null | trueto/medalbert-base-chinese | 0 | 1 | transformers | 36,192 | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".
## Evaluation Benchmarks
We constructed a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released Models
MedBERT and MedAlbert were obtained by pre-training BERT and ALBERT models on a 650-million-character corpus of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, training hyperparameters, and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
ttntran/DialoGPT-small-human | 88f7251ea8b30f007fd87e27fa2c806b78c50a7b | 2022-02-12T16:21:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ttntran | null | ttntran/DialoGPT-small-human | 0 | null | transformers | 36,193 | ---
tags:
- conversational
---
# Human GPT Model |
tuhailong/SimCSE-RoBRTa-wwm-ext | 74a3208c681cff0f8538c81258bca21abe89f202 | 2021-07-30T02:04:08.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | tuhailong | null | tuhailong/SimCSE-RoBRTa-wwm-ext | 0 | null | transformers | 36,194 | Entry not found |
tuhailong/SimCSE-electra-180g-small-generator | ff394261c13af73ac65c90b27a7d48af75a29273 | 2021-07-30T02:08:04.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | tuhailong | null | tuhailong/SimCSE-electra-180g-small-generator | 0 | null | transformers | 36,195 | Entry not found |
twdooley/breitbot | 745b89f42de48009e0ca8f7ae302b9c13012f58d | 2021-05-23T13:18:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | twdooley | null | twdooley/breitbot | 0 | null | transformers | 36,196 | # BreitBot
## Timothy W. Dooley
### GitHub
The GitHub repository for the project can be found [here](https://github.com/twdooley/election_news).
### Model
This model was trained on about 16,000 headlines from Breitbart.com spanning March 2019 to 11 November 2020. The purpose of this project was to better understand how strongly polarized news crafts a narrative through Natural Language Processing. The BreitBot model was specifically created to understand the 'clickbaity' nature of a Breitbart headline. Many of the results are 'reasonable' within the scope of Breitbart's production. I will leave it to the user to make further interpretation. The full project noted that over 70% of Breitbart's articles from month to month have a negative sentiment score. Subjectively, I believe this is shown through the headlines generated.
### Training
BreitBot is fine-tuned from GPT-2 on about 16,000 headlines. The maximum length allowed in the tokenizer was the length of the longest headline (~50 tokens). A huge credit goes to Richard Bownes, PhD, whose article ["Fine Tuning GPT-2 for Magic the Gathering Flavour Text Generation"](https://medium.com/swlh/fine-tuning-gpt-2-for-magic-the-gathering-flavour-text-generation-3bafd0f9bb93) provided incredible direction and help in training this model. It was trained using a GPU on Google Colab. |
tyoc213/wav2vec2-large-xlsr-nahuatl | 71c1843952f21227bc5d97d19e31a42dd8065a19 | 2021-04-07T02:59:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nah specifically ncj",
"dataset:created a new dataset based on https://www.openslr.org/92/",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tyoc213 | null | tyoc213/wav2vec2-large-xlsr-nahuatl | 0 | 1 | transformers | 36,197 |
---
language: nah specifically ncj
datasets:
- created a new dataset based on https://www.openslr.org/92/
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Nahuatl XLSR Wav2Vec 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 69.11
---
# Wav2Vec2-Large-XLSR-53-ncj/nah
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Nahuatl, specifically of the North of Puebla (ncj), using a derivative of [SLR92](https://www.openslr.org/92/) and some samples of the `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice).
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") # TODO: publish nahuatl_slr92_by_sentence
processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Nahuatl, specifically of the North of Puebla (ncj), test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "{lang_id}", split="test") # TODO: publish nahuatl_slr92_by_sentence
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\"\“\%\‘\”\�\(\)\-]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 50.95 %
## Training
Training used a derivative of [SLR92](https://www.openslr.org/92/) (to be published soon), together with some samples from the `es` and `de` subsets of [Common Voice](https://huggingface.co/datasets/common_voice).
The script used for training can be found in [less60wer.ipynb](./less60wer.ipynb).
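The notebook itself is not reproduced in this card, but fine-tuning followed the usual XLSR-53 CTC recipe. The sketch below is only an illustration of that recipe: the hyperparameter values, output path, and Trainer wiring are assumptions, not values taken from `less60wer.ipynb`.
```python
# Minimal sketch of the standard XLSR-53 CTC fine-tuning recipe (illustration only:
# the hyperparameter values, output path and Trainer wiring below are assumptions,
# not values taken from less60wer.ipynb).
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, TrainingArguments

processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    mask_time_prob=0.05,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()  # keep the convolutional feature encoder frozen

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-nahuatl",  # hypothetical output path
    group_by_length=True,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=30,
    fp16=True,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=3e-4,
    warmup_steps=500,
)
# A Trainer is then built from these arguments together with a CTC padding
# data collator and the prepared train/eval splits, and trainer.train() is run.
```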
|
tyoyo/t5-base-TEDxJP-1body-1context | e1a95d19c7a3a5320518d5f5c085aab52050218d | 2021-12-05T20:01:50.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:te_dx_jp",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | tyoyo | null | tyoyo/t5-base-TEDxJP-1body-1context | 0 | null | transformers | 36,198 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-1body-1context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-1body-1context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set (a short `jiwer` sketch after the list illustrates these word-level measures):
- Loss: 0.5061
- Wer: 0.1990
- Mer: 0.1913
- Wil: 0.2823
- Wip: 0.7177
- Hits: 55830
- Substitutions: 6943
- Deletions: 3598
- Insertions: 2664
- Cer: 0.1763
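The WER/MER/WIL/WIP and hits/substitutions/deletions/insertions figures above match the measures reported by the `jiwer` package. The snippet below is a small illustration of how such numbers can be computed; it assumes `jiwer` (2.x API) was the metric backend, which this card does not state explicitly, and uses placeholder sentences. CER is computed analogously at the character level.
```python
# Illustration only: computing the word-level measures listed above with jiwer (2.x API).
# It is an assumption that jiwer was the backend behind this card's numbers;
# the sentences below are placeholders, pre-segmented into space-separated words.
import jiwer

references = ["今日 は いい 天気 です"]
hypotheses = ["今日 いい 天気 です よ"]

measures = jiwer.compute_measures(references, hypotheses)
print(measures["wer"], measures["mer"], measures["wil"], measures["wip"])
print(measures["hits"], measures["substitutions"],
      measures["deletions"], measures["insertions"])
```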
## Model description
More information needed
## Intended uses & limitations
More information needed
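The exact input format expected by this `1body-1context` variant (how the body utterance and its context are concatenated) is not documented here, so the snippet below is only a generic seq2seq inference sketch for this checkpoint; the input sentence is a placeholder, and it assumes the tokenizer was uploaded alongside the model (otherwise load it from `sonoisa/t5-base-japanese`).
```python
# Generic seq2seq inference sketch for this checkpoint (the exact "1body-1context"
# input format is not documented in this card; the text below is a placeholder).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tyoyo/t5-base-TEDxJP-1body-1context")
model = AutoModelForSeq2SeqLM.from_pretrained("tyoyo/t5-base-TEDxJP-1body-1context")

text = "えーっと今日はですね音声認識の話をします"  # placeholder ASR-style input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```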
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
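As a rough illustration, these settings correspond to a `Seq2SeqTrainingArguments` configuration along the following lines; this is a sketch only, not the original training script, and the output path is hypothetical.
```python
# Sketch of how the listed hyperparameters map onto Seq2SeqTrainingArguments
# (illustration only; the output path is hypothetical and this is not the original script).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-base-TEDxJP-1body-1context",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so they are not set explicitly.
)
```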
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.7277 | 1.0 | 746 | 0.5799 | 0.2384 | 0.2256 | 0.3188 | 0.6812 | 54323 | 7170 | 4878 | 3777 | 0.2371 |
| 0.6278 | 2.0 | 1492 | 0.5254 | 0.2070 | 0.1997 | 0.2905 | 0.7095 | 55045 | 6885 | 4441 | 2412 | 0.1962 |
| 0.5411 | 3.0 | 2238 | 0.5076 | 0.2022 | 0.1950 | 0.2858 | 0.7142 | 55413 | 6902 | 4056 | 2463 | 0.1805 |
| 0.53 | 4.0 | 2984 | 0.5020 | 0.1979 | 0.1911 | 0.2814 | 0.7186 | 55599 | 6849 | 3923 | 2362 | 0.1761 |
| 0.5094 | 5.0 | 3730 | 0.4999 | 0.1987 | 0.1915 | 0.2828 | 0.7172 | 55651 | 6944 | 3776 | 2465 | 0.1742 |
| 0.4783 | 6.0 | 4476 | 0.5016 | 0.1985 | 0.1914 | 0.2826 | 0.7174 | 55684 | 6947 | 3740 | 2490 | 0.1753 |
| 0.4479 | 7.0 | 5222 | 0.5035 | 0.1976 | 0.1905 | 0.2819 | 0.7181 | 55726 | 6961 | 3684 | 2468 | 0.1733 |
| 0.4539 | 8.0 | 5968 | 0.5022 | 0.1967 | 0.1896 | 0.2807 | 0.7193 | 55795 | 6938 | 3638 | 2477 | 0.1729 |
| 0.4632 | 9.0 | 6714 | 0.5034 | 0.1991 | 0.1913 | 0.2824 | 0.7176 | 55844 | 6942 | 3585 | 2687 | 0.1758 |
| 0.4201 | 10.0 | 7460 | 0.5061 | 0.1990 | 0.1913 | 0.2823 | 0.7177 | 55830 | 6943 | 3598 | 2664 | 0.1763 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
tyqiangz/xlm-roberta-base-finetuned-chaii | 1dc91eb2daaec34a85552996973fdade3dfac1db | 2021-08-17T13:48:43.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | question-answering | false | tyqiangz | null | tyqiangz/xlm-roberta-base-finetuned-chaii | 0 | null | transformers | 36,199 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: xlm-roberta-base-finetuned-chaii
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-chaii
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
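No usage details are provided, but since this is an extractive question-answering checkpoint it can in principle be queried through the standard `pipeline` API. The sketch below assumes the tokenizer was uploaded with the model; the question and context strings are placeholders, not examples from the training data.
```python
# Generic extractive-QA inference sketch for this checkpoint
# (the question and context strings are placeholders, not training examples).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="tyqiangz/xlm-roberta-base-finetuned-chaii",
    tokenizer="tyqiangz/xlm-roberta-base-finetuned-chaii",
)

result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of xlm-roberta-base on the chaii dataset.",
)
print(result["answer"], result["score"])
```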
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.92 | 1.0 | 899 | 0.4482 |
| 0.8055 | 2.0 | 1798 | 0.3225 |
| 0.7485 | 3.0 | 2697 | 0.4651 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|