modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jj-co/gtr-t5-base | feaa8f3ea9278066ecf6777ba135beb425ea5c8c | 2022-02-24T19:57:08.000Z | [
"pytorch",
"t5",
"en",
"arxiv:2112.07899",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | jj-co | null | jj-co/gtr-t5-base | 3 | null | sentence-transformers | 21,900 | ---
pipeline_tag: feature-extraction
language: en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/gtr-t5-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space. The model was specifically trained for the task of semantic search.
This model was converted from the TensorFlow model [gtr-base-1](https://tfhub.dev/google/gtr/gtr-base/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The TF Hub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-base model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
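Because the model was trained for semantic search, embeddings are usually compared with cosine similarity. The snippet below is a minimal sketch of that workflow using `sentence_transformers.util.cos_sim`; the query and corpus strings are illustrative placeholders, not part of the original card.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/gtr-t5-base')

# Illustrative query and corpus; replace with your own data
query = "How do dense retrievers rank documents?"
corpus = [
    "Dense retrievers encode queries and documents as vectors and rank them by similarity.",
    "The weather in Paris is mild in spring.",
]

# Encode into 768-dimensional vectors
query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query
scores = util.cos_sim(query_emb, corpus_emb)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}\t{sentence}")
```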
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-base)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899)
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2 | cb6899df22cabe4523e25b8fa62b6f7b6b56b9b4 | 2022-02-24T20:38:56.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2 | 3 | null | transformers | 21,901 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
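For reference, the hyperparameters listed above map roughly onto the standard `transformers.TrainingArguments` API as sketched below. The author's actual training script is not included in this card, so this is only an illustrative reconstruction; the `output_dir` value is a placeholder.
```python
from transformers import TrainingArguments

# Hedged sketch: the reported hyperparameters expressed as TrainingArguments.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the library defaults, so they
# are not set explicitly here.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=200,  # training_steps: 200
)
```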
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4 | ebb975b2d45078c3a3fbbb151ec33416fab14326 | 2022-02-24T22:24:38.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4 | 3 | null | transformers | 21,902 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10 | 396e21c38294922a5cc4988448f3999005e3b629 | 2022-02-24T23:09:57.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10 | 3 | null | transformers | 21,903 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mohamed-illiyas/wav2vec-malayalam-checkpoint | 6945a1c6bd11fc2172d01a71766f28f1232eb9c4 | 2022-02-25T09:24:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mohamed-illiyas | null | mohamed-illiyas/wav2vec-malayalam-checkpoint | 3 | null | transformers | 21,904 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-malayalam-checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-malayalam-checkpoint
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
- Wer: 0.6608
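A minimal transcription sketch for this checkpoint is shown below, assuming it follows the standard Wav2Vec2 CTC interface and that a 16 kHz mono recording is available; the audio path and the use of `librosa` for loading are illustrative assumptions.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "mohamed-illiyas/wav2vec-malayalam-checkpoint"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path; the model expects 16 kHz mono audio
speech, _ = librosa.load("example_malayalam.wav", sr=16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```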
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6371 | 10.0 | 100 | 3.5200 | 1.0 |
| 3.3014 | 20.0 | 200 | 3.2092 | 1.0 |
| 1.2997 | 30.0 | 300 | 0.7134 | 0.8847 |
| 0.5078 | 40.0 | 400 | 0.5805 | 0.7841 |
| 0.3795 | 50.0 | 500 | 0.5604 | 0.7289 |
| 0.2809 | 60.0 | 600 | 0.5962 | 0.7055 |
| 0.2381 | 70.0 | 700 | 0.6099 | 0.6938 |
| 0.2046 | 80.0 | 800 | 0.6237 | 0.6862 |
| 0.1826 | 90.0 | 900 | 0.6204 | 0.6755 |
| 0.1627 | 100.0 | 1000 | 0.6335 | 0.6751 |
| 0.1453 | 110.0 | 1100 | 0.6446 | 0.6739 |
| 0.1359 | 120.0 | 1200 | 0.6277 | 0.6648 |
| 0.1274 | 130.0 | 1300 | 0.6356 | 0.6573 |
| 0.1189 | 140.0 | 1400 | 0.6417 | 0.6601 |
| 0.1146 | 150.0 | 1500 | 0.6457 | 0.6608 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8 | 46a0868726ec1923001646fcdaeee1c37670779b | 2022-02-25T04:58:07.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8 | 3 | null | transformers | 21,905 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10 | d841bcf5d4d53efe4d81c6f851ef289461d8a685 | 2022-02-25T05:13:42.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10 | 3 | null | transformers | 21,906 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0 | 3f04a9c71313a84e626db1fa6a8198af2f7dc7a6 | 2022-02-25T05:30:55.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0 | 3 | null | transformers | 21,907 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
moshew/bert-tiny-aug-sst2-distilled_v2 | 3e45157e213e879afdfa02cdf0f67a3e625e8ac3 | 2022-02-28T08:56:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | moshew | null | moshew/bert-tiny-aug-sst2-distilled_v2 | 3 | null | transformers | 21,908 | Entry not found |
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8 | ef8a9059a50616c6438d3f663b810e97e8a91111 | 2022-02-25T06:39:41.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8 | 3 | null | transformers | 21,909 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-8 | b33a8a04835667dd7f8609873e339d71bd9f36d0 | 2022-02-25T08:21:44.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-8 | 3 | null | transformers | 21,910 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
khavitidala/finetuned-indobartv2-id-su | 084eeedac40186eebc4b6184572965e90b55a361 | 2022-02-25T09:23:22.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"id",
"dataset:Indo4B+",
"arxiv:2104.08200",
"transformers",
"indogpt",
"indobenchmark",
"indonlg",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | khavitidala | null | khavitidala/finetuned-indobartv2-id-su | 3 | null | transformers | 21,911 | ---
language: id
tags:
- indogpt
- indobenchmark
- indonlg
license: mit
inference: false
datasets:
- Indo4B+
---
# IndoBART-v2 Model fine-tuned version
A fine-tuned version of IndoBART-v2 for Indonesian-to-Sundanese (id->su) machine translation, trained with the default hyperparameters from the IndoBART paper.
By Ryan Abdurohman
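A rough usage sketch follows, assuming the checkpoint loads through the generic transformers seq2seq API. Note that the IndoNLG toolkit ships its own tokenizer for IndoBART, so the exact tokenizer class and any required language tokens may differ in practice; the input sentence is only an illustration.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "khavitidala/finetuned-indobartv2-id-su"
# Assumption: the generic Auto classes resolve this checkpoint; the IndoNLG
# toolkit's own tokenizer may be required instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Indonesian -> Sundanese
text = "Saya sedang belajar bahasa Sunda."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```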
# IndoBART-v2 Model
[IndoBART-v2](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-4 | df07ce9ae11c43473b916921abb037156ef71e69 | 2022-02-25T09:28:29.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-4 | 3 | null | transformers | 21,912 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-2 | d1b8d28bc94d2c4530ea19d560d5a95e68ba3525 | 2022-02-25T12:34:14.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-2 | 3 | null | transformers | 21,913 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-128-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-128-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-8 | ce648987e45302079102ead795cead1f7bbd8394 | 2022-02-25T13:25:47.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-8 | 3 | null | transformers | 21,914 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-128-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-128-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-10 | fea17e9a6a2022f84d8f15cef29f63505a462245 | 2022-02-25T13:42:57.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-10 | 3 | null | transformers | 21,915 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-128-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-128-finetuned-squad-seed-10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Davlan/xlm-roberta-base-finetuned-english | 0d42740b94b9bd52bb1c3ad206c5ca14d272a7da | 2022-02-25T15:31:51.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-english | 3 | null | transformers | 21,916 | ---
license: apache-2.0
---
|
anas-awadalla/roberta-base-few-shot-k-512-finetuned-squad-seed-8 | 4b6d4afaa4928f61a4906c1cd6f9d61a4fb2b738 | 2022-02-25T16:49:04.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-512-finetuned-squad-seed-8 | 3 | null | transformers | 21,917 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-512-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-512-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-0 | f2bdd8e2700d5bbdacd7fbde7baf876686ce8303 | 2022-02-25T17:25:37.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-0 | 3 | null | transformers | 21,918 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-10 | 6eab54e14c587142e7160df6903ce73aa52a4cd5 | 2022-02-25T19:01:18.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-10 | 3 | null | transformers | 21,919 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-1024-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-1024-finetuned-squad-seed-10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-4 | 021a822a556201310b2e184c999124902dc37ee5 | 2022-02-25T19:44:04.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-4 | 3 | null | transformers | 21,920 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6 | ad55a08a06bcb700022ef9b177193316f89ff975 | 2022-02-25T19:58:15.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6 | 3 | null | transformers | 21,921 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8 | f9063681b66e04181e9fa4e45b25a6104102c7ec | 2022-02-25T20:13:14.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8 | 3 | null | transformers | 21,922 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6 | e7803d9577e9493d45093c95d14e3e114836c512 | 2022-02-25T22:58:38.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6 | 3 | null | transformers | 21,923 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-6
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-4 | e8534d4fcd753b09a0873c58f825e4216124ba2b | 2022-02-26T04:23:41.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-4 | 3 | null | transformers | 21,924 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8 | eb17086ab57cd94f079915f7eafd36e141452838 | 2022-02-26T04:54:14.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8 | 3 | null | transformers | 21,925 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0 | d40f456ca543eaaefaea512e26556751d0bb5dba | 2022-02-26T05:24:05.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0 | 3 | null | transformers | 21,926 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8 | 4b0b5e913cc1bdb7646a359b3fda9650edc8785e | 2022-02-26T07:53:21.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8 | 3 | null | transformers | 21,927 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0 | 00bd0ea2820d1d22d99c293e0ee0f953c3d450ad | 2022-02-26T08:25:44.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0 | 3 | null | transformers | 21,928 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6 | e0ce9e3c323fbf7f6587afe37b6e0ba87d9521c7 | 2022-02-26T09:16:54.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6 | 3 | null | transformers | 21,929 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8 | 0052f83b8dcc4cb79d17bbc26827fe5d84b738a9 | 2022-02-26T09:30:48.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8 | 3 | null | transformers | 21,930 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
RobW/distilbert-base-cased-finetuned-chunk | 54c83f6503adf62749ea723bc2c2a7538ffa954f | 2022-02-26T10:00:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | RobW | null | RobW/distilbert-base-cased-finetuned-chunk | 3 | null | transformers | 21,931 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-cased-finetuned-chunk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-chunk
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5180
- Precision: 0.8615
- Recall: 0.9088
- F1: 0.8845
- Accuracy: 0.8239
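A minimal inference sketch using the standard token-classification pipeline is shown below; the input sentence is illustrative and the chunk label set depends on the (unknown) training data.
```python
from transformers import pipeline

chunker = pipeline(
    "token-classification",
    model="RobW/distilbert-base-cased-finetuned-chunk",
    aggregation_strategy="simple",  # merge sub-tokens into chunk spans
)

print(chunker("The quick brown fox jumps over the lazy dog."))
```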
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8391 | 1.0 | 878 | 0.5871 | 0.8453 | 0.9035 | 0.8734 | 0.8054 |
| 0.6134 | 2.0 | 1756 | 0.5447 | 0.8555 | 0.8983 | 0.8764 | 0.8142 |
| 0.5565 | 3.0 | 2634 | 0.5180 | 0.8615 | 0.9088 | 0.8845 | 0.8239 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Tokenizers 0.10.3
|
chitanda/merit-roberta-large-v2 | fa2a4c649e8c5f990883545109931941306f7e34 | 2022-02-26T12:56:39.000Z | [
"pytorch",
"roberta",
"transformers",
"license:mit"
] | null | false | chitanda | null | chitanda/merit-roberta-large-v2 | 3 | null | transformers | 21,932 | ---
license: mit
---
|
cnicu/led-booksum | 74efa931992ced2a7aedeaa97680f59b4fc5e3cb | 2022-02-28T12:12:55.000Z | [
"pytorch",
"led",
"text2text-generation",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | cnicu | null | cnicu/led-booksum | 3 | null | transformers | 21,933 | ---
license: mit
tags:
- summarization
datasets:
- kmfoda/booksum
---
|
cnicu/pegasus-xsum-booksum | c99a82b747cb555172305922f27326ca6c1e9a52 | 2022-02-26T22:13:52.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | cnicu | null | cnicu/pegasus-xsum-booksum | 3 | null | transformers | 21,934 | ---
license: mit
---
|
nsi319/distilbert-base-uncased-finetuned-app | 0c04e35247420b8be70088d1b15897fcac0a25f3 | 2022-02-27T10:56:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"mobile app descriptions",
"playstore",
"license:mit"
] | text-classification | false | nsi319 | null | nsi319/distilbert-base-uncased-finetuned-app | 3 | null | transformers | 21,935 | ---
language: "en"
thumbnail: "https://huggingface.co/nsi319"
tags:
- distilbert
- pytorch
- text-classification
- mobile app descriptions
- playstore
license: "mit"
inference: true
---
# Mobile App Classification
## Model description
DistilBERT is a transformer model, smaller and faster than BERT, which was pre-trained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher.
The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model is fine-tuned to classify a mobile app description into one of **6 Play Store categories**.
Trained on 9,000 English app descriptions and their associated categories from [Google Play](https://play.google.com/store/apps).
## Fine-tuning
The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best evaluation f1 score achieved by the model was 0.9034534096919489, found after 4 epochs. The accuracy of the model on the test set was 0.9033.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("nsi319/distilbert-base-uncased-finetuned-app")
model = AutoModelForSequenceClassification.from_pretrained("nsi319/distilbert-base-uncased-finetuned-app")
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
classifier("Disney+ has something for everyone and every mood, all in one place. With endless entertainment from Disney, Pixar, Marvel, Star Wars, National Geographic and Star, there's always something exciting to watch. Watch the latest releases, Original series and movies, classic films, throwbacks and so much more.")
# Expected output:
# [{'label': 'Entertainment', 'score': 0.9014402031898499}]
```
## Limitations
The training data consists of apps from six Play Store categories, namely Education, Entertainment, Productivity, Sports, News & Magazines, and Photography.
|
Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa | 353604b90a581af2f80a49c873254b3b6f8330f6 | 2022-02-27T14:33:15.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"arxiv:2111.05754",
"transformers",
"autotrain_compatible"
] | question-answering | false | Intel | null | Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa | 3 | null | transformers | 21,936 | ---
language: en
---
# 80% 1x4 Block Sparse BERT-Base (uncased) Fine Tuned on SQuADv1.1
This model is the result of fine-tuning a Prune Once for All (Prune OFA) 80% 1x4 block sparse pre-trained BERT-Base, combined with knowledge distillation.
This model yields the following results on SQuADv1.1 development set:<br>
`{"exact_match": 81.2867, "f1": 88.4735}`
For further details see our paper, [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), and our open source implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
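A minimal extractive question-answering sketch with the standard pipeline API is given below; the question and context are illustrative placeholders.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/bert-base-uncased-squadv1.1-sparse-80-1x4-block-pruneofa",
)

result = qa(
    question="What do extractive QA models predict?",
    context="Extractive question answering models predict the span of the context that answers the question.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```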
|
cyl/adapter_t5-3b_stsb | 82e54e73adf6ab3a414842887295559aa8ff41e2 | 2022-02-27T14:38:57.000Z | [
"pytorch",
"transformers"
] | null | false | cyl | null | cyl/adapter_t5-3b_stsb | 3 | null | transformers | 21,937 | Entry not found |
MUNasir/umsuka-en-zu | 083adeed1e0ecc9a6cf9f0c34385c45860cce1d7 | 2022-03-01T17:28:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | MUNasir | null | MUNasir/umsuka-en-zu | 3 | null | transformers | 21,938 | #### Languages:
- Source language: English
- Target language: isiZulu
#### Model Details:
- Model: Transformer
- Architecture: MarianMT
- Pre-processing: normalization + SentencePiece
#### Pre-trained Model:
- https://huggingface.co/Helsinki-NLP/opus-mt-en-xh
#### Corpus:
- Umsuka English-isiZulu Parallel Corpus (https://zenodo.org/record/5035171#.Yh5NIOhBy3A)
#### Benchmark:
| Benchmark | Train | Test |
|-----------|-------|-------|
| Umsuka | 17.61 | 13.73 |
#### GitHub:
- https://github.com/umair-nasir14/Geographical-Distance-Is-The-New-Hyperparameter
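#### Usage:
A minimal translation sketch, assuming the checkpoint works with the standard MarianMT classes; the English sentence is a placeholder.
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "MUNasir/umsuka-en-zu"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# English -> isiZulu
text = "Good morning, how are you today?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```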
|
cassandra-themis/test_tcp_ca | 066be05b78099dd8d0a536823f5da0fdd50f2774 | 2022-02-27T20:08:32.000Z | [
"pytorch",
"camembert",
"token-classification",
"dataset:cassandra-themis/ner-tcp-ca",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | cassandra-themis | null | cassandra-themis/test_tcp_ca | 3 | null | transformers | 21,939 | ---
tags:
- generated_from_trainer
datasets:
- cassandra-themis/ner-tcp-ca
model-index:
- name: camembert-ner-tcp-ca
results: []
widget:
- text: "RÉPUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL D'AIX EN PROVENCE\n\n\n\n10e Chambre\n\n\n\nARRÊT MIXTE\n\nDU 14 JUIN 2006\n\n\n\nNo/2006\n\n\n\n\n\nRôle No 99/09967\n\n\n\n\n\nJohn X...\n\nArlette Y... épouse X...\n\nPatrick X...\n\n\n\n\n\nC/\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS\n\n\n\n\n\nDécision déférée à la Cour :\n\n\n\nDécision rendue le 20 Avril 1999 par la Commission d'Indemnisation des Victimes d'Infractions Pénales près le Tribunal de Grande Instance de MARSEILLE, enregistrée\n\nau répertoire général sous le no 98/00491.\n\n\n\n\n\nAPPELANTS\n\n\n\nMonsieur John X..., décédé\n\nné le 17 Mars 1973 à MARSEILLE (13000), demeurant ... - 13000 MARSEILLE\n\nreprésenté par la SCP COHEN - GUEDJ, avoués à la Cour\n\n\n\nMadame Arlette Y... épouse X...\n\nprise es qualité d'héritière de John X..., décédé le 25/11/2001\n\nnée le 18 Août 1951 à SAINT JEAN DE COLE (DORDOGNE), ... - 13012 MARSEILLE\n\nreprésentée par la SCP COHEN - GUEDJ, avoués à la Cour,\n\nassistée de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\nMonsieur Patrick X...\n\npris en sa qualité d'héritier de John X..., décédé le 25/11/2001\n\nné le 12 Juin 1951 à MARSEILLE (BOUCHES DU RHÔNE), demeurant ... - 13012 MARSEILLE\n\nreprésenté par la SCP COHEN - GUEDJ, avoués à la Cour,\n\nassisté de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\n\n\nINTIME\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS article L 422.1 du Code des Assurances, géré par le Fonds de Garantie contre les Accidents de Circulation et de Chasse, dont le siège social est sis 64 rue Defrance 94300 VINCENNES, 39 bd Vincent Delpuech - les Bureaux du Méditerranée - 13255 MARSEILLE\n\nreprésenté par la SCP GIACOMETTI - DESOMBRE, avoués à la Cour,\n\nassisté de Me Alain TUILLIER, avocat au barreau d'AIX EN PROVENCE\n\n\n\n\n\nCOMPOSITION DE LA COUR\n\n\n\nL'affaire a été débattue le 12 Avril 2006 en audience publique. Conformément à l'article 785 du Nouveau Code de Procédure Civile, Mr RAJBAUT, Conseiller a fait un rapport oral de l'affaire à l'audience avant les plaidoiries.\n\n\n\nLa Cour était composée de :\n\n\n\nMadame Elisabeth VIEUX, Présidente\n\nMonsieur Benjamin RAJBAUT, Conseiller\n\nMadame Dominique KLOTZ, Conseiller\n\n\n\n\n\nqui en ont délibéré\n\n\n\nGreffier lors des débats : Madame Geneviève JAUFFRES.\n\n\n\nLes parties ont été avisées que le prononcé public de la décision aura lieu par mise à disposition au greffe le 14 Juin 2006..\n\n\n\nMINISTÈRE PUBLIC :\n\nAuquel l'affaire a été régulièrement communiquée.\n\n"
example_title: "Exemple 1"
- text: "RÉPUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nPhD / BLL\n\n\n\nNuméro / 06\n\n\n\nCOUR D'APPEL DE PAU\n\n2ème CH-Section 1\n\n\n\nARRÊT DU 19 janvier 2006\n\n\n\nDossier : 04 / 03078\n\n\n\nNature affaire :\n\n\n\nAutres demandes relatives à un bail d'habitation ou à un bail professionnel\n\n\n\nAffaire :\n\n\n\nBerthe X... épouse Y...\n\n\n\nC /\n\n\n\nDominique Z...,\n\nCorinne X...\n\n\n\nRÉPUBLIQUE FRANÇAISE\n\n\n\nAU NOM DU PEUPLE FRANÇAIS\n\n\n\nA R R Ê T\n\n\n\nprononcé par Monsieur GRANGER, conseiller,\n\nen vertu de l'article 452 du Nouveau Code de Procédure Civile,\n\n\n\nassisté de Monsieur LASBIATES, Greffier,\n\n\n\nà l'audience publique du 19 janvier 2006\n\ndate indiquée à l'issue des débats.\n\n\n\n* * * * *\n\n\n\nAPRES DÉBATS\n\n\n\nà l'audience publique tenue le 24 Novembre 2005, devant :\n\n\n\nMonsieur DARRACQ, magistrat chargé du rapport,\n\n\n\nassisté de Monsieur LASBIATES, greffier présent à l'appel des causes,\n\n\n\nMonsieur DARRACQ, en application des articles 786 et 910 du Nouveau Code de Procédure Civile et à défaut d'opposition a tenu l'audience pour entendre les plaidoiries et en a rendu compte à la Cour composée de :\n\n\n\nMonsieur PETRIAT, Conseiller faisant fonction de Président, par suite de l'empêchement légitime de tous les titulaires et des magistrats désignés par ordonnance et se trouvant le magistrat du siège présent le plus ancien dans l'ordre de nomination à la Cour\n\n\n\nMonsieur GRANGER, Conseiller\n\nMonsieur DARRACQ, Vice-Président placé, désigné par ordonnance du 12 septembre 2005\n\n\n\nqui en ont délibéré conformément à la loi.\n\n\n\ndans l'affaire opposant :\n\n\n\nAPPELANTE :\n\n\n\nMadame Berthe X... épouse Y...\n\nnée le 13 Juin 1942 à ARCANGUES (64)\n\nde nationalité française\n\n...\n\n...\n\n12500 ESPALION\n\n\n\nreprésentée par la S. C. P. LONGIN C. ET P., avoués à la Cour\n\nassistée de Maître BLAZY-ANDRIEU, avocat au barreau de BAYONNE\n\n\n\nINTIMES :\n\n\n\nMonsieur Dominique Camille Z...\n\nné le 13 juin 1954 à Chatou (78)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\nMadame Corinne X...\n\nnée le 3 juillet 1969 à Bidart (64)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\n(bénéficient d'une aide juridictionnelle Totale numéro 2004 / 006320 du 24 / 02 / 2005 accordée par le bureau d'aide juridictionnelle de PAU)\n\n\n\nreprésentés par la S. C. P. F. PIAULT / M. LACRAMPE-CARRAZE, avoués à la Cour\n\nassistés de Maître FOURGEAU, avocat au barreau de BAYONNE\n\n\n\nsur appel de la décision\n\nen date du 24 AOUT 2004\n\nrendue par le TRIBUNAL D'INSTANCE DE BIARRITZ"
example_title: "Exemple 2"
- text: "RÉPUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL DE DOUAI\n\n\n\nTROISIÈME CHAMBRE\n\n\n\nARRÊT DU 26 / 01 / 2006\n\n\n\nBAUX RURAUX\n\n\n\nNo RG : 05 / 04854 jonction avec dossier RG No 05 / 04858\n\n\n\nTribunal paritaire des baux ruraux d'AVESNES SUR HELPE\n\ndu 27 Juillet 2005 jugements no 99 / 000010 et 04 / 000006\n\n\n\nAPPELANTE\n\nMadame Marie-Noëlle X... épouse Y...\n\nDemeurant\n\n...\n\n59138 PONT SUR SAMBRE\n\n\n\nreprésentée par Me STERLILN de la SCP JP STERLIN-C STERLIN, avocats au barreau d'AMIENS\n\n\n\nINTIMÉS\n\nMonsieur Michel Z...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nreprésenté par Me VILLESECHE de la SCP ROFFIAEN-LE FUR-VILLESECHE, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMonsieur Avit X...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nreprésenté par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMadame Marie-Christine X... épouse A...\n\nDemeurant\n\n...\n\n59750 FEIGNIES\n\n\n\nreprésentée par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Claire X... épouse B...\n\nDemeurant\n\n...\n\n59550 PRISCHES\n\n\n\nreprésentée par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Antoinette X... épouse C...\n\nDemeurant\n\n...\n\n59440 ST AUBIN\n\n\n\nreprésentée par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nCOMPOSITION DE LA COUR LORS DES DÉBATS ET DU DÉLIBÉRÉ\n\nMadame MERFELD, Président de chambre\n\nMadame CONVAIN, Conseiller\n\nMadame PAOLI, Conseiller\n\n---------------------\n\nGREFFIER LORS DES DÉBATS : Madame GAMEZ\n\n"
example_title: "Exemple 3"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-ner-tcp-ca
This model is a fine-tuned version of [cassandra-themis/camembert-base-juri](https://huggingface.co/cassandra-themis/camembert-base-juri) on the cassandra-themis/ner-tcp-ca full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
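## How to use
A minimal sketch for running the token classifier; the example sentence is adapted from the widget texts above, and the label set comes from the checkpoint's configuration.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "cassandra-themis/test_tcp_ca"

# Load the fine-tuned CamemBERT token-classification checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Group sub-word predictions into entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

print(ner("COUR D'APPEL D'AIX EN PROVENCE, arrêt du 14 juin 2006, Monsieur John X..., né le 17 mars 1973 à Marseille."))
```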
|
osanseviero/xlm-roberta-base-finetuned-panx-de | 5c71826bbb84fe551aa04b56369997c1df507b16 | 2022-02-27T21:34:59.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | osanseviero | null | osanseviero/xlm-roberta-base-finetuned-panx-de | 3 | null | transformers | 21,940 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8647022085959235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1344
- F1: 0.8647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2568 | 1.0 | 525 | 0.1596 | 0.8210 |
| 0.1279 | 2.0 | 1050 | 0.1368 | 0.8522 |
| 0.0814 | 3.0 | 1575 | 0.1344 | 0.8647 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.10.3
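## How to use
A minimal sketch for German named-entity recognition; the example sentence is illustrative, and PAN-X uses the PER / ORG / LOC label set.
```python
from transformers import pipeline

model_name = "osanseviero/xlm-roberta-base-finetuned-panx-de"

# Token classification with sub-word predictions merged into entity spans
ner = pipeline("token-classification", model=model_name, aggregation_strategy="simple")

print(ner("Angela Merkel besuchte gestern die Volkswagen-Werke in Wolfsburg."))
```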
|
danny911kr/tapas_simsiam_mlm_1 | f8129e19ee3b5cad8de3b254ba1fd80b5bce6f09 | 2022-02-28T02:27:31.000Z | [
"pytorch",
"tapas",
"feature-extraction",
"transformers"
] | feature-extraction | false | danny911kr | null | danny911kr/tapas_simsiam_mlm_1 | 3 | null | transformers | 21,941 | Entry not found |
Kevincp560/bart-large-cnn-finetuned-pubmed | 28458c3d4a69027ab90f140ffb1139c58aaa6a07 | 2022-02-28T19:04:22.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/bart-large-cnn-finetuned-pubmed | 3 | null | transformers | 21,942 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 40.4866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8416
- Rouge1: 40.4866
- Rouge2: 16.7472
- Rougel: 24.9831
- Rougelsum: 36.4002
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.932 | 1.0 | 4000 | 1.8110 | 38.1151 | 15.2255 | 23.4286 | 34.2521 | 141.8905 |
| 1.7001 | 2.0 | 8000 | 1.7790 | 39.8217 | 16.3042 | 24.649 | 35.831 | 142.0 |
| 1.5 | 3.0 | 12000 | 1.7971 | 40.6108 | 17.0446 | 25.1977 | 36.5556 | 141.9865 |
| 1.3316 | 4.0 | 16000 | 1.8106 | 40.0466 | 16.4851 | 24.7094 | 36.0998 | 141.9335 |
| 1.1996 | 5.0 | 20000 | 1.8416 | 40.4866 | 16.7472 | 24.9831 | 36.4002 | 142.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
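## How to use
A minimal summarization sketch; the input passage is a short illustrative biomedical snippet, not a PubMed article.
```python
from transformers import pipeline

model_name = "Kevincp560/bart-large-cnn-finetuned-pubmed"

# Abstractive summarization with the fine-tuned BART checkpoint
summarizer = pipeline("summarization", model=model_name)

article = (
    "Diabetes mellitus is a group of metabolic disorders characterized by chronic hyperglycemia. "
    "Long-term complications include retinopathy, nephropathy and an increased risk of cardiovascular disease. "
    "Lifestyle modification and pharmacological therapy remain the cornerstones of management."
)
print(summarizer(article, max_length=128, min_length=32, do_sample=False))
```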
|
RobW/longformer-base-4096-finetuned-chunk-3 | 8979f668754d74441022d3f6e1edeeed3a8bd7b7 | 2022-03-03T12:03:34.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | RobW | null | RobW/longformer-base-4096-finetuned-chunk-3 | 3 | null | transformers | 21,943 | Entry not found |
neal49/distilbert-sst2-1 | faaf755a00064fa83f836fcb7b81f6dfae471065 | 2022-03-01T05:11:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | neal49 | null | neal49/distilbert-sst2-1 | 3 | null | transformers | 21,944 | Entry not found |
armageddon/bert-base-uncased-squad2-covid-qa-deepset | 1cc87527719518b56fa4b202b2ee2bea89588e65 | 2022-02-28T19:18:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | armageddon | null | armageddon/bert-base-uncased-squad2-covid-qa-deepset | 3 | null | transformers | 21,945 | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_bert_base_uncased_squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_bert_base_uncased_squad2
This model is a fine-tuned version of [twmkn9/bert-base-uncased-squad2](https://huggingface.co/twmkn9/bert-base-uncased-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
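## How to use
A minimal extractive-QA sketch; the question/context pair is illustrative and not taken from the dataset.
```python
from transformers import pipeline

model_name = "armageddon/bert-base-uncased-squad2-covid-qa-deepset"

# SQuAD2-style model, so unanswerable questions are possible
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="How does the virus spread?",
    context="The virus spreads mainly through respiratory droplets produced when an infected person coughs or sneezes.",
    handle_impossible_answer=True,
)
print(result)
```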
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21 | 559f4dced8b9d17d0a7e379a3b3f0747a1436a8f | 2022-03-01T13:44:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21 | 3 | null | transformers | 21,946 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1212
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 15 | 0.1113 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 2.0 | 30 | 0.1069 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 3.0 | 45 | 0.0992 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 4.0 | 60 | 0.0938 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 5.0 | 75 | 0.0920 | 0.0 | 0.0 | 0.0 | 0.9752 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ghazikhanihamed/MembraneBERT | b0cdbe45184e639a833b412fb2d174682c850fd6 | 2022-03-01T13:48:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | ghazikhanihamed | null | ghazikhanihamed/MembraneBERT | 3 | null | transformers | 21,947 | ---
license: afl-3.0
---
|
batterydata/batteryscibert-cased-squad-v1 | 532c3f873374cc6a27fccda118ef64fec536fbc5 | 2022-03-03T20:29:14.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"transformers",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | batterydata | null | batterydata/batteryscibert-cased-squad-v1 | 3 | null | transformers | 21,948 | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BatterySciBERT-cased for QA
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.66,
"f1": 87.43,
```
Evaluated on the battery device dataset.
```
"precision": 65.09,
"recall": 84.56,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
batterydata/bert-base-cased-squad-v1 | 8b5fa824f9e3bc3e087ff5ea883088d22ee178c3 | 2022-03-03T19:54:26.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"transformers",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | batterydata | null | batterydata/bert-base-cased-squad-v1 | 3 | null | transformers | 21,949 | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BERT-base-cased for QA
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 2
base_LM_model = "bert-base-cased"
max_seq_len = 386
learning_rate = 5e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.30,
"f1": 88.58,
```
Evaluated on the battery device dataset.
```
"precision": 67.02,
"recall": 80.15,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
anasaqsme/distilbert-base-uncased-finetuned-squad | 3adbee4c348079c924b51dc1f56c0ed550e587c4 | 2022-03-13T08:15:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anasaqsme | null | anasaqsme/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 21,950 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
facebook/maskformer-swin-large-ade | 16ebe37986e532ec41e1102dea41270cf97d6e38 | 2022-04-04T16:02:08.000Z | [
"pytorch",
"maskformer",
"dataset:ade-20k",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-large-ade | 3 | null | transformers | 21,951 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- ade-20k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer (Swin-large backbone, ADE20k)
MaskFormer model trained on ADE20k semantic segmentation. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of the usual per-pixel classification: it predicts a set of binary masks, each associated with a single class label.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-large-ade")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-ade")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
BigSalmon/NEO125InformalToFormalLincoln | 78e9a8e5e46a2f8a7f566c860873d4a3dd0471fb | 2022-03-02T21:29:36.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/NEO125InformalToFormalLincoln | 3 | null | transformers | 21,952 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/NEO125InformalToFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/NEO125InformalToFormalLincoln")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` |
yoavgur/gpt2-bash-history-baseline2 | d9cc2a147a00036870a527d664a32615ddbd2ad4 | 2022-03-02T23:43:15.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | yoavgur | null | yoavgur/gpt2-bash-history-baseline2 | 3 | null | transformers | 21,953 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-bash-history-baseline2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-bash-history-baseline2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 158 | 1.8653 |
| No log | 2.0 | 316 | 1.7574 |
| No log | 3.0 | 474 | 1.6939 |
| 1.9705 | 4.0 | 632 | 1.6597 |
| 1.9705 | 5.0 | 790 | 1.6480 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
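## How to use
A minimal text-generation sketch; the prompt format (a partial shell command) is an assumption, since the card does not describe how the bash-history data was serialized.
```python
from transformers import pipeline

model_name = "yoavgur/gpt2-bash-history-baseline2"

# Sample a few command completions from the fine-tuned GPT-2 checkpoint
generator = pipeline("text-generation", model=model_name)

print(generator("cd /var/log && ", max_length=40, do_sample=True, num_return_sequences=3))
```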
|
Kevincp560/t5-base-finetuned-pubmed | 813ed7ddf9d60ed155eabd78f9afad1c3c96f4a1 | 2022-03-03T16:06:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/t5-base-finetuned-pubmed | 3 | null | transformers | 21,954 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: t5-base-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 9.3771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-pubmed
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6311
- Rouge1: 9.3771
- Rouge2: 3.7042
- Rougel: 8.4912
- Rougelsum: 9.0013
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.0957 | 1.0 | 4000 | 1.9006 | 8.6968 | 3.2473 | 7.9565 | 8.3224 | 19.0 |
| 2.0489 | 2.0 | 8000 | 1.8571 | 8.6877 | 3.2461 | 7.9311 | 8.2991 | 19.0 |
| 2.7345 | 3.0 | 12000 | 2.6112 | 9.585 | 3.0129 | 8.4729 | 9.1109 | 19.0 |
| 3.0585 | 4.0 | 16000 | 2.7222 | 9.7011 | 3.3549 | 8.6588 | 9.2646 | 19.0 |
| 2.9437 | 5.0 | 20000 | 2.6311 | 9.3771 | 3.7042 | 8.4912 | 9.0013 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
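## How to use
A minimal generation sketch; the `"summarize: "` task prefix follows the usual T5 convention and is assumed here, as is the illustrative input text.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Kevincp560/t5-base-finetuned-pubmed"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Standard T5 usage: prepend a task prefix, then generate with beam search
text = "summarize: " + (
    "Hypertension is a major modifiable risk factor for stroke and myocardial infarction. "
    "Blood pressure control through lifestyle changes and antihypertensive drugs reduces this risk."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```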
|
QuickRead/pegasus-reddit-16000 | 84b360634c463fd02f98d9ce832fca77205204a1 | 2022-03-05T03:57:20.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | QuickRead | null | QuickRead/pegasus-reddit-16000 | 3 | null | transformers | 21,955 | Entry not found |
Britain/DialoGPT-small-ZifBotTwoFixed | 9d0cada4c8bf8f299699cb08e121de4291d9e333 | 2022-03-05T03:43:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Britain | null | Britain/DialoGPT-small-ZifBotTwoFixed | 3 | null | transformers | 21,956 | ---
tags:
- conversational
---
# ZifBotTwoFixed |
Britain/DialoGPT-small-DanyBotThree | b6371fc288974d850fd914e36e65cde11065f6ea | 2022-03-05T05:50:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Britain | null | Britain/DialoGPT-small-DanyBotThree | 3 | null | transformers | 21,957 | ---
tags:
- conversational
---
# DanyBot |
fabianrausch/german-financial-statements-bert | 2ab7c15a64da71b35ca63bb37c54f1000e28582d | 2022-03-16T09:58:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | fabianrausch | null | fabianrausch/german-financial-statements-bert | 3 | null | transformers | 21,958 | ---
license: mit
language: de
---
# german-financial-statements-bert
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) using German financial statements.
It achieves the following results on the evaluation set:
- Loss: 1.2025
- Accuracy: 0.7376
- Perplexity: 3.3285
## Model description
Annual financial statements in Germany are published in the Federal Gazette and are freely accessible. The documents describe the entrepreneurial and in particular the financial situation of a company with reference to a reporting period. The german-financial-statements-bert model aims to provide a BERT model specifically for this domain.
## Training and evaluation data
The training was performed with 100,000 natural language sentences from annual financial statements. 50,000 of these sentences were taken unfiltered and randomly from 5,500 different financial statement documents, and another 50,000 were also taken randomly from 5,500 different financial statement documents, but this half was filtered so that only sentences referring to a financial entity were selected. Specifically, this means that the second half of the sentences contains an indicator for a reference to a financial entity (EUR, Euro, TEUR, €, T€). The evaluation was carried out with 20,000 sentences of the same origin and distribution.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
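## How to use
A minimal fill-mask sketch; the example sentence is an invented financial-statement style sentence, not taken from the training data.
```python
from transformers import pipeline

model_name = "fabianrausch/german-financial-statements-bert"

# Masked-language-model predictions for the [MASK] position
fill_mask = pipeline("fill-mask", model=model_name)

for prediction in fill_mask("Die Umsatzerlöse stiegen im Geschäftsjahr auf 10 Millionen [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```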
|
QuickRead/Reward_training_Pegasus_reddit | 1f61710411ee4ce73ca1efb267a1761cb696c282 | 2022-04-20T19:01:39.000Z | [
"pytorch",
"pegasus",
"feature-extraction",
"transformers"
] | feature-extraction | false | QuickRead | null | QuickRead/Reward_training_Pegasus_reddit | 3 | null | transformers | 21,959 | Entry not found |
maksym/bert-base-uncased-finetuned-swag | b2d0c56f82ffa02079891c119971882a774995a9 | 2022-03-05T23:45:29.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | maksym | null | maksym/bert-base-uncased-finetuned-swag | 3 | null | transformers | 21,960 | Entry not found |
Kevincp560/distilbart-cnn-12-6-finetuned-pubmed | 0dc1cd0a6d8c01147104ebe27b257a1443678da0 | 2022-03-06T22:33:03.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/distilbart-cnn-12-6-finetuned-pubmed | 3 | 1 | transformers | 21,961 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 40.0985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-pubmed
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9895
- Rouge1: 40.0985
- Rouge2: 16.5016
- Rougel: 24.8319
- Rougelsum: 36.0775
- Gen Len: 141.884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.1709 | 1.0 | 4000 | 2.0257 | 38.1012 | 15.112 | 23.4064 | 33.9373 | 141.9195 |
| 1.9495 | 2.0 | 8000 | 1.9593 | 39.529 | 16.1693 | 24.487 | 35.5238 | 141.9785 |
| 1.756 | 3.0 | 12000 | 1.9488 | 39.9623 | 16.5799 | 24.949 | 35.9194 | 141.8855 |
| 1.6032 | 4.0 | 16000 | 1.9732 | 39.672 | 16.1994 | 24.5996 | 35.7021 | 141.921 |
| 1.4817 | 5.0 | 20000 | 1.9895 | 40.0985 | 16.5016 | 24.8319 | 36.0775 | 141.884 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
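## How to use
A minimal summarization sketch; the abstract below is illustrative, and very long inputs should be truncated to the model's maximum input length.
```python
from transformers import pipeline

model_name = "Kevincp560/distilbart-cnn-12-6-finetuned-pubmed"

# Abstractive summarization with the distilled BART checkpoint
summarizer = pipeline("summarization", model=model_name)

abstract = (
    "Antibiotic resistance is rising to dangerously high levels worldwide. "
    "New resistance mechanisms are emerging and spreading, threatening the treatment of common infections."
)
print(summarizer(abstract, max_length=100, min_length=20, do_sample=False))
```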
|
cammy/bart-large-cnn-finetuned-weaksup-1000-pad-early-new | 4c3ef05e606524ff6e4c97d6b9237ddbbc3fe10e | 2022-03-06T17:51:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-1000-pad-early-new | 3 | null | transformers | 21,962 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000-pad-early-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad-early-new
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4896
- Rouge1: 29.4505
- Rouge2: 14.4038
- Rougel: 23.1757
- Rougelsum: 26.3813
- Gen Len: 66.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.154 | 1.0 | 1000 | 0.4255 | 27.2971 | 12.4331 | 20.851 | 23.9583 | 66.64 |
| 0.0806 | 2.0 | 2000 | 0.4896 | 29.4505 | 14.4038 | 23.1757 | 26.3813 | 66.55 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Britain/DialoGPT-small-DanyBotTwoNew | 905476a8d07f2714bb52d4f3d225c11d5d3d9ee7 | 2022-03-06T19:11:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Britain | null | Britain/DialoGPT-small-DanyBotTwoNew | 3 | null | transformers | 21,963 | ---
tags:
- conversational
---
# DanyBot |
armageddon/roberta-base-squad2-covid-qa-deepset | 80acbdd8ed68c0eeaedf51ca954e92028840a79c | 2022-02-28T22:34:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | armageddon | null | armageddon/roberta-base-squad2-covid-qa-deepset | 3 | null | transformers | 21,964 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_roberta-base-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-base-squad2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
squirro/distilroberta-base-squad_v2 | 5800a52e9e6511e2499fdbd8d9df0922b11da14b | 2022-06-29T08:53:58.000Z | [
"pytorch",
"tf",
"onnx",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | squirro | null | squirro/distilroberta-base-squad_v2 | 3 | 1 | transformers | 21,965 | ---
license: apache-2.0
language: en
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilroberta-base-squad_v2
results:
- task:
name: Question Answering
type: question-answering
dataset:
type: squad_v2 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: The Stanford Question Answering Dataset
args: en
metrics:
- type: eval_exact
value: 65.2405
- type: eval_f1
value: 68.6265
- type: eval_HasAns_exact
value: 67.5776
- type: eval_HasAns_f1
value: 74.3594
- type: eval_NoAns_exact
value: 62.91
- type: eval_NoAns_f1
value: 62.91
---
# distilroberta-base-squad_v2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad_v2 dataset.
## Model description
This model is fine-tuned on the extractive question answering task -- The Stanford Question Answering Dataset -- [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/).
For convenience this model is prepared to be used with the frameworks `PyTorch`, `Tensorflow` and `ONNX`.
## Intended uses & limitations
This model can handle mismatched question-context pairs. Make sure to specify `handle_impossible_answer=True` when using `QuestionAnsweringPipeline`.
__Example usage:__
```python
>>> from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained("squirro/distilroberta-base-squad_v2")
>>> tokenizer = AutoTokenizer.from_pretrained("squirro/distilroberta-base-squad_v2")
>>> qa_model = QuestionAnsweringPipeline(model, tokenizer)
>>> qa_model(
>>> question="What's your name?",
>>> context="My name is Clara and I live in Berkeley.",
>>> handle_impossible_answer=True # important!
>>> )
{'score': 0.9498472809791565, 'start': 11, 'end': 16, 'answer': 'Clara'}
```
## Training and evaluation data
Training and evaluation was done on [SQuAD2.0](https://huggingface.co/datasets/squad_v2).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Metric | Value |
|:-------------------------|-------------:|
| epoch | 3 |
| eval_HasAns_exact | 67.5776 |
| eval_HasAns_f1 | 74.3594 |
| eval_HasAns_total | 5928 |
| eval_NoAns_exact | 62.91 |
| eval_NoAns_f1 | 62.91 |
| eval_NoAns_total | 5945 |
| eval_best_exact | 65.2489 |
| eval_best_exact_thresh | 0 |
| eval_best_f1 | 68.6349 |
| eval_best_f1_thresh | 0 |
| eval_exact | 65.2405 |
| eval_f1 | 68.6265 |
| eval_samples | 12165 |
| eval_total | 11873 |
| train_loss | 1.40336 |
| train_runtime | 1365.28 |
| train_samples | 131823 |
| train_samples_per_second | 289.662 |
| train_steps_per_second | 0.567 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
---
# About Us
<img src="https://squirro.com/wp-content/themes/squirro/img/squirro_logo.svg" alt="Squirro Logo" width="250"/>
Squirro marries data from any source with your intent, and your context to intelligently augment decision-making - right when you need it!
An Insight Engine at its core, Squirro works with global organizations, primarily in financial services, public sector, professional services, and manufacturing, among others. Customers include Bank of England, European Central Bank (ECB), Deutsche Bundesbank, Standard Chartered, Henkel, Armacell, Candriam, and many other world-leading firms.
Founded in 2012, Squirro is currently present in Zürich, London, New York, and Singapore. Further information about AI-driven business insights can be found at http://squirro.com.
## Social media profiles:
- Redefining AI Podcast (Spotify): https://open.spotify.com/show/6NPLcv9EyaD2DcNT8v89Kb
- Redefining AI Podcast (Apple Podcasts): https://podcasts.apple.com/us/podcast/redefining-ai/id1613934397
- Squirro LinkedIn: https://www.linkedin.com/company/squirroag
- Squirro Academy LinkedIn: https://www.linkedin.com/showcase/the-squirro-academy
- Twitter: https://twitter.com/Squirro
- Facebook: https://www.facebook.com/squirro
- Instagram: https://www.instagram.com/squirro/
|
cammy/bart-large-cnn-1000-pad-early-lit | 83874d8a339c68a668ee15d8df2ee3527bbf845c | 2022-03-07T10:56:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-1000-pad-early-lit | 3 | null | transformers | 21,966 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-1000-pad-early-lit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-1000-pad-early-lit
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4800
- Rouge1: 28.4538
- Rouge2: 13.5656
- Rougel: 22.2066
- Rougelsum: 25.3361
- Gen Len: 66.53
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1556 | 1.0 | 1000 | 0.4383 | 29.1275 | 14.1415 | 22.5802 | 26.37 | 65.93 |
| 0.0853 | 2.0 | 2000 | 0.4800 | 28.4538 | 13.5656 | 22.2066 | 25.3361 | 66.53 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
koenvdv/my-test-model | bbd3b526dc553b2055e90e38bdec3d126e3375b3 | 2022-03-08T08:28:03.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | koenvdv | null | koenvdv/my-test-model | 3 | null | transformers | 21,967 | Entry not found |
fenixobia/distilbert-base-uncased-finetuned-cola | 5c62c094f55e084f390ff18217ea5ee54a2e68bf | 2022-03-14T11:52:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | fenixobia | null | fenixobia/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 21,968 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5595884617444483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7808
- Matthews Correlation: 0.5596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.522 | 1.0 | 535 | 0.5361 | 0.4215 |
| 0.3472 | 2.0 | 1070 | 0.5309 | 0.5046 |
| 0.2342 | 3.0 | 1605 | 0.6451 | 0.5351 |
| 0.1673 | 4.0 | 2140 | 0.7808 | 0.5596 |
| 0.1249 | 5.0 | 2675 | 0.8750 | 0.5565 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1
- Datasets 1.18.4
- Tokenizers 0.11.6
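## How to use
A minimal classification sketch; unless the labels were renamed in the config, the CoLA head exposes the generic `LABEL_0` / `LABEL_1` names.
```python
from transformers import pipeline

model_name = "fenixobia/distilbert-base-uncased-finetuned-cola"

# Binary acceptability classification (CoLA-style)
classifier = pipeline("text-classification", model=model_name)

print(classifier("The book was written by the author."))  # a grammatical sentence
print(classifier("Book the was author written by."))      # a scrambled, ungrammatical sentence
```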
|
MrAnderson/nystrom-1024-full-trivia | 6371bc5437414f5c4d0cfb8d57d7c88c62173bee | 2022-03-08T15:39:40.000Z | [
"pytorch",
"nystromformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | MrAnderson | null | MrAnderson/nystrom-1024-full-trivia | 3 | null | transformers | 21,969 | Entry not found |
gayanin/t5-small-med-term-mlm | 3e6a9457bf7fe5d5105c1ca052b67f0016b14392 | 2022-03-08T11:46:57.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/t5-small-med-term-mlm | 3 | null | transformers | 21,970 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-mlm
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4736
- Rouge2 Precision: 0.7731
- Rouge2 Recall: 0.5541
- Rouge2 Fmeasure: 0.6251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6498 | 1.0 | 15827 | 0.5480 | 0.7629 | 0.5457 | 0.6161 |
| 0.5674 | 2.0 | 31654 | 0.4989 | 0.7697 | 0.551 | 0.622 |
| 0.5631 | 3.0 | 47481 | 0.4795 | 0.7726 | 0.5541 | 0.625 |
| 0.534 | 4.0 | 63308 | 0.4736 | 0.7731 | 0.5541 | 0.6251 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
z-uo/bert-qasper | cf52cceb3e1d52950818d86dc18b77f222dc05ef | 2022-03-08T18:31:21.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:z-uo/qasper-squad",
"transformers",
"autotrain_compatible"
] | question-answering | false | z-uo | null | z-uo/bert-qasper | 3 | null | transformers | 21,971 | ---
language: en
datasets:
- z-uo/qasper-squad
---
# bert-base for QA with qasper
Trained from bert-base-uncased.
How to use in Python:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model with pipeline
model_name = "z-uo/bert-qasper"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
# Get predictions
QA_input = {
'question': 'what they propose?',
'context': "In this paper, we provide an innovative contribution in the research domain dedicated to crop mapping by exploiting the of Sentinel-2 satellite images time series, with the specific aim to extract information on 'where and when' crops are grown. The final goal is to set up a workflow able to reliably identify (classify) the different crops that are grown in a given area by exploiting an end-to-end (3+2)D convolutional neural network (CNN) for semantic segmentation. The method also has the ambition to provide information, at pixel level, regarding the period in which a given crop is cultivated during the season. To this end, we propose a solution called Class Activation Interval (CAI) which allows us to interpret, for each pixel, the reasoning made by CNN in the classification determining in which time interval, of the input time series, the class is likely to be present or not. Our experiments, using a public domain dataset, show that the approach is able to accurately detect crop classes with an overall accuracy of about 93% and that the network can detect discriminatory time intervals in which crop is cultivated. These results have twofold importance: (i) demonstrate the ability of the network to correctly interpret the investigated physical process (i.e., bare soil condition, plant growth, senescence and harvesting according to specific cultivated variety) and (ii) provide further information to the end-user (e.g., the presence of crops and its temporal dynamics)."
}
res = nlp(QA_input)
# Load model & tokenizer without pipeline
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
``` |
alirezafarashah/wav2vec2-base-ks | 752406f88b75bf09114b63d1b7631fd58acc86fb | 2022-03-08T11:41:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | alirezafarashah | null | alirezafarashah/wav2vec2-base-ks | 3 | null | transformers | 21,972 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0982
- Accuracy: 0.9825
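A minimal usage sketch with the audio-classification pipeline (the audio filename is a placeholder; wav2vec2-base expects 16 kHz mono input):
```python
from transformers import pipeline

# Keyword spotting fine-tuned on the superb "ks" split.
classifier = pipeline("audio-classification", model="alirezafarashah/wav2vec2-base-ks")
print(classifier("sample.wav", top_k=3))  # "sample.wav" is a placeholder path
```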
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.8465 | 1.0 | 399 | 0.8179 | 0.7516 |
| 0.2962 | 2.0 | 798 | 0.9771 | 0.2077 |
| 0.1891 | 3.0 | 1197 | 0.9819 | 0.1195 |
| 0.19 | 4.0 | 1596 | 0.9825 | 0.0982 |
| 0.1685 | 5.0 | 1995 | 0.9825 | 0.0952 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
mrm8488/electricidad-small-finetuned-paws-x-es | d0b55c70e67c40e6b28174ae98f316386e18ff19 | 2022-03-07T20:22:24.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers"
] | text-classification | false | mrm8488 | null | mrm8488/electricidad-small-finetuned-paws-x-es | 3 | null | transformers | 21,973 | Entry not found |
negfir/SQUAD10L | 4c98bbf3e0efaaacc63213967a1a8a2713852039 | 2022-03-09T01:32:54.000Z | [
"pytorch",
"squeezebert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | negfir | null | negfir/SQUAD10L | 3 | null | transformers | 21,974 | Entry not found |
EngNada/wav2vec2-large-xlsr-53-demo1 | 5ebc3afe82bc14241acba15a3a738c8f4a92aa13 | 2022-03-09T20:54:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | EngNada | null | EngNada/wav2vec2-large-xlsr-53-demo1 | 3 | null | transformers | 21,975 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9692
- Wer: 0.8462
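A minimal transcription sketch (the audio path is a placeholder; note the high WER above, so transcripts from this checkpoint may be unreliable):
```python
from transformers import pipeline

# Speech-to-text with the fine-tuned XLSR checkpoint; expects 16 kHz mono audio.
asr = pipeline("automatic-speech-recognition", model="EngNada/wav2vec2-large-xlsr-53-demo1")
print(asr("recording.wav")["text"])  # "recording.wav" is a placeholder path
```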
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.978 | 0.06 | 100 | 3.5377 | 1.0 |
| 3.5026 | 0.13 | 200 | 3.4366 | 1.0 |
| 3.4084 | 0.19 | 300 | 3.3831 | 1.0 |
| 3.3551 | 0.26 | 400 | 3.2563 | 1.0 |
| 3.2668 | 0.32 | 500 | 3.2109 | 1.0 |
| 2.9398 | 0.38 | 600 | 2.4548 | 0.9987 |
| 2.2204 | 0.45 | 700 | 1.8870 | 1.0135 |
| 1.7401 | 0.51 | 800 | 1.6816 | 1.0247 |
| 1.5748 | 0.57 | 900 | 1.4741 | 0.9953 |
| 1.4539 | 0.64 | 1000 | 1.4573 | 0.9852 |
| 1.3612 | 0.7 | 1100 | 1.3534 | 0.9529 |
| 1.3328 | 0.77 | 1200 | 1.3380 | 0.9320 |
| 1.2459 | 0.83 | 1300 | 1.2984 | 0.9247 |
| 1.1976 | 0.89 | 1400 | 1.2515 | 0.9252 |
| 1.1593 | 0.96 | 1500 | 1.2345 | 0.9030 |
| 1.1094 | 1.02 | 1600 | 1.2135 | 0.9305 |
| 1.0485 | 1.09 | 1700 | 1.2045 | 0.9121 |
| 0.9893 | 1.15 | 1800 | 1.1876 | 0.8990 |
| 1.0099 | 1.21 | 1900 | 1.1663 | 0.8889 |
| 0.982 | 1.28 | 2000 | 1.1674 | 0.8901 |
| 0.9975 | 1.34 | 2100 | 1.1181 | 0.8812 |
| 0.952 | 1.4 | 2200 | 1.1119 | 0.8817 |
| 0.9311 | 1.47 | 2300 | 1.0786 | 0.8773 |
| 0.9398 | 1.53 | 2400 | 1.1016 | 0.8720 |
| 0.9148 | 1.6 | 2500 | 1.0878 | 0.8778 |
| 0.9114 | 1.66 | 2600 | 1.1004 | 0.8712 |
| 0.902 | 1.72 | 2700 | 1.0223 | 0.8744 |
| 0.8978 | 1.79 | 2800 | 1.0616 | 0.8459 |
| 0.8675 | 1.85 | 2900 | 1.0974 | 0.8643 |
| 0.8373 | 1.92 | 3000 | 1.0389 | 0.8547 |
| 0.8575 | 1.98 | 3100 | 1.0388 | 0.8480 |
| 0.8313 | 2.04 | 3200 | 1.0001 | 0.8648 |
| 0.7357 | 2.11 | 3300 | 1.0222 | 0.8705 |
| 0.743 | 2.17 | 3400 | 1.0859 | 0.8765 |
| 0.7306 | 2.23 | 3500 | 1.0109 | 0.8515 |
| 0.7525 | 2.3 | 3600 | 0.9942 | 0.8619 |
| 0.7308 | 2.36 | 3700 | 1.0004 | 0.8578 |
| 0.7266 | 2.43 | 3800 | 1.0003 | 0.8497 |
| 0.737 | 2.49 | 3900 | 1.0146 | 0.8505 |
| 0.7202 | 2.55 | 4000 | 1.0172 | 0.8653 |
| 0.6945 | 2.62 | 4100 | 0.9894 | 0.8415 |
| 0.6633 | 2.68 | 4200 | 0.9894 | 0.8496 |
| 0.6972 | 2.75 | 4300 | 0.9805 | 0.8505 |
| 0.6872 | 2.81 | 4400 | 0.9939 | 0.8509 |
| 0.7238 | 2.87 | 4500 | 0.9740 | 0.8532 |
| 0.6847 | 2.94 | 4600 | 0.9692 | 0.8462 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
MrAnderson/yoso-1024-full-trivia | 0edb95d90b671a25df6ab0aee717619ca690bfb1 | 2022-03-09T16:29:23.000Z | [
"pytorch",
"yoso",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | MrAnderson | null | MrAnderson/yoso-1024-full-trivia | 3 | null | transformers | 21,976 | Entry not found |
ctoraman/RoBERTa-TR-medium-bpe-7k | 6dae64fc9aeec2b1154e83f78e40c42d71c7ff1c | 2022-04-20T06:56:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-bpe-7k | 3 | null | transformers | 21,977 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium BPE 7k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 7.5k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
ctoraman/RoBERTa-TR-medium-bpe-28k | d710d28207b9682c976c09e5a430fa617831975b | 2022-04-20T06:48:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-bpe-28k | 3 | null | transformers | 21,978 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium BPE 28k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 28.6k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
ctoraman/RoBERTa-TR-medium-bpe-44k | b3604dba368e75494b33297b5202ca487ec68b82 | 2022-04-20T06:54:12.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-bpe-44k | 3 | null | transformers | 21,979 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium BPE 44k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 44.5k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
ctoraman/RoBERTa-TR-medium-bpe-66k | ef980f617c4d04ac3b826ea6d8d3808b4051a7b0 | 2022-04-20T06:54:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-bpe-66k | 3 | null | transformers | 21,980 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium BPE 66k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 66.7k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
ctoraman/RoBERTa-TR-medium-morph-28k | a7b678dc50a02b1525790e5cebf24b9259ca278e | 2022-04-20T06:57:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-morph-28k | 3 | null | transformers | 21,981 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Morph-level 28k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Morph-level, which means that text is split according to a Turkish morphological analyzer (Zemberek). Vocabulary size is 28.3k.
## Note that this model needs a preprocessing step before running, because the tokenizer file is not a morphological analyzer. That is, the test dataset cannot be split into morphemes with the tokenizer file alone. The user needs to process any test dataset with a Turkish morphological analyzer (Zemberek in this case) before running evaluation.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
ctoraman/RoBERTa-TR-medium-word-28k | 8c7256ce3413f674f023032150181f61f1da7988 | 2022-04-20T07:00:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-word-28k | 3 | null | transformers | 21,982 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Word-level 28k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 28.6k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
``` |
ctoraman/RoBERTa-TR-medium-word-44k | ed940239ec8fee8a908492e0f1b5cbed4164ae7e | 2022-04-20T06:47:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-word-44k | 3 | null | transformers | 21,983 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Word-level 44k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 44.5k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```
from transformers import AutoModel, PreTrainedTokenizerFast
model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
Kevincp560/bigbird-pegasus-large-arxiv-finetuned-pubmed | 6a50e1d2ea91c24af1740144dc97259d8155e105 | 2022-03-09T19:30:11.000Z | [
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/bigbird-pegasus-large-arxiv-finetuned-pubmed | 3 | 0 | transformers | 21,984 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: bigbird-pegasus-large-arxiv-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 45.4807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-pegasus-large-arxiv-finetuned-pubmed
This model is a fine-tuned version of [google/bigbird-pegasus-large-arxiv](https://huggingface.co/google/bigbird-pegasus-large-arxiv) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6049
- Rouge1: 45.4807
- Rouge2: 20.0199
- Rougel: 28.3621
- Rougelsum: 41.4618
- Gen Len: 219.144
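A minimal usage sketch with the summarization pipeline (the input text and generation lengths below are placeholders, not the settings used to produce the scores above):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Kevincp560/bigbird-pegasus-large-arxiv-finetuned-pubmed",
)
article = "Replace with the full text of a biomedical article."  # placeholder input
print(summarizer(article, max_length=256, min_length=64, truncation=True)[0]["summary_text"])
```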
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.594 | 1.0 | 500 | 1.9879 | 33.6364 | 13.5074 | 21.4286 | 29.7158 | 189.014 |
| 1.9146 | 2.0 | 1000 | 1.6494 | 44.0056 | 19.0069 | 27.5142 | 40.0492 | 210.528 |
| 1.7378 | 3.0 | 1500 | 1.6213 | 44.7071 | 19.3559 | 27.6806 | 40.6124 | 213.596 |
| 1.692 | 4.0 | 2000 | 1.6081 | 45.1505 | 19.7355 | 28.06 | 41.0108 | 213.674 |
| 1.6656 | 5.0 | 2500 | 1.6049 | 45.4807 | 20.0199 | 28.3621 | 41.4618 | 219.144 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
kevinjesse/codebert-MT4TS | e92091fe10ce32a103402bb41115ac2e7c7536f2 | 2022-03-09T18:51:44.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kevinjesse | null | kevinjesse/codebert-MT4TS | 3 | null | transformers | 21,985 | Entry not found |
SuperAI2-Machima/mt5-small-thai_translation_th-en_en-th_V2 | d233fa1f5738092c141967c6d093f59c094c98f6 | 2022-03-09T19:15:49.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SuperAI2-Machima | null | SuperAI2-Machima/mt5-small-thai_translation_th-en_en-th_V2 | 3 | null | transformers | 21,986 | Entry not found |
amanm27/bert-base-uncased-sports | 491cd3cdad12ef854d2b6e252b549045b40ab05e | 2022-03-10T06:40:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | amanm27 | null | amanm27/bert-base-uncased-sports | 3 | null | transformers | 21,987 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-sports
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sports
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0064
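A minimal usage sketch with the fill-mask pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Sports-domain-adapted masked language model.
fill = pipeline("fill-mask", model="amanm27/bert-base-uncased-sports")
for pred in fill("The striker scored a [MASK] in the final minute."):
    print(pred["token_str"], round(pred["score"], 3))
```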
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4926 | 1.0 | 912 | 2.1186 |
| 2.2168 | 2.0 | 1824 | 2.0392 |
| 2.1327 | 3.0 | 2736 | 2.0081 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
davanstrien/vit-base-patch16-224-in21k-base-manuscripts | d2c85823b39c4ab49d0bc731050cfcad578aff92 | 2022-03-10T08:01:01.000Z | [
"pytorch",
"tensorboard",
"vit",
"transformers",
"masked-image-modeling",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | davanstrien | null | davanstrien/vit-base-patch16-224-in21k-base-manuscripts | 3 | null | transformers | 21,988 | ---
license: apache-2.0
tags:
- masked-image-modeling
- generated_from_trainer
model-index:
- name: vit-base-patch16-224-in21k-base-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-base-manuscripts
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1333
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5198 | 1.0 | 32 | 0.5208 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
pratt3000/wav2vec2-base-finetuned-ks | d425c9fc875f2e46132035f8a8db8ffdef27ae2d | 2022-03-11T12:23:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | pratt3000 | null | pratt3000/wav2vec2-base-finetuned-ks | 3 | null | transformers | 21,989 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0029
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0037 | 1.0 | 400 | 0.0054 | 0.9991 |
| 0.0007 | 2.0 | 800 | 0.0029 | 0.9997 |
| 0.0004 | 3.0 | 1200 | 0.0028 | 0.9997 |
| 0.0003 | 4.0 | 1600 | 0.0029 | 0.9997 |
| 0.0003 | 5.0 | 2000 | 0.0028 | 0.9997 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.10.3
|
cambridgeltl/c2_mbert_de2tr_1k | c8a913f85f8663b7f6288e990c0966044c65fd31 | 2022-03-10T14:26:35.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/c2_mbert_de2tr_1k | 3 | null | transformers | 21,990 | Entry not found |
Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak | eca7dafd7dfea9f634dc91b08315ef7c9e1b6bfc | 2022-03-10T16:53:08.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | Ameer05 | null | Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak | 3 | null | transformers | 21,991 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak
This model is a fine-tuned version of [Ameer05/model-token-repo](https://huggingface.co/Ameer05/model-token-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4511
- Rouge1: 59.76
- Rouge2: 52.1999
- Rougel: 57.3631
- Rougelsum: 59.3075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 2.0185 | 52.2186 | 45.4675 | 49.3152 | 51.9415 |
| No log | 1.91 | 10 | 1.6571 | 60.7728 | 52.8611 | 57.3487 | 60.1676 |
| No log | 2.91 | 15 | 1.5323 | 60.5674 | 52.2246 | 57.9846 | 60.073 |
| No log | 3.91 | 20 | 1.4556 | 61.2167 | 53.5087 | 58.9609 | 60.893 |
| 1.566 | 4.91 | 25 | 1.4632 | 62.918 | 55.4544 | 60.7116 | 62.6614 |
| 1.566 | 5.91 | 30 | 1.4360 | 60.4173 | 52.5859 | 57.8131 | 59.8864 |
| 1.566 | 6.91 | 35 | 1.4361 | 61.4273 | 53.9663 | 59.4445 | 60.9672 |
| 1.566 | 7.91 | 40 | 1.4477 | 60.3401 | 52.7276 | 57.7504 | 59.8209 |
| 0.6928 | 8.91 | 45 | 1.4511 | 59.76 | 52.1999 | 57.3631 | 59.3075 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.10.3
|
Vasily/reduce | f59b4d713b7f89770537796b4d2b81ea853e69da | 2022-03-11T12:25:18.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | Vasily | null | Vasily/reduce | 3 | null | transformers | 21,992 | Entry not found |
l3cube-pune/mahahate-bert | b0fd4b921b1ce4adcb81b2c3dcecd73ba2576e1c | 2022-06-26T14:43:01.000Z | [
"pytorch",
"bert",
"text-classification",
"mr",
"dataset:L3Cube-MahaHate",
"arxiv:2203.13778",
"transformers",
"license:cc-by-4.0"
] | text-classification | false | l3cube-pune | null | l3cube-pune/mahahate-bert | 3 | null | transformers | 21,993 | ---
language: mr
tags:
license: cc-by-4.0
datasets:
- L3Cube-MahaHate
widget:
- text: "I like you. </s></s> I love you."
---
## MahaHate-BERT
MahaHate-BERT (Marathi hate speech identification) is a MahaBERT (l3cube-pune/marathi-bert) model fine-tuned on L3Cube-MahaHate, a Marathi tweet-based hate speech detection dataset. This is a two-class model with the labels hate (LABEL_1) and not hate (LABEL_0). The 4-class model can be found <a href='https://huggingface.co/l3cube-pune/mahahate-multi-roberta'> here </a>.
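A minimal usage sketch with the text-classification pipeline (the input string below is a placeholder; replace it with Marathi text):
```python
from transformers import pipeline

# Binary hate-speech detection for Marathi tweets; per the card above,
# LABEL_1 = hate and LABEL_0 = not hate.
classifier = pipeline("text-classification", model="l3cube-pune/mahahate-bert")
print(classifier("<marathi tweet text>"))  # placeholder input
```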
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2203.13778). |
Splend1dchan/deberta-large-slue-goldtrascription-e50 | 327d25d951e89f00f195c330ee7dae55a4e7ace9 | 2022-03-12T10:30:29.000Z | [
"pytorch",
"deberta",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/deberta-large-slue-goldtrascription-e50 | 3 | null | transformers | 21,994 | Deberta large trained on slue transcriptions for 50 epochs, lr = 5e-6
|
MrAnderson/nystrom-2048-full-trivia-copied-embeddings | 3ac4a97a880ab466498e1f70a7dd843858c07662 | 2022-03-12T11:15:46.000Z | [
"pytorch",
"nystromformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | MrAnderson | null | MrAnderson/nystrom-2048-full-trivia-copied-embeddings | 3 | null | transformers | 21,995 | Entry not found |
StivenLancheros/Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es | 3e8fd15e623120d7cd29441adb1d9ea03bc05fe1 | 2022-03-12T11:39:55.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es | 3 | null | transformers | 21,996 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.8664
- Recall: 0.8587
- F1: 0.8625
- Accuracy: 0.9727
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical, from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) corpus in Spanish and English.
Entity tags have been normalized and replaced from the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
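A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative, not from the CRAFT corpus):
```python
from transformers import pipeline

# Grouped-entity NER over biomedical text in English or Spanish.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The BRCA1 protein is expressed in Mus musculus liver cells."))
```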
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0564 | 1.0 | 1360 | 0.1459 | 0.8296 | 0.8489 | 0.8392 | 0.9696 |
| 0.0222 | 2.0 | 2720 | 0.1554 | 0.8650 | 0.8320 | 0.8482 | 0.9702 |
| 0.0124 | 3.0 | 4080 | 0.1670 | 0.8588 | 0.8564 | 0.8576 | 0.9717 |
| 0.0052 | 4.0 | 5440 | 0.1750 | 0.8664 | 0.8587 | 0.8625 | 0.9727 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-gpt2-medium-no-adapter-long-run | 02bb4cd5bb608c02a4aad3ae9ecd087e15bacc17 | 2022-03-13T17:42:05.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-gpt2-medium-no-adapter-long-run | 3 | null | transformers | 21,997 | Entry not found |
test1345/autonlp-savesome-631818261 | 6b6859a39ff9c74984f3471eb642f5807d529fc0 | 2022-03-12T19:00:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:test1345/autonlp-data-savesome",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | test1345 | null | test1345/autonlp-savesome-631818261 | 3 | null | transformers | 21,998 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- test1345/autonlp-data-savesome
co2_eq_emissions: 5.714250590300453
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 631818261
- CO2 Emissions (in grams): 5.714250590300453
## Validation Metrics
- Loss: 0.44651690125465393
- Accuracy: 0.8792873051224944
- Macro F1: 0.839261602941426
- Micro F1: 0.8792873051224943
- Weighted F1: 0.8790427387522044
- Macro Precision: 0.8407634723656228
- Micro Precision: 0.8792873051224944
- Weighted Precision: 0.8801219917819031
- Macro Recall: 0.8400328140795883
- Micro Recall: 0.8792873051224944
- Weighted Recall: 0.8792873051224944
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/test1345/autonlp-savesome-631818261
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("test1345/autonlp-savesome-631818261", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("test1345/autonlp-savesome-631818261", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
anwesham/indicbert_ur | bb2701a942b95cbf6c424ee4e3072d6f642e2589 | 2022-03-13T07:58:10.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | anwesham | null | anwesham/indicbert_ur | 3 | null | transformers | 21,999 | Entry not found |