modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
daveni/aesthetic_attribute_classifier | f0e40ce6ccbfd31e1dd3e4ac2bbcc0d6bb2e86a7 | 2022-04-12T14:11:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | daveni | null | daveni/aesthetic_attribute_classifier | 20 | null | transformers | 8,400 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: aesthetic_attribute_classifier
results: []
widget:
- text: Check your vertical on the main support; it looks a little off. I'd also like to see how it looks with a bit of the sky cropped from the photo
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aesthetic_attribute_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [PCCD dataset](https://github.com/ivclab/DeepPhotoCritic-ICCV17).
It achieves the following results on the evaluation set:
- Loss: 0.3976
- Precision: 0.877129341279301
- Recall: 0.8751381215469614
- F1: 0.875529982855803
- Accuracy: 0.8751381215469614
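For a quick check of the classifier, a minimal usage sketch is shown below. It assumes the checkpoint is loaded from the Hub under this repository id; the example comment is illustrative and mirrors the widget example above.
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="daveni/aesthetic_attribute_classifier")

# Score a piece of photo-critique feedback, in the same style as the widget example.
comment = "Check your vertical on the main support; it looks a little off."
print(classifier(comment))  # [{'label': ..., 'score': ...}] - label names depend on the fine-tuning label set
```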
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.452 | 1.0 | 1528 | 0.4109 | 0.8632779077963935 | 0.8615101289134438 | 0.8618616182904953 | 0.8615101289134438 |
| 0.3099 | 2.0 | 3056 | 0.3976 | 0.877129341279301 | 0.8751381215469614 | 0.875529982855803 | 0.8751381215469614 |
| 0.227 | 3.0 | 4584 | 0.4320 | 0.876211408446225 | 0.874401473296501 | 0.8747427955387239 | 0.874401473296501 |
| 0.1645 | 4.0 | 6112 | 0.4840 | 0.8724641667216837 | 0.8714548802946593 | 0.8714577820909117 | 0.8714548802946593 |
| 0.1141 | 5.0 | 7640 | 0.5083 | 0.8755445355051571 | 0.8747697974217311 | 0.8748766125899489 | 0.8747697974217311 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Intel/electra-small-discriminator-mrpc | 2e5e4c9ba48e5e6ea6a1ff8e62c8ce6092c20a87 | 2022-04-21T14:33:49.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Intel | null | Intel/electra-small-discriminator-mrpc | 20 | null | transformers | 8,401 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: electra-small-discriminator-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8983050847457628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-mrpc
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3909
- Accuracy: 0.8529
- F1: 0.8983
- Combined Score: 0.8756
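As a usage illustration, the sketch below scores a sentence pair for paraphrase equivalence with this checkpoint. The example sentences and the reading of the predicted label are assumptions for illustration, not part of the original card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Intel/electra-small-discriminator-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: is sentence_b a paraphrase of sentence_a?
sentence_a = "The company said quarterly profit rose 10 percent."
sentence_b = "Quarterly profit at the company increased by 10 percent, it said."

inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label naming depends on the checkpoint's id2label mapping
```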
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
emilylearning/finetuned_cgp_added_birth_date__female_weight_1.5__test_run_False__p_dataset_100 | 2d0ca5e706ad907c97899755c4a3ce65cbf5de35 | 2022-04-21T19:20:29.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_added_birth_date__female_weight_1.5__test_run_False__p_dataset_100 | 20 | null | transformers | 8,402 | Entry not found |
emilylearning/finetuned_cgp_added_none__female_weight_1.5__test_run_False__p_dataset_100 | c3f2f6ca2d33c53069cf8ffa03a2ca4f48be49d7 | 2022-04-21T22:08:18.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_added_none__female_weight_1.5__test_run_False__p_dataset_100 | 20 | null | transformers | 8,403 | Entry not found |
emilylearning/finetuned_cgp_added_name__female_weight_1.5__test_run_False__p_dataset_100 | 4894ac0aba21d5fe469ed8749d00f97ebb8958b0 | 2022-04-22T02:55:03.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_added_name__female_weight_1.5__test_run_False__p_dataset_100 | 20 | null | transformers | 8,404 | Entry not found |
tosin/dialogpt_mwoz_idioms | 55ead19d76a937ed3e7e86dbc1861c4a886b5f32 | 2022-04-24T16:32:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:multi_woz_v22 and PIE-English idioms corpus",
"transformers",
"conversational",
"license:cc-by-4.0"
] | conversational | false | tosin | null | tosin/dialogpt_mwoz_idioms | 20 | null | transformers | 8,405 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- multi_woz_v22 and PIE-English idioms corpus
metrics:
- perplexity
widget:
- text: "Does that mean Jane is off the hook?"
---
## DialoGPT_MWOZ_Idioms
This is a fine-tuned model of DialoGPT (medium)-MultiWOZ on the PIE-English idioms corpus. It is intended to be used as an idiom-aware conversational system.
The dataset it's trained on is limited in scope, as it covers only 10 classes of idioms (metaphor, simile, euphemism, parallelism, personification, oxymoron, paradox, hyperbole, irony and literal).
The perplexities achieved on the test and validation sets are 160.9 and 160.46, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/tosin/dialogpt_mwoz).
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | remind her to take it off the hook. |
|Bot | the bookie took it on the nod |
|User | A lengthy break is in order ; so make sure you keep your nose clean |
|Bot | they are under a cloud because of fears about US President's wife Hillary Clinton |
|User | Some families breed like rabbits and send us all their male heirs |
|Bot | he was out for the count |
Information about preprocessing, training and full details of DialoGPT can be found in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT).
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_mwoz_idioms")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_mwoz_idioms")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("DialoGPT_MWOZ_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
emilylearning/finetuned_cgp_add_none__f_weight_5__p_dataset_100__test_False | 4c028521670a182381d36a3413775b734afd0ab1 | 2022-04-25T00:06:55.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_add_none__f_weight_5__p_dataset_100__test_False | 20 | null | transformers | 8,406 | Entry not found |
emilylearning/finetuned_cgp_add_name__f_weight_5__p_dataset_100__test_False | 25a393447c1ba0a97e2debd0823d452264b945d4 | 2022-04-25T00:07:02.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_add_name__f_weight_5__p_dataset_100__test_False | 20 | null | transformers | 8,407 | Entry not found |
AbhiNaiky/finetuning-sentiment-model-3000-samples | bc46d902f1dec3454375a24bd30688bf87adcf24 | 2022-04-28T22:34:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | AbhiNaiky | null | AbhiNaiky/finetuning-sentiment-model-3000-samples | 20 | null | transformers | 8,408 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3170
- Accuracy: 0.8733
- F1: 0.875
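A minimal usage sketch, assuming the checkpoint is loaded from the Hub under this repository id; the example review is made up for illustration.
```python
from transformers import pipeline

# Binary sentiment classifier fine-tuned on IMDB reviews.
sentiment = pipeline(
    "text-classification",
    model="AbhiNaiky/finetuning-sentiment-model-3000-samples",
)
print(sentiment("A surprisingly heartfelt movie with a terrific lead performance."))
```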
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/CoverLetter | 2d8e4a1700d5ceec351656112200afafa52a7e09 | 2022-04-30T01:42:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/CoverLetter | 20 | null | transformers | 8,409 | How to write the initial prompt:
captivated by [Enter Company Name]'s
Also trained on: https://huggingface.co/BigSalmon/InformalToFormalLincoln40 (so you can use those prompt outlines, too) |
h4d35/Translator | 83862843027e6a4249bf7c07e75ef1e6fb47dd9f | 2022-05-01T19:35:32.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | h4d35 | null | h4d35/Translator | 20 | null | transformers | 8,410 | Entry not found |
SebastianS/bert-finetuned-ner | 650f0ce0b40cfc7cecf60289a563293724c97db6 | 2022-05-01T21:38:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | SebastianS | null | SebastianS/bert-finetuned-ner | 20 | null | transformers | 8,411 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Accuracy
type: accuracy
value: 0.9910634321093416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Accuracy: 0.9911
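A minimal usage sketch for running the tagger on free text, assuming the checkpoint is loaded from the Hub under this repository id; the example sentence is illustrative.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SebastianS/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City and Paris."))
```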
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0544 | 1.0 | 1756 | 0.0440 | 0.9892 |
| 0.0246 | 2.0 | 3512 | 0.0417 | 0.9906 |
| 0.0105 | 3.0 | 5268 | 0.0452 | 0.9911 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
anshr/distilgpt2_reward_model_final | d32c60ac1c319a507174d1121b961a929d0fb6c9 | 2022-05-02T22:15:34.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | anshr | null | anshr/distilgpt2_reward_model_final | 20 | null | transformers | 8,412 | Entry not found |
HiTZ/A2T_RoBERTa_SMFA_ACE-arg_WikiEvents-arg | ef1f48f956c26165e1b16baf2c8cf431dff8be34 | 2022-05-08T23:09:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"arxiv:2104.14690",
"arxiv:2203.13602",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | HiTZ | null | HiTZ/A2T_RoBERTa_SMFA_ACE-arg_WikiEvents-arg | 20 | null | transformers | 8,413 | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
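As an illustration of the zero-shot usage mentioned above, a minimal sketch with the standard Transformers zero-shot pipeline is shown below. The candidate labels and the hypothesis template are illustrative assumptions; the task-specific templates actually used for fine-tuning are defined in the Ask2Transformers library.
```python
from transformers import pipeline

zero_shot = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg_WikiEvents-arg",
)
result = zero_shot(
    "The embassy was attacked by [[two armed men]] on Tuesday.",
    candidate_labels=["attacker", "target", "instrument", "place"],
    hypothesis_template="The {} of the event is mentioned between the double brackets.",
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label and its score
```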
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
sileod/roberta-base-discourse-marker-prediction | 43ddf97b2f12eb3299cf1276b5e349b1e37099a0 | 2022-05-11T13:06:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:discovery",
"transformers",
"discourse-marker-prediction",
"discourse-connective-prediction",
"discourse-connective",
"discourse-marker",
"discourse-relation-prediction",
"pragmatics",
"discourse",
"license:apache-2.0"
] | text-classification | false | sileod | null | sileod/roberta-base-discourse-marker-prediction | 20 | 2 | transformers | 8,414 | ---
language:
- en
tags:
- discourse-marker-prediction
- discourse-connective-prediction
- discourse-connective
- discourse-marker
- discourse-relation-prediction
- pragmatics
- discourse
license: apache-2.0
datasets:
- discovery
metrics:
- accuracy
widget:
- text: "But no, Amazon selling 3D printers is not new.</s></s>The promise of 3D printing is very great."
---
# Discourse marker prediction / discourse connective prediction pretrained model
`roberta-base` pretrained for discourse marker prediction on the Discovery dataset, with a validation accuracy of 30.93% (majority-class baseline: 0.57%).
https://github.com/sileod/discovery
https://huggingface.co/datasets/discovery
This model can also be used as a pretrained model for NLU, pragmatics and discourse tasks.
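A minimal usage sketch, mirroring the widget example of this card: the two discourse units are separated with RoBERTa's `</s></s>` separator, and the returned label corresponds to the predicted connective (label naming depends on the checkpoint's `id2label` mapping).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sileod/roberta-base-discourse-marker-prediction",
)

# Two discourse units joined by RoBERTa's sentence-pair separator, as in the widget.
pair = "But no, Amazon selling 3D printers is not new.</s></s>The promise of 3D printing is very great."
print(classifier(pair))  # highest-scoring discourse marker / connective
```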
## Citing & Authors
```bibtex
@inproceedings{sileo-etal-2019-mining,
title = "Mining Discourse Markers for Unsupervised Sentence Representation Learning",
author = "Sileo, Damien and
Van De Cruys, Tim and
Pradel, Camille and
Muller, Philippe",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1351",
doi = "10.18653/v1/N19-1351",
pages = "3477--3486",
}
``` |
UGARIT/grc-alignment | 7ecf8d2f809582a404565c68b2f7f5c7ad4307c4 | 2022-07-07T08:53:38.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | UGARIT | null | UGARIT/grc-alignment | 20 | null | transformers | 8,415 | ---
license: cc-by-4.0
---
# Automatic Translation Alignment of Ancient Greek Texts
GRC-ALIGNMENT model is an XLM-RoBERTa-based model, fine-tuned for automatic multilingual text alignment at the word level.
The model is trained on 12 million monolingual ancient Greek tokens with Masked Language Model (MLM) training objective. Further, the model is fine-tuned on 45k parallel sentences, mainly in ancient Greek-English, Greek-Latin, and Greek-Georgian.
### Multilingual Training Dataset
| Languages |Sentences | Source |
|:---------------------------------------|:-----------:|:--------------------------------------------------------------------------------|
| GRC-ENG | 32.500 | Perseus Digital Library (Iliad, Odyssey, Xenophon, New Testament) |
| GRC-LAT | 8.200 | [Digital Fragmenta Historicorum Graecorum project](https://www.dfhg-project.org/) |
| GRC-KAT <br>GRC-ENG <br>GRC-LAT<br>GRC-ITA<br>GRC-POR | 4.000 | [UGARIT Translation Alignment Editor](https://ugarit.ialigner.com/ ) |
### Model Performance
| Languages | Alignment Error Rate |
|:---------:|:--------------------:|
| GRC-ENG | 19.73% (IterMax) |
| GRC-POR | 23.91% (IterMax) |
| GRC-LAT | 10.60% (ArgMax) |
The gold standard datasets are available on [Github](https://github.com/UgaritAlignment/Alignment-Gold-Standards).
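For illustration only, the sketch below extracts contextual subword embeddings from this checkpoint and applies a simple mutual-argmax heuristic in the spirit of the ArgMax setting above. It is not the exact alignment pipeline behind the reported numbers, and the example sentences are arbitrary.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "UGARIT/grc-alignment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(sentence):
    # Contextual subword embeddings, with the <s> and </s> special tokens dropped.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    return hidden[1:-1]

src = embed("μῆνιν ἄειδε θεά")            # Ancient Greek
tgt = embed("Sing, goddess, the wrath")   # English

# Cosine similarity between every source and target subword.
sim = torch.nn.functional.normalize(src, dim=-1) @ torch.nn.functional.normalize(tgt, dim=-1).T

# Mutual-argmax links: keep (i, j) only if i and j are each other's best match.
links = [
    (i, j)
    for i in range(sim.size(0))
    for j in range(sim.size(1))
    if sim[i].argmax().item() == j and sim[:, j].argmax().item() == i
]
print(links)  # subword-level links; mapping back to words is up to the caller
```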
If you use this model, please cite our papers:
<pre>
@InProceedings{yousef-EtAl:2022:LREC,
author = {Yousef, Tariq and Palladino, Chiara and Shamsian, Farnoosh and d’Orange Ferreira, Anise and Ferreira dos Reis, Michel},
title = {An automatic model and Gold Standard for translation alignment of Ancient Greek},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5894--5905},
url = {https://aclanthology.org/2022.lrec-1.634}
}
@InProceedings{yousef-EtAl:2022:LT4HALA2022,
author = {Yousef, Tariq and Palladino, Chiara and Wright, David J. and Berti, Monica},
title = {Automatic Translation Alignment for Ancient Greek and Latin},
booktitle = {Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {101--107},
url = {https://aclanthology.org/2022.lt4hala2022-1.14}
}
</pre> |
LiYuan/Amazon-Cup-Cross-Encoder-Regression | 68bac3a580ee111489090067d060eefd8f81475b | 2022-05-08T17:45:01.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | LiYuan | null | LiYuan/Amazon-Cup-Cross-Encoder-Regression | 20 | null | transformers | 8,416 | ---
license: afl-3.0
---
This model is quite accurate at reranking products for a given query, an approach inspired by information retrieval techniques. In 2019, Nils Reimers and Iryna Gurevych introduced a new transformer model called Sentence-BERT (Sentence Embeddings using Siamese BERT-Networks), described in this paper: https://doi.org/10.48550/arxiv.1908.10084.
Sentence-BERT modifies BERT by adding a pooling operation on top of BERT's output, so that it can produce a fixed-size sentence embedding which can be used to calculate cosine similarity and so on. To obtain meaningful sentence embeddings in a vector space where similar or paired sentences lie close together, the authors created a triplet network that modifies BERT, with the architecture shown in the figure below.

# Download and Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LiYuan/Amazon-Cup-Cross-Encoder-Regression")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/Amazon-Cup-Cross-Encoder-Regression")
```
As the figure above shows, a pooling layer is added on top of each BERT model to obtain the sentence embeddings $u$ and $v$. The cosine similarity between $u$ and $v$ is then compared with the true score, and the mean squared error loss, which is the objective function, is backpropagated through the network to update the weights.
In our Amazon case, the query is sentence A and the concatenated product attributes are sentence B. We split the merged set (stratified) into **571,223** rows for training, **500** rows for validation, and **3,000** rows for testing. We limited the output score to the range 0 to 1. The following scores represent the degree of relevance between the query and the product attributes, following the Amazon KDD Cup website; they can be adjusted to improve model performance.
- 1: exact
- 0.1: substitute
- 0.01: complement
- 0: irrelevance
For this regression model, we used the Pearson correlation coefficient and Spearman's rank correlation coefficient to measure model performance; the higher the correlation, the better the model performs. The validation Pearson is **0.5670** and the validation Spearman is **0.5662**, which is a reasonable result.
We also evaluated the model on the test set, obtaining **0.5321** for Pearson and **0.5276** for Spearman. These test results are similar to those on the validation set, suggesting that the model generalizes well.
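Building on the loading snippet above, a minimal scoring sketch is shown below. The query/product pair is made up for illustration, and a single regression logit is assumed, as described in this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LiYuan/Amazon-Cup-Cross-Encoder-Regression"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical query/product pair, for illustration only.
query = "wireless noise cancelling headphones"
product = "Wireless Noise Canceling Over-Ear Headphones, Black, 30 hours battery life"

inputs = tokenizer(query, product, truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)  # roughly on the 0-1 relevance scale described above
```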
Finally, given a new query and its matched product list, we can feed them into this fine-tuned Cross-Encoder Regression model to obtain relevance scores and rerank the products, improving the customer's online shopping experience. |
nirajsaran/AdTextGenerator | 3fbf8f98d7716470d5e0c453514370350add20e1 | 2022-05-16T21:45:00.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | nirajsaran | null | nirajsaran/AdTextGenerator | 20 | 1 | transformers | 8,417 | ---
license: mit
inference:
parameters:
temperature: 0.7
use_cache: false
max_length: 200
top_k: 5
top_p: 0.9
widget:
- text: "Sony TV"
example_title: "Amazon Ad text Electronics"
- text: "Apple Watch"
example_title: "Amazon Ad text Wearables"
- text: "Last minute shopping for Samsung headphones for"
example_title: "Ads for shopping deals"
- text: "Labor Day discounts for"
example_title: "Ads for Holiday deals"
metrics:
- bleu
---
Generates ad text copy for Amazon shopping ads (fine-tuned for electronics and wearables).
The model is fine tuned on the EleutherAI/gpt-neo-125M model using the Amazon Ads dataset.
**Usage Examples:**
Select from among the examples in the dropdown or enter your own prompts.
You can try entering brand and product names like Samsung Galaxy to see the ad text generator in action.
Feel free to play around with native Amazon ads which are product descriptions like:
**Sony** BDPS3700 Streaming Blu-Ray Disc Player with Wi-Fi (Black).
**AmazonBasics** TV Trolley for 24-43" TVs with Swivel Feature.
Or try additional ad formats, similar to other shopping sites for holiday deals, like:
**Big savings on the new** Roku Streaming Device
**Mothers Day discounts for** Apple Watch Wireless Charger USB Charging Cable
**Last minute shopping for Samsung headphones for**
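To try these prompts outside the hosted widget, a minimal generation sketch is shown below; the sampling settings mirror the widget configuration of this card.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="nirajsaran/AdTextGenerator")

# Sampling settings taken from the inference widget configuration above.
out = generator(
    "Last minute shopping for Samsung headphones for",
    max_length=200,
    do_sample=True,
    temperature=0.7,
    top_k=5,
    top_p=0.9,
)
print(out[0]["generated_text"])
```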
**Model Performance:**
The model does quite well on the Electronics and Wearables categories on which it has been fine-tuned. There are, however, occasional hallucinations, though the ad copy is mostly coherent.
In other domains, it doesn't do quite as well, for example with prompts such as:
**Halloween Tesla is**
**Honda on sale**
|
SreyanG-NVIDIA/bert-base-cased-finetuned-ner | 0ef1c11debe8c6c4821f67e6b0854872dd7e9685 | 2022-05-10T10:05:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | SreyanG-NVIDIA | null | SreyanG-NVIDIA/bert-base-cased-finetuned-ner | 20 | null | transformers | 8,418 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9325301204819277
- name: Recall
type: recall
value: 0.9374663556432801
- name: F1
type: f1
value: 0.9349917229654156
- name: Accuracy
type: accuracy
value: 0.9840466238888562
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0650
- Precision: 0.9325
- Recall: 0.9375
- F1: 0.9350
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2346 | 1.0 | 878 | 0.0722 | 0.9168 | 0.9217 | 0.9192 | 0.9795 |
| 0.0483 | 2.0 | 1756 | 0.0618 | 0.9299 | 0.9370 | 0.9335 | 0.9837 |
| 0.0262 | 3.0 | 2634 | 0.0650 | 0.9325 | 0.9375 | 0.9350 | 0.9840 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BK-V/xlm-roberta-base-finetuned-peyma-fa | fd069b61fcae27bff58bf7230573a6d2aaf6331c | 2022-06-29T20:59:53.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | BK-V | null | BK-V/xlm-roberta-base-finetuned-peyma-fa | 20 | null | transformers | 8,419 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-peyma-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-peyma-fa
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0937
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1562 | 1.0 | 998 | 0.0691 | 0.8777 |
| 0.0638 | 2.0 | 1996 | 0.0703 | 0.8908 |
| 0.0457 | 3.0 | 2994 | 0.0645 | 0.8975 |
| 0.0281 | 4.0 | 3992 | 0.0842 | 0.8994 |
| 0.0206 | 5.0 | 4990 | 0.0651 | 0.9164 |
| 0.0139 | 6.0 | 5988 | 0.0787 | 0.9148 |
| 0.0083 | 7.0 | 6986 | 0.0838 | 0.9253 |
| 0.0052 | 8.0 | 7984 | 0.0833 | 0.9221 |
| 0.0031 | 9.0 | 8982 | 0.0947 | 0.9230 |
| 0.0028 | 10.0 | 9980 | 0.0937 | 0.9249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_42 | 3e9e0a1ae012c0e41c07232482391768dfcfb4fe | 2022-05-10T23:26:16.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_42 | 20 | null | transformers | 8,420 | Entry not found |
binay1999/bert-for-text-classification | cc658f46eba2e97c74fee52e7e2c6d9248934b3f | 2022-05-12T04:26:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | binay1999 | null | binay1999/bert-for-text-classification | 20 | null | transformers | 8,421 | Entry not found |
Armor/EmergencyNews_BERT_Base | 9322ddeba8ce4b97353ef546dd860c0cbb93e61a | 2022-05-18T10:45:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Armor | null | Armor/EmergencyNews_BERT_Base | 20 | null | transformers | 8,422 | ---
license: apache-2.0
---
|
fabianmmueller/deep-haiku-gpt-2 | 7f7b35790f6bfca9051ca716920199fb09c1f42c | 2022-05-24T20:42:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | fabianmmueller | null | fabianmmueller/deep-haiku-gpt-2 | 20 | 0 | transformers | 8,423 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deep-haiku-gpt-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deep-haiku-gpt-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [haiku](https://huggingface.co/datasets/statworx/haiku) dataset.
## Model description
The model is a fine-tuned version of GPT-2 for generation of [Haikus](https://en.wikipedia.org/wiki/Haiku). The model, data and training procedure are inspired by a [blog post by Robert A. Gonsalves](https://towardsdatascience.com/deep-haiku-teaching-gpt-j-to-compose-with-syllable-patterns-5234bca9701). Instead of using an 8-bit version of GPT-J 6B, we used vanilla GPT-2. From what we saw, the model performance is comparable, but the model is much easier to fine-tune.
We used the same multitask training approach as in the post, but significantly extended the dataset (almost double the size of the original one). A prepared version of the dataset can be found [here](https://huggingface.co/datasets/statworx/haiku).
## Intended uses & limitations
The model is intended to generate Haikus. To do so, it was trained using a multitask learning approach (see [Caruana 1997](http://www.cs.cornell.edu/~caruana/mlj97.pdf)) with the following four tasks:
- topic2graphemes `(keywords = text)`
- topic2phonemes `<keyword_phonemes = text_phonemes>`
- graphemes2phonemes `[text = text_phonemes]`
- phonemes2graphemes `{text_phonemes = text}`
To use the model, use an appropriate prompt like `"(dog rain ="` and let the model generate a Haiku given the keyword.
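A minimal generation sketch using the topic2graphemes prompt format described above; the generation settings are illustrative, not the ones used for evaluation.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="fabianmmueller/deep-haiku-gpt-2")

# topic2graphemes prompt: keywords in parentheses, as described above.
haikus = generator("(dog rain =", max_length=50, do_sample=True, num_return_sequences=3)
for h in haikus:
    print(h["generated_text"])
```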
## Training and evaluation data
We used a collection of existing haikus for training. Furthermore, all haikus were used in their graphemes version as well as a phonemes version. In addition, we extracted key word for all haikus using [KeyBERT](https://github.com/MaartenGr/KeyBERT) and sorted out haikus with a low text quality according to the [GRUEN score](https://github.com/WanzhengZhu/GRUEN).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
cradle-bio/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert | a19823ec86e6afb7a51cb29a469b8fa792153ea8 | 2022-05-30T16:24:59.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:train",
"transformers",
"protein language model",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | cradle-bio | null | cradle-bio/tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert | 20 | null | transformers | 8,424 | ---
license: apache-2.0
tags:
- protein language model
- generated_from_trainer
datasets:
- train
metrics:
- spearmanr
model-index:
- name: tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: cradle-bio/tape-fluorescence
type: train
metrics:
- name: Spearmanr
type: spearmanr
value: 0.5505486770316164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tape-fluorescence-prediction-tape-fluorescence-evotuning-DistilProtBert
This model is a fine-tuned version of [thundaa/tape-fluorescence-evotuning-DistilProtBert](https://huggingface.co/thundaa/tape-fluorescence-evotuning-DistilProtBert) on the cradle-bio/tape-fluorescence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3377
- Spearmanr: 0.5505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 2560
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 6.2764 | 0.93 | 7 | 1.9927 | -0.0786 |
| 1.1206 | 1.93 | 14 | 0.8223 | -0.1543 |
| 0.8054 | 2.93 | 21 | 0.6894 | 0.2050 |
| 0.7692 | 3.93 | 28 | 0.8084 | 0.2807 |
| 0.7597 | 4.93 | 35 | 0.6613 | 0.4003 |
| 0.7416 | 5.93 | 42 | 0.6803 | 0.3829 |
| 0.7256 | 6.93 | 49 | 0.6428 | 0.4416 |
| 0.6966 | 7.93 | 56 | 0.6086 | 0.4506 |
| 0.7603 | 8.93 | 63 | 0.9119 | 0.4697 |
| 0.9187 | 9.93 | 70 | 0.6048 | 0.4757 |
| 1.0371 | 10.93 | 77 | 2.0742 | 0.4076 |
| 1.0947 | 11.93 | 84 | 0.6633 | 0.4522 |
| 0.6946 | 12.93 | 91 | 0.6008 | 0.4123 |
| 0.6618 | 13.93 | 98 | 0.5931 | 0.4457 |
| 0.8635 | 14.93 | 105 | 1.9561 | 0.4331 |
| 0.9444 | 15.93 | 112 | 0.5627 | 0.5041 |
| 0.5535 | 16.93 | 119 | 0.4348 | 0.4840 |
| 0.9059 | 17.93 | 126 | 0.6704 | 0.5123 |
| 0.5693 | 18.93 | 133 | 0.4616 | 0.5285 |
| 0.6298 | 19.93 | 140 | 0.6915 | 0.5166 |
| 0.955 | 20.93 | 147 | 0.6679 | 0.5677 |
| 0.7866 | 21.93 | 154 | 0.8136 | 0.5559 |
| 0.6687 | 22.93 | 161 | 0.4782 | 0.5561 |
| 0.5336 | 23.93 | 168 | 0.4447 | 0.5499 |
| 0.4673 | 24.93 | 175 | 0.4258 | 0.5428 |
| 0.478 | 25.93 | 182 | 0.3651 | 0.5329 |
| 0.4023 | 26.93 | 189 | 0.3688 | 0.5428 |
| 0.3961 | 27.93 | 196 | 0.3692 | 0.5509 |
| 0.3808 | 28.93 | 203 | 0.3434 | 0.5514 |
| 0.3433 | 29.93 | 210 | 0.3377 | 0.5505 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
facebook/mcontriever | c9ea3fd6b5c96290b4741884525afdb40cfa9932 | 2022-05-29T08:58:37.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | facebook | null | facebook/mcontriever | 20 | 1 | transformers | 8,425 | Entry not found |
huggingtweets/billieeilish-nakedbibii-unjaded_jade | 1fd577347339be0e1c39b324796c436929da246e | 2022-05-30T21:39:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/billieeilish-nakedbibii-unjaded_jade | 20 | null | transformers | 8,426 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1387065127208247299/bni08CVZ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1530362441741217795/jxWqrgn5_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1105554414427885569/XkyfcoMJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">billie eilish & BIBI & Jade Bowler</div>
<div style="text-align: center; font-size: 14px;">@billieeilish-nakedbibii-unjaded_jade</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from billie eilish & BIBI & Jade Bowler.
| Data | billie eilish | BIBI | Jade Bowler |
| --- | --- | --- | --- |
| Tweets downloaded | 943 | 3230 | 3171 |
| Retweets | 260 | 134 | 122 |
| Short tweets | 15 | 891 | 120 |
| Tweets kept | 668 | 2205 | 2929 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36li8v9h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @billieeilish-nakedbibii-unjaded_jade's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2x4m00nv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2x4m00nv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/billieeilish-nakedbibii-unjaded_jade')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jakka/segformer-b0-finetuned-warehouse-part-1-V2 | 8b39725d4360abb046f96bf618ad19a8dbcb6209 | 2022-05-31T23:31:49.000Z | [
"pytorch",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | jakka | null | jakka/segformer-b0-finetuned-warehouse-part-1-V2 | 20 | null | transformers | 8,427 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-warehouse-part-1-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-warehouse-part-1-V2
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2737
- Mean Iou: 0.7224
- Mean Accuracy: 0.8119
- Overall Accuracy: 0.9668
- Per Category Iou: [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587]
- Per Category Accuracy: [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046]
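A minimal inference sketch, assuming the checkpoint is loaded from the Hub under this repository id; the input image path is a placeholder.
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

model_id = "jakka/segformer-b0-finetuned-warehouse-part-1-V2"
feature_extractor = SegformerFeatureExtractor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("warehouse_scene.jpg").convert("RGB")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax as the class map.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
print(segmentation.shape)
```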
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.7008 | 1.0 | 787 | 0.2473 | 0.5595 | 0.6448 | 0.9325 | [0.0, 0.8572456184869756, 0.8403481284744914, 0.9524827531570127, 0.7992052152702355, 0.9196710216877864, 0.9471503664300267, 0.6193304552041781, 0.9133086982125345, 0.17558267725303728, 0.0, 0.6344520667741999, 0.3360920970752956, 0.7642426437536942, 0.510575871022846, 0.6056988833269157, 0.021209386281588447, 0.27355691497341356, 0.6138181818181818, 0.40645271873846317] | [nan, 0.9155298033269351, 0.9463379226245591, 0.978836265135544, 0.9240214201112357, 0.9448111967681583, 0.9643622308798924, 0.6930912552699579, 0.9497575640760723, 0.18632531152693993, 0.0, 0.7500919033177098, 0.36409599568558715, 0.8900647437729461, 0.5728964730263244, 0.6549871668851026, 0.02166159025328631, 0.2902301645548354, 0.7353197421153511, 0.4694729147312794] |
| 0.1321 | 2.0 | 1574 | 0.2331 | 0.6221 | 0.7115 | 0.9457 | [0.0, 0.8970560279823083, 0.8791120244598839, 0.9603620467193393, 0.8160602187615088, 0.934767875213888, 0.9616837752836253, 0.7419391385825133, 0.9351874201394574, 0.26717521084051926, 0.0, 0.6985475965645938, 0.43481867741170893, 0.8134984418163408, 0.5459611126448698, 0.7401712453141447, 0.13175924760380514, 0.355121624272543, 0.7060811650388926, 0.6229231428877693] | [nan, 0.951233770160613, 0.9409053657605947, 0.9843213861494523, 0.9219686102230917, 0.9665968250506056, 0.9829729958024298, 0.8238168094655243, 0.9620596605954946, 0.29986351309033543, 0.0, 0.8030913978494624, 0.49467439665633006, 0.909599171191769, 0.5931253087796156, 0.8208142201834863, 0.14682189804424495, 0.3841705499014086, 0.8251147122030551, 0.70800907664895] |
| 0.1085 | 3.0 | 2361 | 0.2457 | 0.6542 | 0.7530 | 0.9521 | [0.0, 0.9079405116712079, 0.8959028018194484, 0.9654330936322201, 0.8358564096747072, 0.942169826126924, 0.967131589172387, 0.7785683188874377, 0.942506044201895, 0.3544242514524058, 0.0, 0.7247706422018348, 0.5044915351836923, 0.8273089178892802, 0.5630444261421442, 0.7399785788281565, 0.21738423517169614, 0.46725284186024263, 0.7218755768875762, 0.7280122150607375] | [nan, 0.9545620491089126, 0.9497321958018098, 0.9837544714508515, 0.9402501375924134, 0.9686463320401577, 0.9809467909731419, 0.8694886440908473, 0.9735407105395524, 0.3936199755387097, 0.0, 0.8558151824280856, 0.5906026695429419, 0.9157369138435157, 0.6097401660523865, 0.8630406290956749, 0.2679143956396281, 0.5182902566913956, 0.8517163268862171, 0.8205229733639949] |
| 0.8409 | 4.0 | 3148 | 0.2533 | 0.6749 | 0.7760 | 0.9559 | [0.0, 0.912375840411698, 0.904072054206276, 0.9676067299522242, 0.900289256120933, 0.9448264254043457, 0.9706472863960092, 0.7942658684379895, 0.9498265874428659, 0.5556284571729604, 0.0, 0.743214707471828, 0.529188361408882, 0.7269154778675782, 0.5697874335729916, 0.7702618169892564, 0.2288491765188273, 0.5089612784265519, 0.757448678510892, 0.7646070737475812] | [nan, 0.9601569621727435, 0.9525397945710891, 0.9830820784511696, 0.9462795897530819, 0.9732812778343284, 0.9810361205428978, 0.8895280837753298, 0.9743959070958451, 0.6854951638729194, 0.0, 0.8531327543424317, 0.5823783200755023, 0.9177828280607646, 0.6184135395216047, 0.8657506006989952, 0.26841535748637385, 0.5491586570344761, 0.8759801359121798, 0.8665306184609293] |
| 0.0655 | 5.0 | 3935 | 0.2164 | 0.6815 | 0.7909 | 0.9577 | [0.0, 0.9195724102825147, 0.8817887152896982, 0.9692666162636345, 0.90446655617651, 0.9477266300807918, 0.972197851990263, 0.8006212298550464, 0.9526181996158507, 0.48675750740382695, 0.0, 0.7544064333927534, 0.589975775752682, 0.8568833610473964, 0.5739430151581254, 0.7804109001873066, 0.2738491187715644, 0.46180522107696753, 0.7493122891746226, 0.754828899421902] | [nan, 0.9629768162749704, 0.9511904548979574, 0.9855793956741679, 0.9532853326979632, 0.9705567416728694, 0.9856702233410021, 0.9070277437780497, 0.9761803883026475, 0.7497090051817757, 0.0, 0.8653903593419723, 0.689564513954429, 0.9349779882164135, 0.6119830537374903, 0.9072670926168632, 0.3530779095864059, 0.5086786980626564, 0.8741215078120462, 0.8391483788434887] |
| 0.0568 | 6.0 | 4722 | 0.2803 | 0.6876 | 0.7839 | 0.9591 | [0.0, 0.9166100071412383, 0.913602419181271, 0.9710201737288663, 0.8563050555469198, 0.9497657746314072, 0.9730697054916811, 0.8143688646719719, 0.9549812903957364, 0.460486150973965, 0.0, 0.7634781269254467, 0.6136748147716002, 0.8542174198928293, 0.5922937831600485, 0.8066394260877113, 0.28399126278134795, 0.5207639813581891, 0.7629174644376197, 0.7438457521999924] | [nan, 0.9601927982852421, 0.9660710264704008, 0.982455068550298, 0.957830657460364, 0.9688535013815731, 0.9819961506837456, 0.893842649258806, 0.9749506995826178, 0.5071640856263331, 0.0, 0.8540977391783844, 0.7091141971147364, 0.9317785850902456, 0.653052819349169, 0.8880378986456968, 0.35953029817249116, 0.553305686470427, 0.862098507289307, 0.8895268263710157] |
| 0.8994 | 7.0 | 5509 | 0.2743 | 0.6868 | 0.7764 | 0.9606 | [0.0, 0.92180556388016, 0.9171201062365498, 0.9721111956032598, 0.8587950800137758, 0.9513526631552707, 0.9756092701000854, 0.819792597945916, 0.9576544961199075, 0.4512109977539036, 0.0, 0.7723053199691596, 0.61351217088922, 0.8696959538394335, 0.5947007494875557, 0.8068989910272162, 0.2400942828140323, 0.49048112386556714, 0.772383338067815, 0.7496112574696395] | [nan, 0.9644998510561574, 0.9609472275076806, 0.9854828942497743, 0.9565172529563908, 0.9753485051500238, 0.9840922427646661, 0.8947674418604651, 0.974328764760461, 0.49258184783186704, 0.0, 0.8630410807830162, 0.6660374814615073, 0.9410600831006661, 0.6446391486645419, 0.8876351572739187, 0.2796369028534787, 0.5232773027508334, 0.8685891851077423, 0.8883389427836073] |
| 0.0757 | 8.0 | 6296 | 0.2245 | 0.7038 | 0.8009 | 0.9625 | [0.0, 0.9246349181813107, 0.9204571437331909, 0.9735757462990084, 0.8677796689121399, 0.9529629595462734, 0.9762280475446855, 0.8249549577060494, 0.9591099123245741, 0.6276133447390932, 0.0, 0.7755030368136181, 0.6490189248809939, 0.8729206918730364, 0.598100700980074, 0.8000277974172574, 0.27374031814774713, 0.5049971433066432, 0.7770387696167466, 0.7981819415236415] | [nan, 0.964623037692871, 0.9637122903759715, 0.9863849456780516, 0.9537638293913148, 0.974798022498043, 0.985726579790157, 0.9184958520331837, 0.980103295010109, 0.7586190597174544, 0.0, 0.8624896608767576, 0.7536739921801268, 0.9379994558884956, 0.6446181625809385, 0.9037175076452599, 0.32931227957678744, 0.5392729877180727, 0.863477957832375, 0.8959383518876689] |
| 0.0638 | 9.0 | 7083 | 0.2660 | 0.7091 | 0.8064 | 0.9632 | [0.0, 0.9247942993361187, 0.9227547653133065, 0.9737952169757659, 0.8675395458562903, 0.954005651357167, 0.9771936329793919, 0.832432130071599, 0.960664758331238, 0.6439555818513429, 0.0, 0.7800093558353167, 0.6503190735050816, 0.8771838558892437, 0.6000063410406786, 0.8135397086825815, 0.29345229389108285, 0.5278915956856804, 0.7979207701237885, 0.7849771726504039] | [nan, 0.9696983271254734, 0.9626331855239437, 0.9865491477141318, 0.9580933383611586, 0.9736782563602464, 0.9877136372491695, 0.9107507139942881, 0.9774734570720269, 0.778129006717992, 0.0, 0.8715651135005974, 0.7419441822839423, 0.9522322311869326, 0.6453719127503574, 0.9070076998689384, 0.36183472266752165, 0.5638987382066087, 0.8882354649474357, 0.8850494190030915] |
| 0.1028 | 10.0 | 7870 | 0.2753 | 0.7045 | 0.7986 | 0.9632 | [0.0, 0.9310677916035094, 0.9231154731835156, 0.9742966471140867, 0.8659672807905657, 0.9548025101399095, 0.9761885400996432, 0.8359586760218701, 0.9606324687638941, 0.536304571449891, 0.0, 0.7861687315154533, 0.6648749707875672, 0.8782393648813203, 0.6028230645967004, 0.8034017821150734, 0.2798240884275797, 0.5292981433685788, 0.7976529535864979, 0.7897882016975595] | [nan, 0.9671696414372969, 0.9640722977320454, 0.9864307028133905, 0.9566418983913256, 0.9766712626661613, 0.984078186494131, 0.917516659866721, 0.9804665003157427, 0.5945275248601157, 0.0, 0.8886304108078301, 0.7671565322906836, 0.945889759711566, 0.6500072139662386, 0.9114992900830057, 0.33277893555626803, 0.5621391244374099, 0.8784050647615729, 0.9097665351872439] |
| 0.098 | 11.0 | 8657 | 0.2029 | 0.7052 | 0.8014 | 0.9640 | [0.0, 0.9288737885707921, 0.9265083379180753, 0.9747097980123621, 0.8738478537660755, 0.9558379241305062, 0.9781696214462526, 0.8391837240652649, 0.9626716931455067, 0.507780252899168, 0.0, 0.7878061172645057, 0.6769843155893536, 0.8815102118136605, 0.6056046400027283, 0.8269347543218291, 0.3132485690006253, 0.5154277002618235, 0.7927511930865472, 0.7569567975718071] | [nan, 0.9711631282238503, 0.964815472153087, 0.9853689377873769, 0.9652020663968313, 0.9754185940822899, 0.9867780413729902, 0.9206854345165238, 0.9811350296034029, 0.5495104787677182, 0.0, 0.8906350519253745, 0.7681677227989753, 0.9430888220810342, 0.65217140383783, 0.9110078090869376, 0.3914916639948702, 0.5500605696196935, 0.8924609397688331, 0.9267167202229566] |
| 0.0734 | 12.0 | 9444 | 0.2171 | 0.7126 | 0.8001 | 0.9648 | [0.0, 0.9309643707918894, 0.9277494647914695, 0.9750904306170505, 0.8777832954332417, 0.9566409475731096, 0.9780693213049435, 0.8436550838167809, 0.9635515941347027, 0.527304314900299, 0.0, 0.7909202018197202, 0.6909584834347133, 0.8836639196984207, 0.6084447805077513, 0.8287813112544289, 0.31069205419260343, 0.5403587067765045, 0.7955642033577429, 0.8211277996631356] | [nan, 0.9680901815771025, 0.9655377799057193, 0.9852963747008175, 0.9662340833391586, 0.9756774116913669, 0.9890014280908129, 0.9132224942200462, 0.9813789993824062, 0.5595195188097869, 0.0, 0.8697959746346843, 0.7887285964675745, 0.9477302580957196, 0.6557731404362482, 0.9149260048055919, 0.374058191728118, 0.5695666398450833, 0.8786809548701865, 0.8983598068927706] |
| 0.0839 | 13.0 | 10231 | 0.2606 | 0.7139 | 0.8056 | 0.9651 | [0.0, 0.932934590872574, 0.928599894716927, 0.9759876131918817, 0.8695983139625728, 0.9571779321732448, 0.979228463067019, 0.8446447574729073, 0.9630766038435438, 0.47072541703248466, 0.0, 0.7968195631480623, 0.6967972782731112, 0.8867456411969523, 0.6076684496270689, 0.8274634197517912, 0.3560522933191209, 0.5582305522639651, 0.8036840005319856, 0.8219356251968073] | [nan, 0.970161956830923, 0.9673467595439784, 0.9869340313021197, 0.9654732145230638, 0.9756083312329464, 0.9874815117348184, 0.9121141030871753, 0.9832381474966617, 0.50686275089071, 0.0, 0.8991361088135281, 0.8007954698665228, 0.9482970409127882, 0.6487891466970965, 0.9152673110528615, 0.4551538954793203, 0.5915043371384613, 0.8774612301794738, 0.914289630385453] |
| 0.0797 | 14.0 | 11018 | 0.2504 | 0.7153 | 0.8044 | 0.9655 | [0.0, 0.9353593794015038, 0.9288667661318105, 0.9762064564453578, 0.8718886319160292, 0.9576685946960725, 0.9788546612617008, 0.8472608735210976, 0.9642969355331718, 0.5361721760842425, 0.0, 0.8004189668257286, 0.696640611014977, 0.8853084044449696, 0.6099045788314064, 0.8344863725117123, 0.3254310344827586, 0.5323734971095841, 0.8050435956126539, 0.8204823185898129] | [nan, 0.9668112803123117, 0.9681903691382433, 0.9879581433175818, 0.9650443397090228, 0.9762644155033261, 0.9866578405548627, 0.9181626546987625, 0.9814820281384267, 0.5836381147080894, 0.0, 0.8844717856814631, 0.7870432789537549, 0.9470982093785038, 0.6547561898016377, 0.9131239078200087, 0.39335524206476435, 0.5610603662472479, 0.8835162920369403, 0.9243561823249014] |
| 0.0606 | 15.0 | 11805 | 0.2363 | 0.7209 | 0.8122 | 0.9661 | [0.0, 0.9354450021238048, 0.9300759788666999, 0.9766100423179009, 0.8739351769905989, 0.9580569741305669, 0.9795622398211299, 0.8496875639431477, 0.9646763306438436, 0.6043151650835981, 0.0, 0.8018012422360249, 0.7004677380666826, 0.889289794511031, 0.610767874342205, 0.8325289843013258, 0.33953698039089414, 0.5566040090865972, 0.7993623498974272, 0.8161583186067531] | [nan, 0.966786642984969, 0.965287953144928, 0.9879603875367537, 0.9664012618135025, 0.9766460508200225, 0.9889968302453108, 0.9177070583435333, 0.9825186826442273, 0.650711681743251, 0.0, 0.8897849462365591, 0.7874477551570715, 0.9497445698771078, 0.655411130494091, 0.9220183486238532, 0.42261141391471624, 0.5914689680174724, 0.8883080676075972, 0.9213864733563804] |
| 0.0532 | 16.0 | 12592 | 0.2531 | 0.7201 | 0.8074 | 0.9662 | [0.0, 0.9383203952011292, 0.9288414046194093, 0.9769141389017822, 0.8756205335515858, 0.9582358666094781, 0.979632260873732, 0.8522102747909199, 0.9655114623669192, 0.6115704722763623, 0.0, 0.8053745416448402, 0.7045095417527653, 0.8906375387790608, 0.6007837805741991, 0.8399368744136342, 0.33049747893639037, 0.5151462046865611, 0.8091001625973271, 0.8195206947575124] | [nan, 0.9678438083036752, 0.9684728717259394, 0.9879746009248427, 0.9684402878462824, 0.9766889829923047, 0.9883229174617107, 0.9215762273901809, 0.9820408723178519, 0.6655775287006565, 0.0, 0.8831104677878872, 0.7814480248078738, 0.9439503319629784, 0.6414396453351872, 0.9228033529925732, 0.40323420968259055, 0.5458428019417647, 0.8887436835685659, 0.9025173994487001] |
| 0.0862 | 17.0 | 13379 | 0.2458 | 0.7201 | 0.8087 | 0.9665 | [0.0, 0.9368370402512427, 0.9309393106006786, 0.9769932787053442, 0.8747985979138234, 0.95879411739136, 0.9800136137207117, 0.8526248910947767, 0.9651962916423883, 0.5741264468224503, 0.0, 0.8066815029500052, 0.7084107667406031, 0.8910943581653369, 0.6137487567405265, 0.843379759286757, 0.32885159559677446, 0.5243792475829478, 0.8126121336965911, 0.8231331714477782] | [nan, 0.9768073159423666, 0.9678409097683983, 0.9877789798203552, 0.9673405331004518, 0.977145821644341, 0.9876622727465598, 0.9216680266557867, 0.9832398839363699, 0.6213226822336585, 0.0, 0.8952934013417885, 0.7966158824322502, 0.946850198957944, 0.6577528276561605, 0.9188715050240279, 0.4028735171529336, 0.5553570954877843, 0.887857931114596, 0.9137413764220337] |
| 0.057 | 18.0 | 14166 | 0.2807 | 0.7169 | 0.8024 | 0.9665 | [0.0, 0.9391255338059006, 0.9316246290236013, 0.9771178536356643, 0.8736374236266327, 0.9587095139235466, 0.9802820999385629, 0.8534991833144867, 0.965491782119557, 0.5173244886677723, 0.0, 0.8079528780010615, 0.7036495460915129, 0.8919428858888571, 0.6128251272343798, 0.8423749359527112, 0.3030539267193167, 0.5387041043962495, 0.8154057368308808, 0.8249477907232359] | [nan, 0.9703254590941974, 0.967385397276143, 0.9883638482723315, 0.9660909281555922, 0.9783173801174915, 0.987878896953218, 0.9238406092751258, 0.9828454227159885, 0.5529433313441302, 0.0, 0.8918872346291701, 0.7785492786841041, 0.9525571866687186, 0.6544903660759959, 0.9202435561380515, 0.3583279897403014, 0.5679750294005819, 0.8882935470755648, 0.9144114645995461] |
| 0.27 | 19.0 | 14953 | 0.2799 | 0.7210 | 0.8089 | 0.9668 | [0.0, 0.9392661644355319, 0.932096490765189, 0.9772444850416163, 0.8748583460799624, 0.959030800837604, 0.9803660417493171, 0.8549763601588193, 0.9661359625948338, 0.5489573339508828, 0.0, 0.8082856800928263, 0.707609022556391, 0.8930480213758131, 0.6125057936760998, 0.8439663143164156, 0.3240623821315535, 0.5560068921314832, 0.813374539715939, 0.8289533147998521] | [nan, 0.9703971313191945, 0.9680462515437895, 0.9881404237858805, 0.9683475421909045, 0.9777759016962746, 0.988822374850258, 0.9210152318781449, 0.9816258632275899, 0.588252672130082, 0.0, 0.8922778237294366, 0.7930430093029527, 0.9508458460659089, 0.6517263239814098, 0.9221548711227611, 0.3959802821417121, 0.5906377936742327, 0.8980803856653308, 0.9218433516592297] |
| 0.0369 | 20.0 | 15740 | 0.2737 | 0.7224 | 0.8119 | 0.9668 | [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587] | [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AlexanderPeter/bert-finetuned-ner | c2048fe36cc7093997e70daef7478f7667562259 | 2022-06-01T19:56:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | AlexanderPeter | null | AlexanderPeter/bert-finetuned-ner | 20 | null | transformers | 8,428 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0593
- eval_precision: 0.9293
- eval_recall: 0.9485
- eval_f1: 0.9388
- eval_accuracy: 0.9858
- eval_runtime: 120.5431
- eval_samples_per_second: 26.97
- eval_steps_per_second: 3.376
- epoch: 2.0
- step: 3512
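
No usage example is included in the card; the snippet below is a minimal, non-authoritative sketch of how a CoNLL-2003 NER fine-tune is typically queried, assuming this checkpoint keeps the usual PER/ORG/LOC/MISC label set.

```python
from transformers import pipeline

# Sketch only: model id taken from this repository, label set assumed to follow CoNLL-2003.
ner = pipeline(
    "token-classification",
    model="AlexanderPeter/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Angela Merkel visited the Volkswagen plant in Wolfsburg."))
```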
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cpu
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ArthurZ/opt-13b | aabe4145fcf3def800c1e2f7b150d7b34a93ef2e | 2022-06-21T16:28:07.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers"
] | text-generation | false | ArthurZ | null | ArthurZ/opt-13b | 20 | null | transformers | 8,429 | Entry not found |
wvangils/GPT2-Beatles-Lyrics-finetuned-newlyrics | 41e357049d36f29e8d3ed2d68cbe3d8840271d60 | 2022-06-17T11:21:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | wvangils | null | wvangils/GPT2-Beatles-Lyrics-finetuned-newlyrics | 20 | null | transformers | 8,430 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: GPT2-Beatles-Lyrics-finetuned-newlyrics
results: []
---
# GPT2-Beatles-Lyrics-finetuned-newlyrics
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [Cmotions - Beatles lyrics](https://huggingface.co/datasets/cmotions/Beatles_lyrics) dataset. It will complete an input prompt with Beatles-like text.
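
A hedged usage sketch (not part of the original card) with the text-generation pipeline; the sampling settings below are illustrative choices, not values used during training.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="wvangils/GPT2-Beatles-Lyrics-finetuned-newlyrics")

completion = generator(
    "In my heart I know",
    max_length=60,   # prompt plus completion length
    do_sample=True,  # sample for more varied lyrics
    top_p=0.95,
)
print(completion[0]["generated_text"])
```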
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9259 | 1.0 | 35 | 1.6643 |
| 1.9188 | 2.0 | 70 | 1.6643 |
| 1.9725 | 3.0 | 105 | 1.6643 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mgfrantz/distilgpt2-finetuned-reddit-tifu | 73146d5c8c3195e2233491468ac3f683d3e7c78b | 2022-06-05T21:14:26.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"en",
"dataset:reddit_tifu (subset: short)",
"transformers",
"license:mit"
] | text-generation | false | mgfrantz | null | mgfrantz/distilgpt2-finetuned-reddit-tifu | 20 | null | transformers | 8,431 | ---
language:
- "en"
thumbnail: "https://styles.redditmedia.com/t5_2to41/styles/communityIcon_qedoavxzocr61.png?width=256&s=9c7c19b81474c3788279b8d6d6823e791d0524fc"
datasets:
- "reddit_tifu (subset: short)"
widget:
- text: "I told my friend"
license: mit
---
# mgfrantz/distilgpt2-finetuned-reddit-tifu
This model was trained as practice for fine-tuning a causal language model.
There was no intended use case for this model besides having some fun seeing how different things might be screwed up.
## Data
This model was trained on "short" subset of [`reddit_tifu`](https://huggingface.co/datasets/reddit_tifu) dataset.
The data was split into 90% train and 10% validation using `dataset.train_test_split`, with a seed of 0.
To prepare the data for training, the `"tldr"` and `"documents"` fields were joined by `"\n\n"`.
When multiple items were in the `"tldr"` or `"documents"` fields, only the first item was selected for joining.
These joined documents were tokenized using the `"distilgpt2"` tokenizer.
Finally, tokenized texts were concatenated end-to-end and split into blocks of 128 tokens.
**TODO:** Add a different separation token between documents that can be used to stop generation.
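
A minimal sketch of this preparation is shown below; the column names and block size come from the description above, while the helper names and the handling of multi-item fields are assumptions rather than the author's original code.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
block_size = 128

# 90/10 split with seed 0, as described above.
splits = load_dataset("reddit_tifu", "short")["train"].train_test_split(test_size=0.1, seed=0)

def join_fields(example):
    # Keep only the first item if a field holds a list, then join tldr and documents.
    tldr = example["tldr"][0] if isinstance(example["tldr"], list) else example["tldr"]
    docs = example["documents"][0] if isinstance(example["documents"], list) else example["documents"]
    return {"text": f"{tldr}\n\n{docs}"}

def tokenize(batch):
    return {"input_ids": tokenizer(batch["text"])["input_ids"]}

def group_texts(batch):
    # Concatenate all token ids end-to-end and cut them into 128-token blocks.
    ids = sum(batch["input_ids"], [])
    total = (len(ids) // block_size) * block_size
    blocks = [ids[i:i + block_size] for i in range(0, total, block_size)]
    return {"input_ids": blocks, "labels": [b[:] for b in blocks]}

columns = splits["train"].column_names
lm_dataset = (
    splits.map(join_fields)
          .map(tokenize, batched=True, remove_columns=columns + ["text"])
          .map(group_texts, batched=True)
)
```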
## Training
This model was trained in Colab by fine-tuning [`distilgpt2`](https://huggingface.co/distilgpt2) for 174390 steps (3 epochs).
Default training arguments were used, except for `learning_rate=2e-5` and `weight_decay=0.01`.
At the conclusion of training, a training loss of 3.52 and a validation loss of 3.44 were observed. |
intogen/legal-bert-qa | 4f56b385f34937568fcb57dd8c64e0e694141972 | 2022-06-08T19:49:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | intogen | null | intogen/legal-bert-qa | 20 | null | transformers | 8,432 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: legal-bert-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-qa
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2974
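
The card gives no inference example; as a non-authoritative sketch, the checkpoint can be queried through a question-answering pipeline (the question and context below are invented).

```python
from transformers import pipeline

qa = pipeline("question-answering", model="intogen/legal-bert-qa")

result = qa(
    question="Who is responsible for maintenance of the premises?",
    context=(
        "Under this agreement the tenant is responsible for routine maintenance "
        "of the premises, while structural repairs remain the landlord's duty."
    ),
)
print(result["answer"], result["score"])
```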
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5514 | 1.0 | 625 | 3.2106 |
| 1.1372 | 2.0 | 1250 | 4.5593 |
| 0.5365 | 3.0 | 1875 | 5.2974 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ehcalabres/distilgpt2-abc-irish-music-generation | 5a2a65102a661acde5a44dc6fccf88e55fdf1105 | 2022-06-08T12:10:23.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | ehcalabres | null | ehcalabres/distilgpt2-abc-irish-music-generation | 20 | null | transformers | 8,433 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-abc-irish-music-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-abc-irish-music-generation
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned | 87708927023840cd185a8eebc63f33e86fa4b4c0 | 2022-06-08T17:00:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ajtamayoh | null | ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned | 20 | null | transformers | 8,434 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Precision: 0.8961
- Recall: 0.7009
- F1: 0.7865
- Accuracy: 0.9898
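
No inference example is given; the sketch below shows one plausible way to run the checkpoint with the low-level API, reading the label names from the model config because the card does not list the entity tag set. The example sentence is invented.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_mBERT_cased_fine_tuned"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

text = "La paciente presenta fiebre y dolor abdominal desde hace tres días."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])
```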
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 94 | 0.0484 | 0.9002 | 0.6340 | 0.7440 | 0.9876 |
| No log | 2.0 | 188 | 0.0436 | 0.9095 | 0.6599 | 0.7649 | 0.9887 |
| No log | 3.0 | 282 | 0.0462 | 0.8545 | 0.7043 | 0.7722 | 0.9883 |
| No log | 4.0 | 376 | 0.0456 | 0.9058 | 0.6761 | 0.7743 | 0.9894 |
| No log | 5.0 | 470 | 0.0447 | 0.9194 | 0.6836 | 0.7841 | 0.9900 |
| 0.0426 | 6.0 | 564 | 0.0480 | 0.8917 | 0.7026 | 0.7859 | 0.9897 |
| 0.0426 | 7.0 | 658 | 0.0501 | 0.8961 | 0.7009 | 0.7865 | 0.9898 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
G-WOO/model_150mil-CodeBERTa-small-v1 | 8a760eefef9204c948b8d042c0b8e9c5063ee3d2 | 2022-06-09T03:25:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | G-WOO | null | G-WOO/model_150mil-CodeBERTa-small-v1 | 20 | null | transformers | 8,435 | Entry not found |
ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar | 89bd21d56d67fbe3e2b8c47d8c279f743695b43c | 2022-06-11T19:13:08.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"ar",
"abstractive summarization",
"xlsum",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar | 20 | null | transformers | 8,436 | ---
license: apache-2.0
tags:
- summarization
- t5
- ar
- abstractive summarization
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: t5-arabic-base-finetuned-xlsum-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-base-finetuned-xlsum-ar
This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0328
- Rouge-1: 23.72
- Rouge-2: 10.95
- Rouge-l: 21.59
- Gen Len: 19.0
- Bertscore: 71.81
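
A hedged usage sketch, not taken from the original card, with the summarization pipeline:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/t5-arabic-base-finetuned-xlsum-ar",
)

article = "..."  # an Arabic news article goes here
print(summarizer(article, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```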
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
enoriega/rule_learning_margin_1mm_spanpred | cf121f335471aaf3015e0a722850b7b6b87e4843 | 2022-06-15T00:55:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"model-index"
] | null | false | enoriega | null | enoriega/rule_learning_margin_1mm_spanpred | 20 | null | transformers | 8,437 | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_spanpred
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_spanpred
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Margin Accuracy: 0.8518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.5448 | 0.16 | 20 | 0.5229 | 0.7717 |
| 0.4571 | 0.32 | 40 | 0.4292 | 0.8109 |
| 0.4296 | 0.48 | 60 | 0.4009 | 0.8193 |
| 0.4028 | 0.64 | 80 | 0.3855 | 0.8296 |
| 0.3878 | 0.8 | 100 | 0.3757 | 0.8334 |
| 0.3831 | 0.96 | 120 | 0.3643 | 0.8367 |
| 0.3591 | 1.12 | 140 | 0.3582 | 0.8393 |
| 0.3598 | 1.28 | 160 | 0.3533 | 0.8401 |
| 0.3635 | 1.44 | 180 | 0.3442 | 0.8427 |
| 0.3478 | 1.6 | 200 | 0.3406 | 0.8472 |
| 0.342 | 1.76 | 220 | 0.3352 | 0.8479 |
| 0.3327 | 1.92 | 240 | 0.3352 | 0.8486 |
| 0.3487 | 2.08 | 260 | 0.3293 | 0.8487 |
| 0.3387 | 2.24 | 280 | 0.3298 | 0.8496 |
| 0.3457 | 2.4 | 300 | 0.3279 | 0.8505 |
| 0.3483 | 2.56 | 320 | 0.3286 | 0.8510 |
| 0.3421 | 2.72 | 340 | 0.3245 | 0.8517 |
| 0.3332 | 2.88 | 360 | 0.3252 | 0.8517 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
QCRI/bert-base-cased-pos | be4f4e2ad204c84d793fc236381c7a15021ce26c | 2022-06-13T05:50:38.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | token-classification | false | QCRI | null | QCRI/bert-base-cased-pos | 20 | null | transformers | 8,438 | ---
license: cc-by-nc-4.0
---
|
ml6team/keyphrase-extraction-kbir-semeval2017 | 764f17880ce95ebbe7edbb624d5ef0c0cbae38c5 | 2022-06-16T18:29:41.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"dataset:midas/semeval2017",
"arxiv:2112.08547",
"arxiv:1704.02853",
"transformers",
"keyphrase-extraction",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ml6team | null | ml6team/keyphrase-extraction-kbir-semeval2017 | 20 | null | transformers | 8,439 | ---
language: en
license: mit
tags:
- keyphrase-extraction
datasets:
- midas/semeval2017
metrics:
- seqeval
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks."
example_title: "Example 2"
model-index:
- name: ml6team/keyphrase-extraction-kbir-semeval2017
results:
- task:
type: keyphrase-extraction
name: Keyphrase Extraction
dataset:
type: midas/semeval2017
name: semeval2017
metrics:
- type: F1 (Seqeval)
value: 0.000
name: F1 (Seqeval)
- type: F1@M
value: 0.401
name: F1@M
---
# 🔑 Keyphrase Extraction Model: KBIR-semeval2017
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [semeval2017 dataset](https://huggingface.co/datasets/midas/semeval2017). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC).
You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547).
Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not.
| Label | Description |
| ----- | ------------------------------- |
| B-KEY | At the beginning of a keyphrase |
| I-KEY | Inside a keyphrase |
| O | Outside a keyphrase |
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* This keyphrase extraction model is very domain-specific and will perform very well on abstracts of scientific articles. It's not recommended to use this model for other domains, but you are free to test it out.
* Limited amount of predicted keyphrases.
* Only works for English documents.
* For a custom model, please consult the [training notebook]() for more information.
### ❓ How To Use
```python
from transformers import (
TokenClassificationPipeline,
AutoModelForTokenClassification,
AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np
# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
def __init__(self, model, *args, **kwargs):
super().__init__(
model=AutoModelForTokenClassification.from_pretrained(model),
tokenizer=AutoTokenizer.from_pretrained(model),
*args,
**kwargs
)
def postprocess(self, model_outputs):
results = super().postprocess(
model_outputs=model_outputs,
aggregation_strategy=AggregationStrategy.SIMPLE,
)
return np.unique([result.get("word").strip() for result in results])
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-extraction-kbir-semeval2017"
extractor = KeyphraseExtractionPipeline(model=model_name)
```
```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = extractor(text)
print(keyphrases)
```
```
# Output
['artificial intelligence']
```
## 📚 Training Dataset
[Semeval2017](https://huggingface.co/datasets/midas/semeval2017) is a keyphrase extraction/generation dataset consisting of 500 English scientific paper abstracts from ScienceDirect open access publications. The selected articles were evenly distributed among the domains of Computer Science, Material Sciences and Physics. Each paper has a set of keyphrases annotated by student volunteers and was double-annotated, with the second annotation done by an expert annotator.
You can find more information in the [paper](https://arxiv.org/abs/1704.02853).
## 👷♂️ Training procedure
For more in detail information, you can take a look at the [training notebook]().
### Training parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 1e-4 |
| Epochs | 50 |
| Early Stopping Patience | 3 |
### Preprocessing
The documents in the dataset are already preprocessed into lists of words with the corresponding labels. All that remains is tokenization and realignment of the labels so that they correspond to the right subword tokens.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Labels
label_list = ["B", "I", "O"]
lbl2idx = {"B": 0, "I": 1, "O": 2}
idx2label = {0: "B", 1: "I", 2: "O"}
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR")
max_length = 512
# Dataset parameters
dataset_full_name = "midas/semeval2017"
dataset_subset = "raw"
dataset_document_column = "document"
dataset_biotags_column = "doc_bio_tags"
def preprocess_function(all_samples_per_split):
tokenized_samples = tokenizer.batch_encode_plus(
all_samples_per_split[dataset_document_column],
padding="max_length",
truncation=True,
is_split_into_words=True,
max_length=max_length,
)
total_adjusted_labels = []
for k in range(0, len(tokenized_samples["input_ids"])):
prev_wid = -1
word_ids_list = tokenized_samples.word_ids(batch_index=k)
existing_label_ids = all_samples_per_split[dataset_biotags_column][k]
i = -1
adjusted_label_ids = []
for wid in word_ids_list:
if wid is None:
adjusted_label_ids.append(lbl2idx["O"])
elif wid != prev_wid:
i = i + 1
adjusted_label_ids.append(lbl2idx[existing_label_ids[i]])
prev_wid = wid
else:
adjusted_label_ids.append(
lbl2idx[
f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}"
]
)
total_adjusted_labels.append(adjusted_label_ids)
tokenized_samples["labels"] = total_adjusted_labels
return tokenized_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True)
```
### Postprocessing (Without Pipeline Function)
If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed.
```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
keyphrase_tokens = []
for id, label in keyphrases:
if label == "B":
keyphrase_tokens.append([id])
elif label == "I":
if len(keyphrase_tokens) > 0:
keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
return keyphrase_tokens
def extract_keyphrases(example, predictions, tokenizer, index=0):
keyphrases_list = [
(id, idx2label[label])
for id, label in zip(
np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
)
if idx2label[label] in ["B", "I"]
]
processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
extracted_kps = tokenizer.batch_decode(
processed_keyphrases,
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
return np.unique([kp.strip() for kp in extracted_kps])
```
## 📝 Evaluation Results
Traditional evaluation methods are precision, recall and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases.
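As an illustration of these metrics (a minimal sketch, not the exact script behind the scores reported below), P@k, R@k and F1@k for a single document can be computed as:

```python
def precision_recall_f1_at_k(predicted, gold, k=None):
    """Exact-match precision/recall/F1 over the first k predicted keyphrases (all of them if k is None)."""
    preds = set(p.lower() for p in (predicted[:k] if k is not None else predicted))
    gold = set(g.lower() for g in gold)
    matched = len(preds & gold)
    precision = matched / len(preds) if preds else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1_at_k(
    ["keyphrase extraction", "deep learning", "text analysis"],
    ["keyphrase extraction", "artificial intelligence"],
    k=5,
))
```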
The model achieves the following results on the Semeval2017 test set:
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
|:---------------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| Semeval2017 Test Set | 0.41 | 0.20 | 0.25 | 0.37 | 0.34 | 0.34 | 0.36 | 0.50 | 0.40 |
For more information on the evaluation process, you can take a look at the keyphrase extraction [evaluation notebook]().
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |
Shikenrua/distilbert-base-uncased-finetuned-emotion | b102f0080eafc39dc89f3632b6fb5b37222c6ae9 | 2022-06-29T04:46:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Shikenrua | null | Shikenrua/distilbert-base-uncased-finetuned-emotion | 20 | null | transformers | 8,440 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
RayMelius/bert-finetuned-ner | c3b99a3542b68aeae1c5403128748d25fb7c28d7 | 2022-06-17T16:06:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | RayMelius | null | RayMelius/bert-finetuned-ner | 20 | null | transformers | 8,441 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-japanese-unidic-ud-head | 3169ed622fe3b4d0e89b4969c9048e02c0154b9a | 2022-07-20T03:51:55.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-unidic-ud-head | 20 | null | transformers | 8,442 | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-base-japanese-unidic-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on 青空文庫 for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-base-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-unidic) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once in the sentence.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-unidic-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-unidic-ud-head")
question="国語"
context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs=tokenizer(question,context,return_tensors="pt")
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0,start:end+1]))
```
or
```py
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
class TaggerPipeline(TokenClassificationPipeline):
def __call__(self,text):
d=super().__call__(text)
if len(d)>0 and ("start" not in d[0] or d[0]["start"]==None):
import tokenizations
v=[x["word"].replace(" ","") for x in d]
a2b,b2a=tokenizations.get_alignments(v,text)
for i,t in enumerate(a2b):
s,e=(0,0) if t==[] else (t[0],t[-1]+1)
if v[i].startswith(self.tokenizer.unk_token):
s=([[-1]]+[x for x in a2b[0:i] if x>[]])[-1][-1]+1
if v[i].endswith(self.tokenizer.unk_token):
e=([x for x in a2b[i+1:] if x>[]]+[[len(text)]])[0][0]
d[i]["start"],d[i]["end"]=s,e
return d
class TransformersSlowUD(object):
def __init__(self,bert):
import os
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TaggerPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TaggerPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersSlowUD("KoichiYasuoka/deberta-base-japanese-unidic-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
[fugashi](https://pypi.org/project/fugashi) [unidic-lite](https://pypi.org/project/unidic-lite) [pytokenizations](https://pypi.org/project/pytokenizations) and [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/) required.
|
AlekseyKorshuk/results-gpt-j-lit-erotic | 2e668432c90f9da811b6107777860b6e10d7d5eb | 2022-06-18T13:09:31.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/results-gpt-j-lit-erotic | 20 | null | transformers | 8,443 | Entry not found |
Chemsseddine/bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum | 8b2ad42c31f5ae023dac3d304dcc92b7bc7eb857 | 2022-06-30T18:42:50.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:orange_sum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chemsseddine | null | Chemsseddine/bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum | 20 | null | transformers | 8,444 | ---
tags:
- generated_from_trainer
datasets:
- orange_sum
metrics:
- rouge
model-index:
- name: bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: orange_sum
type: orange_sum
args: abstract
metrics:
- name: Rouge1
type: rouge
value: 24.949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/Chemsseddine/bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new/resolve/main/logobert2gpt2.png" alt="Map of positive probabilities per country." width="200"/>
# bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum
This model is a fine-tuned version of [Chemsseddine/bert2gpt2SUMM-finetuned-mlsum](https://huggingface.co/Chemsseddine/bert2gpt2SUMM-finetuned-mlsum) on the orange_sum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1773
- Rouge1: 24.949
- Rouge2: 7.851
- Rougel: 18.1575
- Rougelsum: 18.4114
- Gen Len: 39.7947
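The card does not show how to call the model; the following is a sketch of the usual way an encoder-decoder checkpoint is loaded. It assumes the repository ships a tokenizer loadable with `AutoTokenizer` and that `decoder_start_token_id` is set in the config, which may not hold for every BERT-to-GPT-2 setup.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

name = "Chemsseddine/bert2gpt2SUMM-finetuned-mlsum-finetuned-mlorange_sum"
tokenizer = AutoTokenizer.from_pretrained(name)
model = EncoderDecoderModel.from_pretrained(name)

article = "..."  # a French news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs["input_ids"], max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```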
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 3.5484 | 1.0 | 1338 | 3.1773 | 24.949 | 7.851 | 18.1575 | 18.4114 | 39.7947 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KoichiYasuoka/roberta-large-japanese-aozora-ud-head | e716b16b458b5e9d604ee940bab030fbb8248fa1 | 2022-07-20T03:52:24.000Z | [
"pytorch",
"roberta",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/roberta-large-japanese-aozora-ud-head | 20 | null | transformers | 8,445 | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# roberta-large-japanese-aozora-ud-head
## Model Description
This is a RoBERTa model pretrained on 青空文庫 for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [roberta-large-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-char) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once in the sentence.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/roberta-large-japanese-aozora-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
|
romainlhardy/roberta-large-finetuned-ner | 131ae59ac84058dabcaa3dbe2b4b4ccde28315d6 | 2022-06-26T09:20:58.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | romainlhardy | null | romainlhardy/roberta-large-finetuned-ner | 20 | null | transformers | 8,446 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9476811355009077
- name: Recall
type: recall
value: 0.9663412992258499
- name: F1
type: f1
value: 0.9569202566452795
- name: Accuracy
type: accuracy
value: 0.990656929827253
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-ner
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0495
- Precision: 0.9477
- Recall: 0.9663
- F1: 0.9569
- Accuracy: 0.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.078 | 1.0 | 1756 | 0.0577 | 0.9246 | 0.9536 | 0.9389 | 0.9865 |
| 0.0382 | 2.0 | 3512 | 0.0528 | 0.9414 | 0.9620 | 0.9516 | 0.9890 |
| 0.021 | 3.0 | 5268 | 0.0495 | 0.9477 | 0.9663 | 0.9569 | 0.9907 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jacquelinehe/anonymized-model | 12d2159f1ce59fabca83b7721670c721c96c45c1 | 2022-06-27T05:58:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | jacquelinehe | null | jacquelinehe/anonymized-model | 20 | null | transformers | 8,447 | ---
license: apache-2.0
---
|
Rahulrr/language_model_en_de | a5d0d22f266e314bc737a94d496d8281a9045a34 | 2022-06-27T10:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Rahulrr | null | Rahulrr/language_model_en_de | 20 | null | transformers | 8,448 | ---
language:
- en
- de
tags:
- translation
license: apache-2.0
---
### en-de
* source group: English
* target group: German
* OPUS readme: [eng-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-deu/README.md)
* model: transformer-big
* source language(s): eng
* target language(s): deu
* raw source language(s): eng
* raw target language(s): deu
* model: transformer-big
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-12-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip)
* test set translations: [opusTCv20210807+bt-2021-12-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt)
* test set scores: [opusTCv20210807+bt-2021-12-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newssyscomb2009.eng-deu | 24.3 | 0.5462 | 502 | 11271 | 0.993 |
| news-test2008.eng-deu | 24.7 | 0.5412 | 2051 | 47427 | 1.000 |
| newstest2009.eng-deu | 23.6 | 0.5385 | 2525 | 62816 | 0.999 |
| newstest2010.eng-deu | 26.9 | 0.5589 | 2489 | 61511 | 0.966 |
| newstest2011.eng-deu | 24.1 | 0.5364 | 3003 | 72981 | 0.990 |
| newstest2012.eng-deu | 24.6 | 0.5375 | 3003 | 72886 | 0.972 |
| newstest2013.eng-deu | 28.3 | 0.5636 | 3000 | 63737 | 0.988 |
| newstest2014-deen.eng-deu | 30.9 | 0.6084 | 3003 | 62964 | 1.000 |
| newstest2015-ende.eng-deu | 33.2 | 0.6106 | 2169 | 44260 | 1.000 |
| newstest2016-ende.eng-deu | 39.8 | 0.6595 | 2999 | 62670 | 0.993 |
| newstest2017-ende.eng-deu | 32.0 | 0.6047 | 3004 | 61291 | 1.000 |
| newstest2018-ende.eng-deu | 48.8 | 0.7146 | 2998 | 64276 | 1.000 |
| newstest2019-ende.eng-deu | 45.0 | 0.6821 | 1997 | 48969 | 0.995 |
| Tatoeba-test-v2021-08-07.eng-deu | 43.7 | 0.6442 | 10000 | 85728 | 1.000 |
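
The original OPUS card documents the weights and scores but not how to call the converted checkpoint from transformers; a minimal sketch, assuming the Marian classes work with this repository, is:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Rahulrr/language_model_en_de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```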
### System Info:
- hf_name: en-de
- source_languages: eng
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'de']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('German', {'deu'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-deu
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt
- src_alpha3: eng
- tgt_alpha3: deu
- chrF2_score: 0.6442
- bleu: 43.7
- src_name: English
- tgt_name: German
- train_date: 2021-12-08 00:00:00
- src_alpha2: en
- tgt_alpha2: de
- prefer_old: False
- short_pair: en-de
- helsinki_git_sha: c4e978d8de47875b482653b423dcfe968979d7d5
- transformers_git_sha: 56b83cf049823ed074a655eceb28f31e2077c6eb
- port_machine: LAPIN4GLQ2G3
- port_time: 2022-06-27-16:10 |
barthfab/drugprot | 25f9807992a611b6602a27aee4df4338cf16cb53 | 2022-07-19T14:11:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | barthfab | null | barthfab/drugprot | 20 | null | transformers | 8,449 | Entry not found |
zunicd/finetuning-sentiment-model-3000-samples | 27080e6c6c19f8b79e97452788b6cace0452e421 | 2022-06-28T18:12:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | zunicd | null | zunicd/finetuning-sentiment-model-3000-samples | 20 | null | transformers | 8,450 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3349
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
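Until the card is filled in, here is a minimal usage sketch with the 🤗 `pipeline` API; the review text is an illustrative assumption, and the exact label names depend on the fine-tuning configuration.

```python
from transformers import pipeline

# Sentiment classifier fine-tuned on IMDB movie reviews
classifier = pipeline(
    "text-classification",
    model="zunicd/finetuning-sentiment-model-3000-samples",
)

print(classifier("A surprisingly touching film with great performances."))
```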
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Neha2608/distilbert-base-uncased-finetuned-emotion | 5b459baf80167c2b604506be270ae357ef779ba8 | 2022-07-30T09:43:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Neha2608 | null | Neha2608/distilbert-base-uncased-finetuned-emotion | 20 | null | transformers | 8,451 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9184567794520658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9185
- F1: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
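In the meantime, a minimal usage sketch with the 🤗 `pipeline` API is given below; the example sentence is illustrative, and the emotion label names come from the `emotion` dataset configuration.

```python
from transformers import pipeline

# Emotion classifier fine-tuned on the "emotion" dataset
classifier = pipeline(
    "text-classification",
    model="Neha2608/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see my friends this weekend!"))
```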
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
| 0.8026 | 1.0 | 250 | 0.3114 | 0.905 | 0.9035 |
| 0.2409 | 2.0 | 500 | 0.2207 | 0.9185 | 0.9185 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
FabianWillner/bert-base-uncased-finetuned-squad | 8328dcbb5aad945ad4fc0557dc83169e94e11ec1 | 2022-06-29T14:46:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | FabianWillner | null | FabianWillner/bert-base-uncased-finetuned-squad | 20 | null | transformers | 8,452 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0106
## Model description
More information needed
## Intended uses & limitations
More information needed
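Until the card is filled in, here is a minimal extractive question-answering sketch with the 🤗 `pipeline` API; the question and context are illustrative assumptions.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="FabianWillner/bert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result)  # dict with 'score', 'start', 'end' and 'answer'
```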
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0626 | 1.0 | 5533 | 1.0308 |
| 0.8157 | 2.0 | 11066 | 1.0106 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
TheDiamondKing/Discord-Message-Small | f62af9cbf3df78e4873272eb27ccb45e149bd98b | 2022-06-29T21:06:10.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | TheDiamondKing | null | TheDiamondKing/Discord-Message-Small | 20 | null | transformers | 8,453 | ---
license: mit
---
A simple model trained on 2,790 Discord messages.
(It may produce some NSFW responses.)
|
clevrly/distilbert-base-uncased-finetuned-hotpot_qa | 25947b9cc5b2ac6ffc8eec4313283e9c7852c4eb | 2022-06-30T18:12:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | clevrly | null | clevrly/distilbert-base-uncased-finetuned-hotpot_qa | 20 | null | transformers | 8,454 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-hotpot_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hotpot_qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2565
## Model description
More information needed
## Intended uses & limitations
More information needed
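In the meantime, a minimal usage sketch with the 🤗 question-answering `pipeline` is shown below; the question and context are illustrative assumptions.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="clevrly/distilbert-base-uncased-finetuned-hotpot_qa",
)

print(qa(
    question="Where was the observatory built?",
    context="The observatory was built on Mauna Kea in Hawaii and began operating in 1993.",
))
```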
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1396 | 1.0 | 2572 | 1.0405 |
| 0.8396 | 2.0 | 5144 | 0.9299 |
| 0.6253 | 3.0 | 7716 | 1.0625 |
| 0.4584 | 4.0 | 10288 | 1.1290 |
| 0.3432 | 5.0 | 12860 | 1.2565 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ubikpt/t5-small-finetuned-cnn-v2 | a0be88a1dd4cb4e76ae49337445f0c08e8a51493 | 2022-07-01T03:15:22.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | ubikpt | null | ubikpt/t5-small-finetuned-cnn-v2 | 20 | null | transformers | 8,455 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 35.154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5474
- Rouge1: 35.154
- Rouge2: 18.683
- Rougel: 30.8481
- Rougelsum: 32.9638
## Model description
More information needed
## Intended uses & limitations
More information needed
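Until the card is filled in, here is a minimal summarization sketch with the 🤗 `pipeline` API; the article text is an illustrative assumption, and whether a `summarize:` prefix is needed depends on how the checkpoint's generation config was saved.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ubikpt/t5-small-finetuned-cnn-v2")

article = (
    "The city council approved a new budget on Tuesday that increases funding "
    "for public transit and road repairs while keeping property taxes flat. "
    "Officials said the plan reflects feedback gathered at community meetings."
)
print(summarizer(article, max_length=60, min_length=10))
```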
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8823 | 1.0 | 35890 | 1.5878 | 34.9676 | 18.4927 | 30.6753 | 32.7702 |
| 1.7871 | 2.0 | 71780 | 1.5709 | 34.9205 | 18.5556 | 30.6514 | 32.745 |
| 1.7507 | 3.0 | 107670 | 1.5586 | 34.9825 | 18.4964 | 30.6724 | 32.7644 |
| 1.7253 | 4.0 | 143560 | 1.5584 | 35.074 | 18.6171 | 30.8007 | 32.9132 |
| 1.705 | 5.0 | 179450 | 1.5528 | 35.023 | 18.5787 | 30.7014 | 32.8396 |
| 1.6894 | 6.0 | 215340 | 1.5518 | 35.0583 | 18.6754 | 30.791 | 32.8814 |
| 1.6776 | 7.0 | 251230 | 1.5468 | 35.2236 | 18.6812 | 30.8944 | 33.0362 |
| 1.6687 | 8.0 | 287120 | 1.5474 | 35.154 | 18.683 | 30.8481 | 32.9638 |
### Framework versions
- Transformers 4.14.0
- Pytorch 1.5.0
- Datasets 2.3.2
- Tokenizers 0.10.3
|
sudo-s/new_exper3 | 1e38cb4be2501c0c085866362bc5104ab0cc8efa | 2022-06-30T21:19:12.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/new_exper3 | 20 | null | transformers | 8,456 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: new_exper3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_exper3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3000
- Accuracy: 0.9298
## Model description
More information needed
## Intended uses & limitations
More information needed
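In the meantime, a minimal image-classification sketch with the 🤗 `pipeline` API is given below; the file name is a placeholder, so point it at any local image or URL from the target domain.

```python
from transformers import pipeline

# ViT classifier fine-tuned on the sudo-s/herbier_mesuem1 dataset
classifier = pipeline("image-classification", model="sudo-s/new_exper3")

# "example_specimen.jpg" is a placeholder path, not a file shipped with the model
print(classifier("example_specimen.jpg"))
```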
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.093 | 0.16 | 100 | 4.1045 | 0.1885 |
| 3.5057 | 0.31 | 200 | 3.4448 | 0.3231 |
| 2.9116 | 0.47 | 300 | 2.9483 | 0.4537 |
| 2.561 | 0.63 | 400 | 2.5700 | 0.5258 |
| 2.1611 | 0.78 | 500 | 2.1721 | 0.6145 |
| 1.715 | 0.94 | 600 | 1.8255 | 0.6407 |
| 1.2752 | 1.1 | 700 | 1.5340 | 0.7051 |
| 1.2487 | 1.25 | 800 | 1.3533 | 0.7201 |
| 1.0333 | 1.41 | 900 | 1.1474 | 0.7826 |
| 0.8856 | 1.56 | 1000 | 1.0914 | 0.7645 |
| 0.7512 | 1.72 | 1100 | 0.8893 | 0.8119 |
| 0.747 | 1.88 | 1200 | 0.8370 | 0.8304 |
| 0.5082 | 2.03 | 1300 | 0.7131 | 0.8566 |
| 0.4449 | 2.19 | 1400 | 0.6573 | 0.8547 |
| 0.2912 | 2.35 | 1500 | 0.6184 | 0.8597 |
| 0.285 | 2.5 | 1600 | 0.5974 | 0.8570 |
| 0.2267 | 2.66 | 1700 | 0.5621 | 0.8647 |
| 0.2553 | 2.82 | 1800 | 0.5044 | 0.8816 |
| 0.2029 | 2.97 | 1900 | 0.4342 | 0.8955 |
| 0.1763 | 3.13 | 2000 | 0.4487 | 0.8905 |
| 0.1418 | 3.29 | 2100 | 0.4173 | 0.9005 |
| 0.0563 | 3.44 | 2200 | 0.3870 | 0.9048 |
| 0.0579 | 3.6 | 2300 | 0.3849 | 0.9036 |
| 0.166 | 3.76 | 2400 | 0.3933 | 0.9025 |
| 0.11 | 3.91 | 2500 | 0.3918 | 0.9056 |
| 0.0356 | 4.07 | 2600 | 0.3298 | 0.9202 |
| 0.0513 | 4.23 | 2700 | 0.3371 | 0.9210 |
| 0.0762 | 4.38 | 2800 | 0.3253 | 0.9225 |
| 0.018 | 4.54 | 2900 | 0.3467 | 0.9148 |
| 0.0263 | 4.69 | 3000 | 0.3544 | 0.9144 |
| 0.0205 | 4.85 | 3100 | 0.3340 | 0.9221 |
| 0.0237 | 5.01 | 3200 | 0.3353 | 0.9144 |
| 0.013 | 5.16 | 3300 | 0.3218 | 0.9229 |
| 0.0116 | 5.32 | 3400 | 0.3088 | 0.9291 |
| 0.0119 | 5.48 | 3500 | 0.3047 | 0.9279 |
| 0.0098 | 5.63 | 3600 | 0.3063 | 0.9283 |
| 0.0086 | 5.79 | 3700 | 0.3074 | 0.9268 |
| 0.0081 | 5.95 | 3800 | 0.3220 | 0.9237 |
| 0.0078 | 6.1 | 3900 | 0.3064 | 0.9268 |
| 0.0074 | 6.26 | 4000 | 0.3062 | 0.9279 |
| 0.0068 | 6.42 | 4100 | 0.3051 | 0.9291 |
| 0.006 | 6.57 | 4200 | 0.3000 | 0.9298 |
| 0.0075 | 6.73 | 4300 | 0.3010 | 0.9310 |
| 0.0057 | 6.89 | 4400 | 0.3037 | 0.9298 |
| 0.0058 | 7.04 | 4500 | 0.3071 | 0.9279 |
| 0.0075 | 7.2 | 4600 | 0.3075 | 0.9283 |
| 0.0066 | 7.36 | 4700 | 0.3077 | 0.9295 |
| 0.0056 | 7.51 | 4800 | 0.3084 | 0.9295 |
| 0.0053 | 7.67 | 4900 | 0.3064 | 0.9310 |
| 0.0057 | 7.82 | 5000 | 0.3068 | 0.9318 |
| 0.0055 | 7.98 | 5100 | 0.3068 | 0.9318 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.5.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Tritkoman/EN-ROM | 7afd0b47a0d6951f6888edaa5cb0d9e2ef3d7c14 | 2022-07-01T06:07:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"hi",
"dataset:Tritkoman/autotrain-data-rusynpann",
"transformers",
"autotrain",
"translation",
"co2_eq_emissions",
"autotrain_compatible"
] | translation | false | Tritkoman | null | Tritkoman/EN-ROM | 20 | null | transformers | 8,457 | ---
tags:
- autotrain
- translation
language:
- en
- hi
datasets:
- Tritkoman/autotrain-data-rusynpann
co2_eq_emissions: 30.068537136776726
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1066237031
- CO2 Emissions (in grams): 30.068537136776726
## Validation Metrics
- Loss: 2.461327075958252
- SacreBLEU: 13.8452
- Gen len: 13.2313 |
yuningm/bart-large-citesum-title | c9290ab76e7fcfbd2a87a1ac6faaddcfc25ae4fd | 2022-07-08T20:53:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:yuningm/citesum",
"arxiv:2205.06207",
"transformers",
"summarization",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | summarization | false | yuningm | null | yuningm/bart-large-citesum-title | 20 | 1 | transformers | 8,458 | ---
license: cc-by-nc-4.0
language: en
tags:
- summarization
datasets:
- yuningm/citesum
widget:
- text: "Abstract-This paper presents a control strategy that allows a group of mobile robots to position themselves to optimize the measurement of sensory information in the environment. The robots use sensed information to estimate a function indicating the relative importance of different areas in the environment. Their estimate is then used to drive the network to a desirable placement configuration using a computationally simple decentralized control law. We formulate the problem, provide a practical control solution, and present the results of numerical simulations. We then discuss experiments carried out on a swarm of mobile robots."
example_title: "Networked Robots"
- text: "Abstract. In this paper, a Bayesian method for face recognition is proposed based on Markov Random Fields (MRF) modeling. Constraints on image features as well as contextual relationships between them are explored and encoded into a cost function derived based on a statistical model of MRF. Gabor wavelet coefficients are used as the base features, and relationships between Gabor features at different pixel locations are used to provide higher order contextual constraints. The posterior probability of matching configuration is derived based on MRF modeling. Local search and discriminate analysis are used to evaluate local matches, and a contextual constraint is applied to evaluate mutual matches between local matches. The proposed MRF method provides a new perspective for modeling the face recognition problem. Experiments demonstrate promising results."
example_title: "Bayesian Face Recognition"
- text: "Abstract One of the most relevant applications of digital image forensics is to accurately identify the device used for taking a given set of images, a problem called source identification. This paper studies recent developments in the field and proposes the mixture of two techniques (Sensor Imperfections and Wavelet Transforms) to get better source identification of images generated with mobile devices. Our results show that Sensor Imperfections and Wavelet Transforms can jointly serve as good forensic features to help trace the source camera of images produced by mobile phones. Furthermore, the model proposed here can also determine with high precision both the brand and model of the device."
example_title: "Source identification for mobile devices"
---
# Bart-Large CiteSum (Titles)
This is facebook/bart-large fine-tuned on CiteSum. The "src" column is the input and the "title" column is the target summary.
## Authors
### Yuning Mao, Ming Zhong, Jiawei Han
#### University of Illinois Urbana-Champaign
{yuningm2, mingz5, hanj}@illinois.edu
## Results
```
{
"epoch": 6.78,
"eval_gen_len": 17.1775,
"eval_loss": 1.9626615047454834,
"eval_rouge1": 51.4834,
"eval_rouge2": 29.9178,
"eval_rougeL": 45.4882,
"eval_rougeLsum": 45.517,
"eval_runtime": 351.9638,
"eval_samples": 4681,
"eval_samples_per_second": 13.3,
"eval_steps_per_second": 0.21,
"predict_gen_len": 17.1032,
"predict_loss": 1.9391602277755737,
"predict_rouge1": 52.0304,
"predict_rouge2": 30.1511,
"predict_rougeL": 45.9902,
"predict_rougeLsum": 46.0068,
"predict_runtime": 363.9691,
"predict_samples": 4882,
"predict_samples_per_second": 13.413,
"predict_steps_per_second": 0.212,
"train_loss": 1.0821667497907366,
"train_runtime": 24401.3762,
"train_samples": 82653,
"train_samples_per_second": 65.57,
"train_steps_per_second": 8.196
}
```
## Dataset Description
CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation.
CiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR.
## Homepage
https://github.com/morningmoni/CiteSum
## Paper
https://arxiv.org/abs/2205.06207
## Dataset on Hub
https://huggingface.co/datasets/nbroad/citesum
## How to use the model
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="yuningm/bart-large-citesum-title")
article = ''' We describe a convolutional neural network that learns\
feature representations for short textual posts using hashtags as a\
supervised signal. The proposed approach is trained on up to 5.5 \
billion words predicting 100,000 possible hashtags. As well as strong\
performance on the hashtag prediction task itself, we show that its \
learned representation of text (ignoring the hashtag labels) is useful\
for other tasks as well. To that end, we present results on a document\
recommendation task, where it also outperforms a number of baselines.
'''
summarizer(article)
# [{'summary_text': 'Learning Text Representations from Hashtags using Convolutional Neural Networks'}]
```
|
akhisreelibra/bert-finetuned-ner | 14a89897eb522db5f76b5f02685ace27687b3052 | 2022-07-05T13:10:05.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | akhisreelibra | null | akhisreelibra/bert-finetuned-ner | 20 | null | transformers | 8,459 | |
KoichiYasuoka/deberta-large-japanese-wikipedia | ddda036dd277f3e2cdbf8ee888761d83ed28694a | 2022-07-23T14:44:05.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-wikipedia | 20 | null | transformers | 8,460 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "wikipedia"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-large-japanese-wikipedia
## Model Description
This is a DeBERTa(V2) model pre-trained on Japanese Wikipedia and Aozora Bunko (青空文庫) texts. You can fine-tune `deberta-large-japanese-wikipedia` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-wikipedia-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-wikipedia-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-wikipedia")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-large-japanese-wikipedia")
```
## Reference
Koichi Yasuoka: [NINJAL-Long-Unit-Word Dependency Parsing with Aozora Bunko DeBERTa Models](http://hdl.handle.net/2433/275409) (in Japanese), Computer Applications to East Asian Studies, 35th Research Seminar (July 2022), pp. 29-43.
|
samroni/puisi_model_gpt2_small | 9ba9155f8b5978d8574d2e85cd584f4b081c2a8a | 2022-07-26T16:42:44.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | samroni | null | samroni/puisi_model_gpt2_small | 20 | null | transformers | 8,461 | Entry not found |
saadob12/t5_C2T_autochart | 3cd8aec43f3287c21164bccc9fafb71682e154b7 | 2022-07-19T13:03:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | saadob12 | null | saadob12/t5_C2T_autochart | 20 | null | transformers | 8,462 | # Training Data
**Autochart:** Zhu, J., Ran, J., Lee, R. K. W., Choo, K., & Li, Z. (2021). AutoChart: A Dataset for Chart-to-Text Generation Task. arXiv preprint arXiv:2108.06897.
**Gitlab Link for the data**: https://gitlab.com/bottle_shop/snlg/chart/autochart
Train split for this model: Train 8000, Validation 1297, Test 1296
# Example use:
Append ```C2T: ``` before every input to the model
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("saadob12/t5_C2T_autochart")
model = AutoModelForSeq2SeqLM.from_pretrained("saadob12/t5_C2T_autochart")
data = 'Trade statistics of Qatar with developing economies in North Africa bar_chart Year-Trade with economies of Middle East & North Africa(%)(Merchandise exports,Merchandise imports) x-y1-y2 values 2000 0.591869968616745 3.59339030672154 , 2001 0.53415012207203 3.25371165779341 , 2002 3.07769793440318 1.672796364224 , 2003 0.6932513078579471 1.62522475477827 , 2004 1.17635914189321 1.80540331396412'
prefix = 'C2T: '
tokens = tokenizer.encode(prefix + data, truncation=True, padding='max_length', return_tensors='pt')
generated = model.generate(tokens, num_beams=4, max_length=256)
tgt_text = tokenizer.decode(generated[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
summary = str(tgt_text).strip('[]""')
#Summary: This barchart shows the number of trade statistics of qatar with developing economies in north africa from 2000 through 2004. The unit of measurement in this graph is Trade with economies of Middle East & North Africa(%) as shown on the y-axis. The first group data denotes the change of Merchandise exports. There is a go up and down trend of the number. The peak of the number is found in 2002 and the lowest number is found in 2001. The changes in the number may be related to the conuntry's national policies. The second group data denotes the change of Merchandise imports. There is a go up and down trend of the number. The number in 2000 being the peak, and the lowest number is found in 2003. The changes in the number may be related to the conuntry's national policies.
```
# Limitations
You can use the model to generate summaries of data files.
Works well for general statistics like the following:
| Year | Children born per woman |
|:---:|:---:|
| 2018 | 1.14 |
| 2017 | 1.45 |
| 2016 | 1.49 |
| 2015 | 1.54 |
| 2014 | 1.6 |
| 2013 | 1.65 |
May or may not generate an **okay** summary at best for the following kind of data:
| Model | BLEU score | BLEURT|
|:---:|:---:|:---:|
| t5-small | 25.4 | -0.11 |
| t5-base | 28.2 | 0.12 |
| t5-large | 35.4 | 0.34 |
# Citation
Kindly cite my work. Thank you.
```
@misc{obaid_ul_islam_2022,
title={saadob12/t5_C2T_autochart Hugging Face},
url={https://huggingface.co/saadob12/t5_C2T_autochart},
journal={Huggingface.co},
author={Obaid ul Islam, Saad},
year={2022}
}
``` |
BitanBiswas/depression-detection-bert | 189e97418e7f668157f10353b3dc4477f91feb8c | 2022-07-09T03:13:46.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | BitanBiswas | null | BitanBiswas/depression-detection-bert | 20 | null | transformers | 8,463 | Entry not found |
Ahmed007/distilbert-base-uncased-finetuned-emotion | 1ccaabaac4388c0a8def8917f13b75b807272baf | 2022-07-11T23:04:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Ahmed007 | null | Ahmed007/distilbert-base-uncased-finetuned-emotion | 20 | 1 | transformers | 8,464 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.937
- name: F1
type: f1
value: 0.9372331942198677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1413
- Accuracy: 0.937
- F1: 0.9372
## Model description
More information needed
## Intended uses & limitations
More information needed
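Until the card is filled in, here is a minimal usage sketch with the 🤗 `pipeline` API; the example sentence is illustrative, and the label names come from the `emotion` dataset configuration.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ahmed007/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I was so relieved when the results finally came in."))
```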
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7628 | 1.0 | 250 | 0.2489 | 0.9155 | 0.9141 |
| 0.2014 | 2.0 | 500 | 0.1716 | 0.928 | 0.9283 |
| 0.1351 | 3.0 | 750 | 0.1456 | 0.937 | 0.9374 |
| 0.1046 | 4.0 | 1000 | 0.1440 | 0.9355 | 0.9349 |
| 0.0877 | 5.0 | 1250 | 0.1413 | 0.937 | 0.9372 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
MiguelCosta/finetuning-sentiment-model-3000-samples | 843a87b16f04f5f19dc9735f6506db9fdbccdda9 | 2022-07-12T06:06:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | MiguelCosta | null | MiguelCosta/finetuning-sentiment-model-3000-samples | 20 | null | transformers | 8,465 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8810289389067525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5805
- Accuracy: 0.8767
- F1: 0.8810
## Model description
More information needed
## Intended uses & limitations
More information needed
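In the meantime, a minimal usage sketch with the 🤗 `pipeline` API is shown below; the review text is an illustrative assumption, and the exact label names depend on the fine-tuning configuration.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MiguelCosta/finetuning-sentiment-model-3000-samples",
)

print(classifier("The plot dragged on and the ending made no sense."))
```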
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
helena-balabin/qt-simcse-roberta-large | f699253ca4fc453c70b12257e5abc848b4b754a3 | 2022-07-13T09:33:52.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | helena-balabin | null | helena-balabin/qt-simcse-roberta-large | 20 | null | transformers | 8,466 | Entry not found |
hirohiroz/wav2vec2-base-timit-demo-google-colab-tryjpn | 4c0fa2774bb59db4861cbec1ab370efce2efe80a | 2022-07-19T08:16:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hirohiroz | null | hirohiroz/wav2vec2-base-timit-demo-google-colab-tryjpn | 20 | null | transformers | 8,467 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab-tryjpn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab-tryjpn
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1527
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 48.3474 | 6.67 | 100 | 68.0887 | 1.0 |
| 7.601 | 13.33 | 200 | 8.3667 | 1.0 |
| 4.9107 | 20.0 | 300 | 5.6991 | 1.0 |
| 4.379 | 26.67 | 400 | 5.1527 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 1.18.3
- Tokenizers 0.12.1
|
uer/roberta-tiny-wwm-chinese-cluecorpussmall | 36ecd8c2b96921ec25e5a92aa57d44bc796f4d11 | 2022-07-18T05:35:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/roberta-tiny-wwm-chinese-cluecorpussmall | 20 | null | transformers | 8,468 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese Whole Word Masking RoBERTa Miniatures
## Model description
This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details.
You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **Tiny** | [**2/128 (Tiny)**][2_128] |
| **Mini** | [**4/256 (Mini)**][4_256] |
| **Small** | [**4/512 (Small)**][4_512] |
| **Medium** | [**8/512 (Medium)**][8_512] |
| **Base** | [**12/768 (Base)**][12_768] |
| **Large** | [**24/1024 (Large)**][24_1024] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny-WWM | 72.1 | 82.8 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 |
| RoBERTa-Mini-WWM | 76.1 | 84.9 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 |
| RoBERTa-Small-WWM | 77.3 | 86.8 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 |
| RoBERTa-Medium-WWM | 78.4 | 88.2 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 |
| RoBERTa-Base-WWM | 80.1 | 90.0 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 |
| RoBERTa-Large-WWM | 81.0 | 90.4 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
>>> unmasker("北京是[MASK]国的首都。")
[
{'score': 0.294228732585907,
'token': 704,
'token_str': '中',
'sequence': '北 京 是 中 国 的 首 都 。'},
{'score': 0.19691626727581024,
'token': 1266,
'token_str': '北',
'sequence': '北 京 是 北 国 的 首 都 。'},
{'score': 0.1070084273815155,
'token': 7506,
'token_str': '韩',
'sequence': '北 京 是 韩 国 的 首 都 。'},
{'score': 0.031527262181043625,
'token': 2769,
'token_str': '我',
'sequence': '北 京 是 我 国 的 首 都 。'},
{'score': 0.023054633289575577,
'token': 1298,
'token_str': '南',
'sequence': '北 京 是 南 国 的 首 都 。'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
[jieba](https://github.com/fxsjy/jieba) is used as word segmentation tool.
Taking Whole Word Masking RoBERTa-Medium as an example:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall |
ChuVN/bart-base-finetuned-squad2 | 2b047428c2cb0811fb6ae16ee69f49d9081b3469 | 2022-07-18T17:00:01.000Z | [
"pytorch",
"tensorboard",
"bart",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ChuVN | null | ChuVN/bart-base-finetuned-squad2 | 20 | null | transformers | 8,469 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bart-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0446
## Model description
More information needed
## Intended uses & limitations
More information needed
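Until the card is filled in, here is a minimal question-answering sketch with the 🤗 `pipeline` API; the question and context are illustrative assumptions, and `handle_impossible_answer` is enabled because SQuAD v2 also contains unanswerable questions.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ChuVN/bart-base-finetuned-squad2")

print(qa(
    question="When was the bridge completed?",
    context="Construction of the bridge started in 1928 and it was completed in 1932.",
    handle_impossible_answer=True,  # let the model return an empty answer if none is found
))
```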
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9981 | 1.0 | 16319 | 0.9607 |
| 0.7521 | 2.0 | 32638 | 1.0446 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anahitapld/dbd_bert | 51fe60b82bcd7cff9bc3114855e54a09048140c5 | 2022-07-18T09:00:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | anahitapld | null | anahitapld/dbd_bert | 20 | null | transformers | 8,470 | ---
license: apache-2.0
---
|
Evelyn18/roberta-base-spanish-squades-becas1 | ad991e3086ad24101fa5eef917897666f838de86 | 2022-07-18T23:21:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-becas1 | 20 | null | transformers | 8,471 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becas1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becas1
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4402
## Model description
More information needed
## Intended uses & limitations
More information needed
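In the meantime, a minimal Spanish question-answering sketch with the 🤗 `pipeline` API is given below; the question and context are illustrative assumptions, not taken from the becasv2 dataset.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becas1",
)

# Illustrative scholarship-related question and context (in Spanish)
print(qa(
    question="¿Cuándo se abre la convocatoria de becas?",
    context="La convocatoria de becas se abre en el mes de marzo y se cierra en abril.",
))
```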
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 1.8851 |
| No log | 2.0 | 12 | 1.7681 |
| No log | 3.0 | 18 | 2.0453 |
| No log | 4.0 | 24 | 2.2795 |
| No log | 5.0 | 30 | 2.4024 |
| No log | 6.0 | 36 | 2.4402 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Anonymous-TST/knight-errant-TST-zh | 67eaaf77b9ef72f007a06e0b3b5104652c733732 | 2022-07-19T10:41:47.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"transformers",
"mbart-50",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | Anonymous-TST | null | Anonymous-TST/knight-errant-TST-zh | 20 | null | transformers | 8,472 | ---
language:
- multilingual
- ar
- cs
- de
- en
- es
- et
- fi
- fr
- gu
- hi
- it
- ja
- kk
- ko
- lt
- lv
- my
- ne
- nl
- ro
- ru
- si
- tr
- vi
- zh
- af
- az
- bn
- fa
- he
- hr
- id
- ka
- km
- mk
- ml
- mn
- mr
- pl
- ps
- pt
- sv
- sw
- ta
- te
- th
- tl
- uk
- ur
- xh
- gl
- sl
license: mit
tags:
- mbart-50
---
# Knight-errant
Knight-errant is a text style transfer (TST) model for the knight-errant style.
```python
#inference
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("Anonymous-TST/knight-errant-TST-zh")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="zh_CN", tgt_lang="zh_CN")
model.cuda()
model.eval()
article_1 = "jinyong: 接下来会发生什么?"
batch = tokenizer(article_1, return_tensors="pt",return_token_type_ids=False, truncation=True, max_length=64, padding=True).to('cuda')
translated_tokens = model.generate(**batch,max_length=64)
decoded = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(decoded)
```
|
johanna-k/pw-canine-ame | e66adc66093c7ed192e23a454feb3a6c6d1ba767 | 2022-07-21T05:08:39.000Z | [
"pytorch",
"canine",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | johanna-k | null | johanna-k/pw-canine-ame | 20 | null | transformers | 8,473 | Entry not found |
shamweel/bert-finetuned-ner | d7c53737adccbff38c68818061cafd99710e9c39 | 2022-07-22T17:18:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | shamweel | null | shamweel/bert-finetuned-ner | 20 | null | transformers | 8,474 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9312510328871261
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9397148336529643
- name: Accuracy
type: accuracy
value: 0.9857096603284865
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0684
- Precision: 0.9313
- Recall: 0.9483
- F1: 0.9397
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
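Until the card is filled in, here is a minimal named-entity-recognition sketch with the 🤗 `pipeline` API; the example sentence is an illustrative assumption.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="shamweel/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Angela Merkel visited the Volkswagen plant in Wolfsburg."))
```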
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0877 | 1.0 | 1756 | 0.0676 | 0.9142 | 0.9357 | 0.9248 | 0.9828 |
| 0.0411 | 2.0 | 3512 | 0.0633 | 0.9258 | 0.9492 | 0.9373 | 0.9856 |
| 0.0198 | 3.0 | 5268 | 0.0684 | 0.9313 | 0.9483 | 0.9397 | 0.9857 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SummerChiam/pond | 984a641c51257984385f0884b8ee024379cb0a7d | 2022-07-23T07:47:49.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond | 20 | null | transformers | 8,475 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9909297227859497
---
# pond
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae0

#### Boiling0

#### BoilingNight0

#### Normal0

#### NormalCement0

#### NormalNight0

#### NormalRain0
 |
xander-cross/DialoGPT-small-EvilMortyTheBot | 3886d297e497a8197b69a3f8e245f51ca068acdd | 2022-07-23T17:16:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | xander-cross | null | xander-cross/DialoGPT-small-EvilMortyTheBot | 20 | null | transformers | 8,476 | ---
tags:
- conversational
---
# DialoGPT-small-EvilMortyTheBot |
SIMAS-UN/blaming_infrastructure | 2fc91fc66c1871c774c32ec5760c8c7ac9bea711 | 2022-07-24T04:04:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | SIMAS-UN | null | SIMAS-UN/blaming_infrastructure | 20 | null | transformers | 8,477 | Entry not found |
SIMAS-UN/blaming_locals | bf4f746ad4abca21c8c49ba8a3c94d7a635f5f59 | 2022-07-24T04:10:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | SIMAS-UN | null | SIMAS-UN/blaming_locals | 20 | null | transformers | 8,478 | Entry not found |
Daveee/gpl_colbert | b7b41eaffb0f8cd729c50f28edddcb49e1f2fb47 | 2022-07-24T15:26:10.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | Daveee | null | Daveee/gpl_colbert | 20 | null | sentence-transformers | 8,479 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 100 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
swtx/Erlangshen-Roberta-110M-Similarity | 128a1cca1a8fa56b6933a120e6809759072b398b | 2022-07-25T06:46:00.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"NLU",
"NLI",
"license:apache-2.0"
] | text-classification | false | swtx | null | swtx/Erlangshen-Roberta-110M-Similarity | 20 | null | transformers | 8,480 | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---
# Erlangshen-Roberta-110M-Similarity, model (Chinese),one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collected 20 paraphrase datasets in the Chinese domain for fine-tuning, with a total of 2,773,880 samples. Our model is mainly based on [RoBERTa](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large).
## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Similarity')
model=BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-110M-Similarity')
texta='今天的饭不好吃'
textb='今天心情不好'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## Scores on downstream Chinese tasks (the dev datasets of BUSTM and AFQMC may overlap with the train set)
| Model | BQ | BUSTM | AFQMC |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Similarity | 85.41 | 95.18 | 81.72 |
| Erlangshen-Roberta-330M-Similarity | 86.21 | 99.29 | 93.89 |
| Erlangshen-MegatronBert-1.3B-Similarity | 86.31 | - | - |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
nielsr/donut-base-finetuned-rvlcdip | c6d48f8aaf8b28e6bb25e36ba3c9eef06a9f9492 | 2022-07-26T09:46:46.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | nielsr | null | nielsr/donut-base-finetuned-rvlcdip | 20 | null | transformers | 8,481 | Entry not found |
AnonymousSub/recipes-roberta-base-no-ingr | 0d4065c1edc970feefd263a7bcb822f2b6f43ad6 | 2022-07-25T13:54:17.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AnonymousSub | null | AnonymousSub/recipes-roberta-base-no-ingr | 20 | null | transformers | 8,482 | Entry not found |
jcashmoney123/test-model | 6ad4463fcd081bccdc030ca767b033b34b4658ec | 2022-07-25T16:16:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"unk",
"dataset:jcashmoney123/autotrain-data-test-summarization",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | jcashmoney123 | null | jcashmoney123/test-model | 20 | null | transformers | 8,483 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jcashmoney123/autotrain-data-test-summarization
co2_eq_emissions: 6.160395825083539
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1177143826
- CO2 Emissions (in grams): 6.160395825083539
## Validation Metrics
- Loss: 2.9017226696014404
- Rouge1: 21.6224
- Rouge2: 5.6481
- RougeL: 19.0725
- RougeLsum: 19.1428
- Gen Len: 12.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jcashmoney123/autotrain-test-summarization-1177143826
``` |
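Alternatively, a minimal Python sketch with the 🤗 Transformers pipeline (assuming the repository is public; the input text is illustrative):

```python
from transformers import pipeline

# Summarization sketch for the fine-tuned BART checkpoint
summarizer = pipeline("summarization", model="jcashmoney123/test-model")
text = "I love AutoTrain because it lets me fine-tune a summarization model without writing any training code."
print(summarizer(text, max_length=20, min_length=5))
```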
Gaborandi/Clinical-Longformer-Cardiology | e4296b73c804b9079eb5fd41c12dd28004f75344 | 2022-07-29T02:14:50.000Z | [
"pytorch",
"tensorboard",
"longformer",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Gaborandi | null | Gaborandi/Clinical-Longformer-Cardiology | 20 | null | transformers | 8,484 | ---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-Cardiology
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-Cardiology
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4546
## Model description
More information needed
## Intended uses & limitations
More information needed
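Although this card is auto-generated, the checkpoint is a standard Longformer masked-language model, so a fill-mask sketch along these lines should work (the clinical sentence is purely illustrative):

```python
from transformers import pipeline

# Longformer uses the RoBERTa-style <mask> token
unmasker = pipeline("fill-mask", model="Gaborandi/Clinical-Longformer-Cardiology")
print(unmasker("The patient was started on <mask> for atrial fibrillation."))
```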
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2764 | 1.0 | 492 | 1.8319 |
| 1.8285 | 2.0 | 984 | 1.6720 |
| 1.7608 | 3.0 | 1476 | 1.5980 |
| 1.6622 | 4.0 | 1968 | 1.5597 |
| 1.6382 | 5.0 | 2460 | 1.5084 |
| 1.5846 | 6.0 | 2952 | 1.5037 |
| 1.5755 | 7.0 | 3444 | 1.4781 |
| 1.5404 | 8.0 | 3936 | 1.4673 |
| 1.5399 | 9.0 | 4428 | 1.4631 |
| 1.5287 | 10.0 | 4920 | 1.4640 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
BSC-TeMU/roberta-base-bne-capitel-ner | 927664f3c91af5dc86ac070000e3886b0d789a9e | 2021-10-21T10:29:35.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | BSC-TeMU | null | BSC-TeMU/roberta-base-bne-capitel-ner | 19 | 1 | transformers | 8,485 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner
# Spanish RoBERTa-base trained on BNE, finetuned on the CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
## Evaluation and results
F1 Score: 0.8960
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
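A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative, and `aggregation_strategy` is a reasonable default rather than something prescribed by the authors):

```python
from transformers import pipeline

# Group sub-word predictions into entity spans
ner = pipeline("ner", model="BSC-TeMU/roberta-base-bne-capitel-ner", aggregation_strategy="simple")
print(ner("La Biblioteca Nacional de España se encuentra en Madrid."))
```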
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BSC-TeMU/roberta-base-ca | 38504df62571a5ee14b1ff9e15af6abb98795fb0 | 2021-10-21T10:30:50.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ca",
"transformers",
"masked-lm",
"BERTa",
"catalan",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | BSC-TeMU | null | BSC-TeMU/roberta-base-ca | 19 | 3 | transformers | 8,486 | ---
language: "ca"
tags:
- masked-lm
- BERTa
- catalan
widget:
- text: "El Català és una llengua molt <mask>."
- text: "Salvador Dalí va viure a <mask>."
- text: "La Costa Brava té les millors <mask> d'Espanya."
- text: "El cacaolat és un batut de <mask>."
- text: "<mask> és la capital de la Garrotxa."
- text: "Vaig al <mask> a buscar bolets."
- text: "Antoni Gaudí vas ser un <mask> molt important per la ciutat."
- text: "Catalunya és una referència en <mask> a nivell europeu."
license: apache-2.0
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca
# BERTa: RoBERTa-based Catalan language model
## BibTeX citation
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
## Model description
BERTa is a transformer-based masked language model for the Catalan language.
It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model
and has been trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Training corpora and preprocessing
The training corpus consists of several corpora gathered from web crawling and public corpora.
The publicly available corpora are:
1. the Catalan part of the [DOGC](http://opus.nlpl.eu/DOGC-v2.php) corpus, a set of documents from the Official Gazette of the Catalan Government
2. the [Catalan Open Subtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.ca.gz), a collection of translated movie subtitles
3. the non-shuffled version of the Catalan part of the [OSCAR](https://traces1.inria.fr/oscar/) corpus (Ortiz Suárez et al., 2019),
a collection of monolingual corpora, filtered from [Common Crawl](https://commoncrawl.org/about/)
4. the [CaWac](http://nlp.ffzg.hr/resources/corpora/cawac/) corpus, a web corpus of Catalan built from the .cat top-level domain in late 2013; we use the non-deduplicated version
5. the [Catalan Wikipedia articles](https://ftp.acc.umu.se/mirror/wikimedia.org/dumps/cawiki/20200801/) downloaded on 18-08-2020.
The crawled corpora are:
6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains
7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government
8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the [Catalan News Agency](https://www.acn.cat/)
To obtain a high-quality training corpus, each corpus has been preprocessed with a pipeline of operations including, among others,
sentence splitting, language detection, filtering of badly formed sentences and deduplication of repetitive content.
During the process, document boundaries are kept.
Finally, the corpora are concatenated and a further global deduplication across the corpora is applied.
The final training corpus consists of about 1.8B tokens.
## Tokenization and pretraining
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens.
The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model
with the same hyperparameters as in the original work.
The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.
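As a quick sanity check of the resulting vocabulary, you can inspect how the tokenizer splits a Catalan sentence (sketch; the exact subword segmentation depends on the trained BPE merges):

```python
from transformers import AutoTokenizer

# Load the byte-level BPE tokenizer shipped with BERTa and inspect its output
tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-ca-cased")
print(tokenizer.vocab_size)                        # ~52,000 tokens
print(tokenizer.tokenize("El Català és una llengua molt bonica."))
```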
## Evaluation
## CLUB benchmark
The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),
that has been created along with the model.
It contains the following tasks and their related datasets:
1. Part-of-Speech Tagging (POS)
Catalan-Ancora: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus
2. Named Entity Recognition (NER)
**[AnCora Catalan 2.0.0](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version,
filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format
3. Text Classification (TC)
**[TeCla](https://doi.org/10.5281/zenodo.4627197)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus
4. Semantic Textual Similarity (STS)
**[Catalan semantic textual similarity](https://doi.org/10.5281/zenodo.4529183)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them,
scraped from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349)
5. Question Answering (QA):
**[ViquiQuAD](https://doi.org/10.5281/zenodo.4562344)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.
**[XQuAD](https://doi.org/10.5281/zenodo.4526223)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_
Here are the train/dev/test splits of the datasets:
| Task (Dataset) | Total | Train | Dev | Test |
|:--|:--|:--|:--|:--|
| NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 |
| POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 |
| STS | 3,073 | 2,073 | 500 | 500 |
| TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786|
| QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
_The fine-tuning on downstream tasks has been performed with the HuggingFace [**Transformers**](https://github.com/huggingface/transformers) library_
## Results
Below the evaluation results on the CLUB tasks compared with the multilingual mBERT, XLM-RoBERTa models and
the Catalan WikiBERT-ca model
| Task | NER (F1) | POS (F1) | STS (Pearson) | TC (accuracy) | QA (ViquiQuAD) (F1/EM) | QA (XQuAD) (F1/EM) |
| ------------|:-------------:| -----:|:------|:-------|:------|:----|
| BERTa | **88.13** | **98.97** | **79.73** | **74.16** | **86.97/72.29** | **68.89/48.87** |
| mBERT | 86.38 | 98.82 | 76.34 | 70.56 | 86.97/72.22 | 67.15/46.51 |
| XLM-RoBERTa | 87.66 | 98.89 | 75.40 | 71.68 | 85.50/70.47 | 67.10/46.42 |
| WikiBERT-ca | 77.66 | 97.60 | 77.18 | 73.22 | 85.45/70.75 | 65.21/36.60 |
## Intended uses & limitations
The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition.
---
## Using BERTa
## Load model and tokenizer
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-ca-cased")
model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-ca-cased")
```
## Fill Mask task
Below, an example of how to use the masked language modelling task with a pipeline.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='BSC-TeMU/roberta-base-ca-cased')
>>> unmasker("Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.")
[
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.4177263379096985,
"token": 734,
"token_str": " Barcelona"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.10696165263652802,
"token": 3849,
"token_str": " Badalona"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.08135009557008743,
"token": 19349,
"token_str": " Collserola"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.07330769300460815,
"token": 4974,
"token_str": " Terrassa"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.03317456692457199,
"token": 14333,
"token_str": " Gavà"
}
]
```
This model was originally published as [bsc/roberta-base-ca-cased](https://huggingface.co/bsc/roberta-base-ca-cased). |
CenIA/bert-base-spanish-wwm-cased-finetuned-pos | 446a9edd572f3387723477b47917ebfb25f80da0 | 2021-12-18T00:41:41.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-pos | 19 | null | transformers | 8,487 | Entry not found |
Connor-tech/bert_cn_finetuning | aa18d8e9416e963268628e7a410b4ebd1e550bf7 | 2021-05-18T17:47:09.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Connor-tech | null | Connor-tech/bert_cn_finetuning | 19 | null | transformers | 8,488 | Entry not found |
Davlan/xlm-roberta-base-finetuned-hausa | fb4b91e95a97c2d601bb84188af54283b7b99b40 | 2021-05-28T14:07:31.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"ha",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-hausa | 19 | null | transformers | 8,489 | Hugging Face's logo
---
language: ha
datasets:
---
# xlm-roberta-base-finetuned-hausa
## Model description
**xlm-roberta-base-finetuned-hausa** is a **Hausa RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Hausa language texts. It provides **better performance** than XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on a Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-hausa')
>>> unmasker("Shugaban <mask> Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence': '<s> Shugaban kasa Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>',
'score': 0.8104371428489685,
'token': 29762,
'token_str': '▁kasa'},
{'sequence': '<s> Shugaban Najeriya Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.17371904850006104,
'token': 49173,
'token_str': '▁Najeriya'},
{'sequence': '<s> Shugaban kasar Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.006917025428265333,
'token': 21221,
'token_str': '▁kasar'},
{'sequence': '<s> Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.005785710643976927,
'token': 72620,
'token_str': '▁Nigeria'},
{'sequence': '<s> Shugaban Kasar Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci</s>', 'score': 0.0010596115607768297,
'token': 170255,
'token_str': '▁Kasar'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | ha_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.10 | 91.47
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | |
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-naija | b540dd60b5f2d320dfbf65ee834be8f69eabc0f3 | 2021-06-15T21:33:37.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"pcm",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-naija | 19 | null | transformers | 8,490 | Hugging Face's logo
---
language: pcm
datasets:
---
# xlm-roberta-base-finetuned-naija
## Model description
**xlm-roberta-base-finetuned-naija** is a **Nigerian Pidgin RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Nigerian Pidgin language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on a Nigerian Pidgin corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March <mask> year where robbers kill Ambulance driver")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | pcm_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.26 | 90.00
### BibTeX entry and citation info
By David Adelani
```
```
|
Emran/ClinicalBERT_ICD10_Full | c87c2431852fb908d39deb80c2f14a6672c85670 | 2021-10-12T17:27:57.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | Emran | null | Emran/ClinicalBERT_ICD10_Full | 19 | 1 | transformers | 8,491 | Entry not found |
Geotrend/distilbert-base-pl-cased | b676d1cb3c2ba3eec416e33b89fd6b03032ba25f | 2021-07-28T21:03:56.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"pl",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-pl-cased | 19 | null | transformers | 8,492 | ---
language: pl
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-pl-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-pl-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-pl-cased")
```
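The snippet above only loads the model; to obtain contextual embeddings you can run a forward pass, e.g. (sketch, the Polish example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-pl-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-pl-cased")

# Encode a Polish sentence and inspect the token-level hidden states
inputs = tokenizer("Warszawa jest stolicą Polski.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```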
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-vi-cased | 5279ece929309f2b4e369883642fdffd6e093513 | 2021-08-16T13:31:30.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"vi",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-vi-cased | 19 | null | transformers | 8,493 | ---
language: vi
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-vi-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-vi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Ghana-NLP/robako-base-asante-twi-uncased | 2084629545d1f5371079a9174ca4e1d81db06195 | 2021-05-20T11:54:32.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Ghana-NLP | null | Ghana-NLP/robako-base-asante-twi-uncased | 19 | null | transformers | 8,494 | Entry not found |
Graphcore/bert-large-uncased-squad | ab8b9ad8bbb7ef560cce0c6b6996668cfd4430da | 2022-05-25T18:35:33.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Graphcore | null | Graphcore/bert-large-uncased-squad | 19 | 2 | transformers | 8,495 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Graphcore/bert-large-uncased-squad
results: []
---
# Graphcore/bert-large-uncased-squad
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). MLM differs from a traditional language model, which sees words one after another, in that it lets the model learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Pre-trained representations reduce the need for heavily engineered task-specific architectures, and the model achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a fine-tuned version of [Graphcore/bert-large-uncased](https://huggingface.co/Graphcore/bert-large-uncased) on the SQuAD dataset.
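Since the exported checkpoint is a standard BERT model, it can also be used for inference with the plain Transformers question-answering pipeline; a minimal sketch (the question/context pair is illustrative):

```python
from transformers import pipeline

# Extractive QA sketch using the fine-tuned SQuAD checkpoint
qa = pipeline("question-answering", model="Graphcore/bert-large-uncased-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="Graphcore/bert-large-uncased-squad is a BERT-large checkpoint fine-tuned on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```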
## Training and evaluation data
Trained on SQuAD dataset:
- [HuggingFace/squad](https://huggingface.co/datasets/squad)
## Training procedure
The model was trained on 16 Graphcore Mk2 IPUs using the [optimum-graphcore](https://github.com/huggingface/optimum-graphcore) library.
|
Helsinki-NLP/opus-mt-cpp-cpp | b0189f62a8d53c5e9509b51ce1c3da0f6d45de90 | 2021-01-18T07:54:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"id",
"cpp",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cpp-cpp | 19 | null | transformers | 8,496 | ---
language:
- id
- cpp
tags:
- translation
license: apache-2.0
---
### cpp-cpp
* source group: Creoles and pidgins, Portuguese-based
* target group: Creoles and pidgins, Portuguese-based
* OPUS readme: [cpp-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md)
* model: transformer
* source language(s): ind pap
* target language(s): ind pap
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa-msa.msa.msa | 0.7 | 0.149 |
| Tatoeba-test.msa-pap.msa.pap | 31.7 | 0.577 |
| Tatoeba-test.multi.multi | 21.1 | 0.369 |
| Tatoeba-test.pap-msa.pap.msa | 17.7 | 0.197 |
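Since this is a multilingual checkpoint, the target language must be selected with the sentence-initial `>>id<<` token noted above. A minimal MarianMT sketch (the Indonesian example sentence and the `>>pap<<` target token are illustrative; valid IDs follow the target languages listed for this model):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cpp-cpp"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix the source text with the target-language token, e.g. >>pap<< for Papiamento
batch = tokenizer([">>pap<< Saya suka belajar bahasa."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```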
### System Info:
- hf_name: cpp-cpp
- source_languages: cpp
- target_languages: cpp
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['id', 'cpp']
- src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt
- src_alpha3: cpp
- tgt_alpha3: cpp
- short_pair: cpp-cpp
- chrF2_score: 0.369
- bleu: 21.1
- brevity_penalty: 0.882
- ref_len: 18.0
- src_name: Creoles and pidgins, Portuguese-based
- tgt_name: Creoles and pidgins, Portuguese-based
- train_date: 2020-07-26
- src_alpha2: cpp
- tgt_alpha2: cpp
- prefer_old: False
- long_pair: cpp-cpp
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-pap | 0f0260d873e31a23dd6ca857f866e37f26233fc0 | 2021-09-09T21:38:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"pap",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-pap | 19 | null | transformers | 8,497 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pap
* source languages: en
* target languages: pap
* OPUS readme: [en-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pap | 40.1 | 0.586 |
| Tatoeba.en.pap | 52.8 | 0.665 |
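A minimal translation sketch with the Transformers pipeline (the English sentence is illustrative):

```python
from transformers import pipeline

# English -> Papiamento translation using the Marian checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-pap")
print(translator("How are you today?"))
```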
|
Helsinki-NLP/opus-mt-en-sm | 7d94b8a5cb7369edbb896671ee68ce7078e1fca2 | 2021-09-09T21:39:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sm",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sm | 19 | null | transformers | 8,498 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sm
* source languages: en
* target languages: sm
* OPUS readme: [en-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sm | 40.1 | 0.585 |
|
Helsinki-NLP/opus-mt-en-ty | 557a92fc13de4419ba0e6130b45e7ab1603f1025 | 2021-09-09T21:40:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ty",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ty | 19 | null | transformers | 8,499 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ty
* source languages: en
* target languages: ty
* OPUS readme: [en-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ty | 46.8 | 0.619 |
|