modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
softcatala/fullstop-catalan-punctuation-prediction | 4b26218160a6efed8b99f7f5167269f57dcc25aa | 2022-04-06T12:45:54.000Z | [
"pytorch",
"roberta",
"token-classification",
"ca",
"dataset:softcatala/Europarl-catalan",
"transformers",
"punctuation prediction",
"punctuation",
"autotrain_compatible"
] | token-classification | false | softcatala | null | softcatala/fullstop-catalan-punctuation-prediction | 204 | null | transformers | 3,600 | ---
language:
- ca
tags:
- punctuation prediction
- punctuation
datasets: softcatala/Europarl-catalan
widget:
- text: "Els investigadors suggereixen que tot i que es tracta de la cua d'un dinosaure jove la mostra revela un plomatge adult i no pas plomissol"
example_title: "Catalan"
metrics:
- f1
---
This model predicts punctuation for Catalan-language text.
The model restores the following punctuation markers: **"." "," "?" "-" ":"**
It is based on the work at https://github.com/oliverguhr/fullstop-deep-punctuation-prediction
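A minimal inspection sketch with the 🤗 token-classification pipeline (this is not the upstream project's own inference tooling; each word is tagged with the punctuation mark predicted to follow it, with "0" meaning none):
```python
from transformers import pipeline

# Quick-look sketch: print the punctuation label predicted for each word.
tagger = pipeline(
    "token-classification",
    model="softcatala/fullstop-catalan-punctuation-prediction",
)

text = "Els investigadors suggereixen que tot i que es tracta de la cua d'un dinosaure jove la mostra revela un plomatge adult i no pas plomissol"
for pred in tagger(text):
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```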
## Results
The performance differs for the single punctuation markers as hyphens and colons, in many cases, are optional and can be substituted by either a comma or a full stop. The model achieves the following F1 scores for Catalan language:
| Label | CA |
| ------------- | ----- |
| 0 | 0.99 |
| . | 0.93 |
| , | 0.82 |
| ? | 0.76 |
| - | 0.89 |
| : | 0.64 |
| macro average | 0.84 |
## Contact
Jordi Mas <[email protected]>
|
EMBO/BioMegatron345mUncased | ab9ac883b103dbb83e55f3b7a416fffcebff4e1b | 2022-07-26T06:50:35.000Z | [
"pytorch",
"megatron-bert",
"en",
"arxiv:2010.06060",
"transformers",
"language model",
"license:cc-by-4.0"
] | null | false | EMBO | null | EMBO/BioMegatron345mUncased | 204 | 1 | transformers | 3,601 | ---
license: cc-by-4.0
language:
- en
thumbnail:
tags:
- language model
---
<!---
# ##############################################################################################
#
# This model has been uploaded to HuggingFace by https://huggingface.co/drAbreu
# The model is based on the NVIDIA checkpoint located at
# https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345muncased
#
# ##############################################################################################
-->
[BioMegatron](https://arxiv.org/pdf/2010.06060.pdf) is a transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained on top of the Megatron-LM model, adding a PubMed corpus to the Megatron-LM corpora (Wikipedia, RealNews, OpenWebText, and CC-Stories). BioMegatron follows a similar (albeit not identical) architecture to BERT and has 345 million parameters:
* 24 layers
* 16 attention heads with a hidden size of 1024.
More information is available in the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345muncased)
# Running BioMegatron in 🤗 transformers
In this implementation we have followed the commands of the [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-uncased-345m) repository to make BioMegatron available in 🤗.
However, the file [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py) needed a modification: the Megatron model shown in [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-uncased-345m) includes head layers, while the weights of the BioMegatron model uploaded to this repository do not contain a head.
We provide an alternative version of the [Python script](https://huggingface.co/EMBO/BioMegatron345mUncased/blob/main/convert_biomegatron_checkpoint.py) in this repository so that any user can cross-check the validity of the replicated model.
The code below is a modification of the original [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py).
```python
import json
import os

import torch

# json is needed for saving the config below; recursive_print is assumed to be available
# from the provided conversion script, as in the original convert_megatron_bert_checkpoint.py.
from convert_biomegatron_checkpoint import convert_megatron_checkpoint, recursive_print
print_checkpoint_structure = True
path_to_checkpoint = "/path/to/BioMegatron345mUncased/"
# Extract the basename.
basename = os.path.dirname(path_to_checkpoint).split('/')[-1]
# Load the model.
input_state_dict = torch.load(os.path.join(path_to_checkpoint, 'model_optim_rng.pt'), map_location="cpu")
# Convert.
print("Converting")
output_state_dict, output_config = convert_megatron_checkpoint(input_state_dict, head_model=False)
# Print the structure of converted state dict.
if print_checkpoint_structure:
recursive_print(None, output_state_dict)
# Store the config to file.
output_config_file = os.path.join(path_to_checkpoint, "config.json")
print(f'Saving config to "{output_config_file}"')
with open(output_config_file, "w") as f:
json.dump(output_config, f)
# Store the state_dict to file.
output_checkpoint_file = os.path.join(path_to_checkpoint, "pytorch_model.bin")
print(f'Saving checkpoint to "{output_checkpoint_file}"')
torch.save(output_state_dict, output_checkpoint_file)
```
BioMegatron can be run with the standard 🤗 script for loading models. Here we show an example identical to that of [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-uncased-345m).
```python
import os
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM, AutoModelForMaskedLM
checkpoint = "EMBO/BioMegatron345mUncased"
# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = BertTokenizer.from_pretrained(checkpoint)
# Load the model from $MYDIR/nvidia/megatron-bert-uncased-345m.
model = AutoModelForMaskedLM.from_pretrained(checkpoint)
device = torch.device("cpu")
# Create inputs (from the BERT example page).
input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device)
label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device)
# Run the model.
with torch.no_grad():
output = model(**input, labels=label)
print(output)
```
# Limitations
This implementation has not been fine-tuned on any task; it contains only the weights of the official NVIDIA checkpoint. It needs to be fine-tuned to perform any downstream task.
# Original code
The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
|
SharpAI/mal_tls | 735a8d55f96d3bbbb035834ed8c82df7c8ba9ae2 | 2022-07-27T18:04:04.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers",
"generated_from_keras_callback",
"model-index"
] | text-classification | false | SharpAI | null | SharpAI/mal_tls | 204 | null | transformers | 3,602 | ---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus | 754e83ed79884f3ce74674711f002d95ae5f065e | 2022-04-06T14:43:21.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus | 203 | 4 | transformers | 3,603 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
inference:
parameters:
aggregation_strategy: "first"
---
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
**IMPORTANT ABOUT THIS MODEL:** We modified the dataset to make this model more robust to general Spanish input. In Spanish, all named entities are capitalized, and since this dataset was curated by experts it is fully correct in that respect. We therefore randomly selected some entities and lower-cased them, so that the model learns not only that named entities are capitalized, but also the structure of sentences that should contain a named entity. For instance, in "My name is [placeholder]", the [placeholder] should be recognized as a named entity even though it is not capitalized. The model trained on the original CAPITEL dataset can be found here: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner
Examples:
This model:
- "Me llamo asier y vivo en barcelona todo el año." → "Me llamo {asier:S-PER} y vivo en {barcelona:S-LOC} todo el año."
- "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el {park:B-LOC} {güell:E-LOC} tras salir del {barcelona:B-ORG} {supercomputing:I-ORG} {center:E-ORG}."
Model trained on original data:
- "Me llamo asier y vivo en barcelona todo el año." → "Me llamo asier y vivo en barcelona todo el año." (nothing)
- "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." (nothing)
## Evaluation and results
F1 Score: 0.8867
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@article{gutierrezfandino2022,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
## Funding
This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020).
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
google/vit-large-patch16-384 | 4b143e77059a54c70b348a76677ab9946f584e13 | 2022-01-28T10:22:26.000Z | [
"pytorch",
"tf",
"jax",
"vit",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | google | null | google/vit-large-patch16-384 | 203 | 2 | transformers | 3,604 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# Vision Transformer (large-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-384')
model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
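A rough torchvision equivalent of that preprocessing, for illustration only (assuming the fine-tuning resolution of 384x384; in practice `ViTFeatureExtractor` applies these steps for you):
```python
from torchvision import transforms

# Illustrative sketch of the described preprocessing (resize, rescale, normalize with mean/std 0.5).
preprocess = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.ToTensor(),  # converts the image to a tensor in the [0, 1] range
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
```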
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
mrm8488/bert2bert_shared-spanish-finetuned-paus-x-paraphrasing | e5fcd441a0b0e6fc55c87b3c89915623941a9426 | 2021-07-31T05:12:47.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"es",
"dataset:pausx",
"transformers",
"spanish",
"paraphrasing",
"paraphrase",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert_shared-spanish-finetuned-paus-x-paraphrasing | 203 | 2 | transformers | 3,605 | ---
language: es
datasets:
- pausx
tags:
- spanish
- paraphrasing
- paraphrase
widget:
- text: "El pionero suizo John Sutter (1803-1880) llegó a Alta California con otros colonos euroamericanos en agosto de 1839."
---
# Spanish Bert2Bert (shared) fine-tuned on PAUS-X es for paraphrasing |
HooshvareLab/bert-fa-base-uncased-sentiment-digikala | 88db8245349b36b47ef1b92e49b8b80428b77d7c | 2021-05-18T20:59:17.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"transformers",
"license:apache-2.0"
] | text-classification | false | HooshvareLab | null | HooshvareLab/bert-fa-base-uncased-sentiment-digikala | 202 | null | transformers | 3,606 | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to make ParsBERT usable in a wider range of domains!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task aims to classify text, such as user comments, according to its sentiment. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in both binary and multi-class forms.
### Digikala
Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:
| Label | # |
|:---------------:|:------:|
| no_idea | 10394 |
| not_recommended | 15885 |
| recommended | 36042 |
**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.72 | 81.74* | 80.74 | - |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
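The linked notebook walks through the full workflow; for quick inference, a minimal sketch with the 🤗 `pipeline` API (the Persian input is only an illustrative comment, not taken from the datasets above):
```python
from transformers import pipeline

# Minimal sketch: run the Digikala sentiment model as a text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-digikala",
)

# Illustrative Persian comment ("This product was great, I recommend buying it").
print(classifier("این محصول عالی بود، خرید آن را پیشنهاد می‌کنم"))
```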
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
abhi1nandy2/EManuals_RoBERTa | b6fe473485f493db94fe67b8fa9d62bd3d0a43b9 | 2022-05-04T04:57:53.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"English",
"transformers",
"EManuals",
"customer support",
"QA"
] | feature-extraction | false | abhi1nandy2 | null | abhi1nandy2/EManuals_RoBERTa | 202 | null | transformers | 3,607 | ---
language:
- English
tags:
- EManuals
- customer support
- QA
- roberta
---
Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website
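A minimal feature-extraction sketch with the 🤗 Auto classes (the question text is only an illustrative example, not from the benchmark datasets):
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Sketch: use the pretrained encoder to extract contextual embeddings.
tokenizer = AutoTokenizer.from_pretrained("abhi1nandy2/EManuals_RoBERTa")
model = AutoModel.from_pretrained("abhi1nandy2/EManuals_RoBERTa")

inputs = tokenizer("How do I reset the device to factory settings?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```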
## Citation
Please cite the work if you would like to use it.
```
@inproceedings{nandy-etal-2021-question-answering,
title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework",
author = "Nandy, Abhilash and
Sharma, Soumya and
Maddhashiya, Shubham and
Sachdeva, Kapil and
Goyal, Pawan and
Ganguly, NIloy",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.392",
doi = "10.18653/v1/2021.findings-emnlp.392",
pages = "4600--4609",
abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.",
}
``` |
af-ai-center/bert-base-swedish-uncased | 5c7fb9dbad916c7c9738751e8ee117b186f7da91 | 2021-05-18T23:12:14.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | af-ai-center | null | af-ai-center/bert-base-swedish-uncased | 202 | null | transformers | 3,608 | Entry not found |
suhasjain/DailoGPT-small-harrypotter | 27baccd28f5226a79ac26499d6ed6e8feeaba056 | 2021-09-10T08:47:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | suhasjain | null | suhasjain/DailoGPT-small-harrypotter | 202 | null | transformers | 3,609 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
yoshitomo-matsubara/bert-base-uncased-mrpc | 6a95d3c0ae14ab6bf2dccfaaa5bf6ee16b2a5172 | 2021-05-29T21:47:37.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mrpc",
"transformers",
"mrpc",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-mrpc | 202 | null | transformers | 3,610 | ---
language: en
tags:
- bert
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
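As a quick sanity check, a minimal sentence-pair inference sketch (the sentences are illustrative, and the label order is assumed to follow the GLUE MRPC convention):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yoshitomo-matsubara/bert-base-uncased-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("yoshitomo-matsubara/bert-base-uncased-mrpc")

# MRPC is a sentence-pair task: is sentence B a paraphrase of sentence A?
inputs = tokenizer(
    "The company said profits rose in the last quarter.",
    "Profits increased during the last quarter, the company reported.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed label order: [not_equivalent, equivalent]
```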
|
deepset/gelectra-base-generator | ba9ac3432c2c9904460f59343f1fb80d4c0a0740 | 2021-10-21T12:18:27.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"arxiv:2010.10906",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | deepset | null | deepset/gelectra-base-generator | 201 | 3 | transformers | 3,611 | ---
language: de
license: mit
datasets:
- wikipedia
- OPUS
- OpenLegalData
---
# German ELECTRA base generator
Released, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model.
The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-base.
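A minimal masking sketch with the 🤗 `pipeline` API (the German prompt is only illustrative; the mask token is read from the tokenizer):
```python
from transformers import pipeline

# Sketch: query the generator with a masked German sentence.
fill_mask = pipeline("fill-mask", model="deepset/gelectra-base-generator")
masked = f"Die Hauptstadt von Deutschland ist {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))
```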
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA base (generator)
**Language:** German
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
hakurei/lit-125M | f1e7a551a28ee8ad2e982d5425a07e1d59b4be65 | 2022-02-17T22:52:19.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"causal-lm",
"license:mit"
] | text-generation | false | hakurei | null | hakurei/lit-125M | 201 | null | transformers | 3,612 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: mit
---
# Lit-125M - A Small Fine-tuned Model For Fictional Storytelling
Lit-125M is a GPT-Neo 125M model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text.
## Model Description
The model used for fine-tuning is [GPT-Neo 125M](https://huggingface.co/EleutherAI/gpt-neo-125M), a 125-million-parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).
## Training Data & Annotative Prompting
The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations.
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ]
***
When a traveler in north central Massachusetts takes the wrong fork...
```
The annotations can be mixed and matched to help generate towards a specific style.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers. The small size of the model can also help for easy debugging or further development of other models with a similar purpose.
## Example Code
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('hakurei/lit-125M')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-125M')
prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
An example output from this code produces a result that will look similar to:
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler takes a trip through the streets of the world, the traveler feels like a youkai with a whole world inside her mind. It can be very scary for a youkai. When someone goes in the opposite direction and knocks on your door, it is actually the first time you have ever come to investigate something like that.
That's right: everyone has heard stories about youkai, right? If you have heard them, you know what I'm talking about.
It's hard not to say you
```
## Team members and Acknowledgements
- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET |
huggingface/funnel-small | 3829c82f3a863632e1726bfe4ecf950a726689df | 2020-08-31T21:06:56.000Z | [
"pytorch",
"funnel",
"feature-extraction",
"transformers"
] | feature-extraction | false | huggingface | null | huggingface/funnel-small | 201 | null | transformers | 3,613 | Entry not found |
hyunwoongko/blenderbot-9B | 20049263e5130cc21015f3327085fb3adbc39939 | 2021-06-17T01:26:34.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"transformers",
"convAI",
"conversational",
"facebook",
"license:apache-2.0",
"autotrain_compatible"
] | conversational | false | hyunwoongko | null | hyunwoongko/blenderbot-9B | 201 | 5 | transformers | 3,614 | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
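A minimal generation sketch with the 🤗 Auto classes (not the original ParlAI inference code; the 9.4B-parameter checkpoint is very large, so this assumes sufficient memory):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch: single-turn response generation with the converted 9.4B Blender checkpoint.
name = "hyunwoongko/blenderbot-9B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Hello, how are you doing today?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```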
|
lisaterumi/genia-biobert-ent | 929329e6c4611a8add9409d97e0727ea8cfe9e63 | 2022-07-22T21:40:15.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:Genia",
"transformers",
"autotrain_compatible"
] | token-classification | false | lisaterumi | null | lisaterumi/genia-biobert-ent | 201 | null | transformers | 3,615 | ---
language: "en"
widget:
- text: "Point mutation of a GATA-1 site at -230 reduced promoter activity by 37%."
- text: "Electrophoretic mobility shift assays indicated that the -230 GATA-1 site has a relatively low affinity for GATA-1."
- text: "Accordingly, the effects of the constitutively active PKCs were compared to the effects of mutationally activated p21ras."
- text: "Activated Src and p21ras were able to induce CD69 expression."
datasets:
- Genia
---
# Genia-BioBERT-ENT
## Citation
```
coming soon
```
|
xlm-mlm-17-1280 | ed2e1c862c37217e1b185c33a282ed8f3ebdc3e2 | 2022-07-22T08:09:41.000Z | [
"pytorch",
"tf",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"sv",
"pl",
"ru",
"ar",
"tr",
"zh",
"ja",
"ko",
"hi",
"vi",
"arxiv:1901.07291",
"arxiv:1911.02116",
"arxiv:1910.09700",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | null | null | xlm-mlm-17-1280 | 200 | 1 | transformers | 3,616 | ---
language:
- multilingual
- en
- fr
- es
- de
- it
- pt
- nl
- sv
- pl
- ru
- ar
- tr
- zh
- ja
- ko
- hi
- vi
license: cc-by-nc-4.0
---
# xlm-mlm-17-1280
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
xlm-mlm-17-1280 is the XLM model, which was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau, trained on text in 17 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective.
## Model Description
- **Developed by:** See [associated paper](https://arxiv.org/abs/1901.07291) and [GitHub Repo](https://github.com/facebookresearch/XLM)
- **Model type:** Language model
- **Language(s) (NLP):** 17 languages, see [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for full list.
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-mlm-17-1280](https://huggingface.co/xlm-mlm-17-1280)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
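A minimal masked-language-modeling sketch (the mask token is read from the tokenizer, since XLM does not use the BERT-style `[MASK]` token):
```python
from transformers import pipeline

# Sketch: fill a masked token with the multilingual XLM checkpoint.
fill_mask = pipeline("fill-mask", model="xlm-mlm-17-1280")
prompt = f"Paris is the {fill_mask.tokenizer.mask_token} of France."
print(fill_mask(prompt))
```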
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. Also see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
This model is the XLM model trained on text in 17 languages. The preprocessing included tokenization and byte-pair-encoding. See the [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) and the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the training data and training procedure.
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the XNLI cross-lingual classification task (see the [XNLI data card](https://huggingface.co/datasets/xnli) for more details on XNLI) using the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-17-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), and Chinese (zh):
|Language| en | es | de | ar | zh |
|:------:|:--:|:---:|:--:|:--:|:--:|
| |84.8|79.4 |76.2|71.5|75 |
See the [GitHub repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
[Conneau et al. (2020)](https://arxiv.org/pdf/1911.02116.pdf) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7).
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
More information needed. See the [ipython notebook](https://github.com/facebookresearch/XLM/blob/main/generate-embeddings.ipynb) in the associated [GitHub repo](https://github.com/facebookresearch/XLM#the-17-and-100-languages) for examples. |
Helsinki-NLP/opus-mt-en-bg | f3d8448586d3626a38582ec5ff9e61f7d5940255 | 2021-01-18T08:05:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-bg | 200 | null | transformers | 3,617 | ---
language:
- en
- bg
tags:
- translation
license: apache-2.0
---
### eng-bul
* source group: English
* target group: Bulgarian
* OPUS readme: [eng-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md)
* model: transformer
* source language(s): eng
* target language(s): bul bul_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch below
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.eval.txt)
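A minimal translation sketch with the Marian classes (the `>>bul<<` prefix follows the sentence-initial language-token note above):
```python
from transformers import MarianMTModel, MarianTokenizer

# Sketch: English -> Bulgarian; the >>bul<< token selects the target language variant.
model_name = "Helsinki-NLP/opus-mt-en-bg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>bul<< How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```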
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.bul | 50.6 | 0.680 |
### System Info:
- hf_name: eng-bul
- source_languages: eng
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bg']
- src_constituents: {'eng'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: bul
- short_pair: en-bg
- chrF2_score: 0.68
- bleu: 50.6
- brevity_penalty: 0.96
- ref_len: 69504.0
- src_name: English
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: bg
- prefer_old: False
- long_pair: eng-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gem-en | a0433c2234bd6b0ee27e493c1d47ba46322312bc | 2021-01-18T08:51:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gem-en | 200 | 1 | transformers | 3,618 | ---
language:
- da
- sv
- af
- nn
- fy
- fo
- de
- nb
- nl
- is
- en
- lb
- yi
- gem
tags:
- translation
license: apache-2.0
---
### gem-eng
* source group: Germanic languages
* target group: English
* OPUS readme: [gem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.542 |
| news-test2008-deueng.deu.eng | 26.3 | 0.536 |
| newstest2009-deueng.deu.eng | 25.1 | 0.531 |
| newstest2010-deueng.deu.eng | 28.3 | 0.569 |
| newstest2011-deueng.deu.eng | 26.0 | 0.543 |
| newstest2012-deueng.deu.eng | 26.8 | 0.550 |
| newstest2013-deueng.deu.eng | 30.2 | 0.570 |
| newstest2014-deen-deueng.deu.eng | 30.7 | 0.574 |
| newstest2015-ende-deueng.deu.eng | 32.1 | 0.581 |
| newstest2016-ende-deueng.deu.eng | 36.9 | 0.624 |
| newstest2017-ende-deueng.deu.eng | 32.8 | 0.588 |
| newstest2018-ende-deueng.deu.eng | 40.2 | 0.640 |
| newstest2019-deen-deueng.deu.eng | 36.8 | 0.614 |
| Tatoeba-test.afr-eng.afr.eng | 62.8 | 0.758 |
| Tatoeba-test.ang-eng.ang.eng | 10.5 | 0.262 |
| Tatoeba-test.dan-eng.dan.eng | 61.6 | 0.754 |
| Tatoeba-test.deu-eng.deu.eng | 49.7 | 0.665 |
| Tatoeba-test.enm-eng.enm.eng | 23.9 | 0.491 |
| Tatoeba-test.fao-eng.fao.eng | 23.4 | 0.446 |
| Tatoeba-test.frr-eng.frr.eng | 10.2 | 0.184 |
| Tatoeba-test.fry-eng.fry.eng | 29.6 | 0.486 |
| Tatoeba-test.gos-eng.gos.eng | 17.8 | 0.352 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.058 |
| Tatoeba-test.gsw-eng.gsw.eng | 15.3 | 0.333 |
| Tatoeba-test.isl-eng.isl.eng | 51.0 | 0.669 |
| Tatoeba-test.ksh-eng.ksh.eng | 6.7 | 0.266 |
| Tatoeba-test.ltz-eng.ltz.eng | 33.0 | 0.505 |
| Tatoeba-test.multi.eng | 54.0 | 0.687 |
| Tatoeba-test.nds-eng.nds.eng | 33.6 | 0.529 |
| Tatoeba-test.nld-eng.nld.eng | 58.9 | 0.733 |
| Tatoeba-test.non-eng.non.eng | 37.3 | 0.546 |
| Tatoeba-test.nor-eng.nor.eng | 54.9 | 0.696 |
| Tatoeba-test.pdc-eng.pdc.eng | 29.6 | 0.446 |
| Tatoeba-test.sco-eng.sco.eng | 40.5 | 0.581 |
| Tatoeba-test.stq-eng.stq.eng | 14.5 | 0.361 |
| Tatoeba-test.swe-eng.swe.eng | 62.0 | 0.745 |
| Tatoeba-test.swg-eng.swg.eng | 17.1 | 0.334 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.400 |
### System Info:
- hf_name: gem-eng
- source_languages: gem
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gem
- tgt_alpha3: eng
- short_pair: gem-en
- chrF2_score: 0.687
- bleu: 54.0
- brevity_penalty: 0.993
- ref_len: 72120.0
- src_name: Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gem
- tgt_alpha2: en
- prefer_old: False
- long_pair: gem-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-pa-en | e501d2bca4de5b63384fcdafc6b6185433efb4a9 | 2021-09-10T14:00:06.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pa",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pa-en | 200 | null | transformers | 3,619 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pa-en
* source languages: pa
* target languages: en
* OPUS readme: [pa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pa-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pa.en | 20.6 | 0.320 |
| Tatoeba.pa.en | 29.3 | 0.464 |
|
ShannonAI/ChineseBERT-base | aa8b6fa9c3427f77b0911b07ab35f2b1b8bf248b | 2022-06-19T08:14:46.000Z | [
"pytorch",
"arxiv:2106.16038"
] | null | false | ShannonAI | null | ShannonAI/ChineseBERT-base | 200 | 6 | null | 3,620 | # ChineseBERT-base
This repository contains the code, model, and dataset for **ChineseBERT** (ACL 2021).
paper:
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/abs/2106.16038)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
code:
[ChineseBERT github link](https://github.com/ShannonAI/ChineseBert)
## Model description
We propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese
characters into language model pretraining.
First, for each Chinese character, we obtain three kinds of embeddings.
- **Char Embedding:** the same as the original BERT token embedding.
- **Glyph Embedding:** captures visual features based on different fonts of a Chinese character.
- **Pinyin Embedding:** captures phonetic features from the pinyin sequence of a Chinese character.
Then, the char, glyph, and pinyin embeddings are concatenated and mapped to a D-dimensional fusion embedding through a fully connected layer.
Finally, the fusion embedding is added to the position embedding, and the result is fed as input to the BERT model.
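A schematic sketch of that fusion step (module names and dimensions are illustrative assumptions, not the released implementation):
```python
import torch
import torch.nn as nn

class FusionEmbedding(nn.Module):
    """Illustrative sketch of the fusion described above, not the released code."""

    def __init__(self, d_model: int = 768):
        super().__init__()
        # Char, glyph and pinyin embeddings are concatenated and mapped back to d_model.
        self.fuse = nn.Linear(3 * d_model, d_model)

    def forward(self, char_emb, glyph_emb, pinyin_emb, pos_emb):
        fused = self.fuse(torch.cat([char_emb, glyph_emb, pinyin_emb], dim=-1))
        return fused + pos_emb  # the result is fed as input to the BERT encoder
```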
The following image shows an overview architecture of ChineseBERT model.

ChineseBERT leverages the glyph and pinyin information of Chinese characters to enhance the model's ability to capture context semantics from surface character forms and to disambiguate polyphonic characters in Chinese. |
facebook/wav2vec2-large-xlsr-53-dutch | ced00f8603b017d58cb3bcb3883e8c28f19ccdb4 | 2021-07-06T02:35:35.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-xlsr-53-dutch | 200 | null | transformers | 3,621 | ---
language: nl
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice NL Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-dutch"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "nl", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 21.1 % |
maxidl/wav2vec2-large-xlsr-german | 893c0ff874902327f50822f871ff78e4f584fbf5 | 2021-07-06T12:32:21.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | maxidl | null | maxidl/wav2vec2-large-xlsr-german | 200 | null | transformers | 3,622 | ---
language: de
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 CV-de
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 12.77
---
# Wav2Vec2-Large-XLSR-53-German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "de", split="test[:8]") # use a batch of 8 for demo purposes
processor = Wav2Vec2Processor.from_pretrained("maxidl/wav2vec2-large-xlsr-german")
model = Wav2Vec2ForCTC.from_pretrained("maxidl/wav2vec2-large-xlsr-german")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
"""
Preprocessing the dataset by:
- loading audio files
- resampling to 16kHz
- converting to array
- prepare input tensor using the processor
"""
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
# run forward
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"])
"""
Example Result:
Prediction: [
'zieh durch bittet draußen die schuhe aus',
'es kommt zugvorgebauten fo',
'ihre vorterstrecken erschienen it modemagazinen wie der voge karpes basar mariclair',
'fürliepert eine auch für manachen ungewöhnlich lange drittelliste',
'er wurde zu ehren des reichskanzlers otto von bismarck errichtet',
'was solls ich bin bereit',
'das internet besteht aus vielen computern die miteinander verbunden sind',
'der uranus ist der siebinteplanet in unserem sonnensystem s'
]
Reference: [
'Zieht euch bitte draußen die Schuhe aus.',
'Es kommt zum Showdown in Gstaad.',
'Ihre Fotostrecken erschienen in Modemagazinen wie der Vogue, Harper’s Bazaar und Marie Claire.',
'Felipe hat eine auch für Monarchen ungewöhnlich lange Titelliste.',
'Er wurde zu Ehren des Reichskanzlers Otto von Bismarck errichtet.',
'Was solls, ich bin bereit.',
'Das Internet besteht aus vielen Computern, die miteinander verbunden sind.',
'Der Uranus ist der siebente Planet in unserem Sonnensystem.'
]
"""
```
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice:
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
"""
Evaluation on the full test set:
- takes ~20mins (RTX 3090).
- requires ~170GB RAM to compute the WER. Below, we use a chunked implementation of WER to avoid large RAM consumption.
"""
test_dataset = load_dataset("common_voice", "de", split="test") # use "test[:1%]" for 1% sample
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("maxidl/wav2vec2-large-xlsr-german")
model = Wav2Vec2ForCTC.from_pretrained("maxidl/wav2vec2-large-xlsr-german")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8) # batch_size=8 -> requires ~14.5GB GPU memory
# non-chunked version:
# print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
# WER: 12.900291
# Chunked version, see https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/5:
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("Total (chunk_size=1000), WER: {:2f}".format(100 * chunked_wer(result["pred_strings"], result["sentence"], chunk_size=1000)))
# Total (chunk=1000), WER: 12.768981
```
**Test Result**: WER: 12.77 %
## Training
The Common Voice German `train` and `validation` were used for training.
The script used for training can be found [here](https://github.com/maxidl/wav2vec2).
The model was trained for 50k steps, taking around 30 hours on a single A100.
The arguments used for training this model are:
```
python run_finetuning.py \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="de" \
--output_dir=./wav2vec2-large-xlsr-german \
--preprocessing_num_workers="16" \
--overwrite_output_dir \
--num_train_epochs="20" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="32" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--save_steps="5000" \
--eval_steps="5000" \
--logging_steps="1000" \
--save_total_limit="3" \
--freeze_feature_extractor \
--activation_dropout="0.055" \
--attention_dropout="0.094" \
--feat_proj_dropout="0.04" \
--layerdrop="0.04" \
--mask_time_prob="0.08" \
--gradient_checkpointing="1" \
--fp16 \
--do_train \
--do_eval \
--dataloader_num_workers="16" \
--group_by_length
```
|
pierrerappolt/cart | e1b8e44a537f4a3355859982f3327d4478eb4797 | 2022-03-27T02:53:44.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:squadv2",
"transformers",
"html",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | pierrerappolt | null | pierrerappolt/cart | 200 | null | transformers | 3,623 | ---
language: en
tags:
- html
license: apache-2.0
datasets:
- squadv2
inference:
parameters:
handle_impossible_answer: true
---
Txt |
facebook/deit-tiny-distilled-patch16-224 | 488ad407f8f6bcfc79829ece3507658c46d15563 | 2022-07-13T11:41:55.000Z | [
"pytorch",
"tf",
"deit",
"image-classification",
"dataset:imagenet",
"arxiv:2012.12877",
"arxiv:2006.03677",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/deit-tiny-distilled-patch16-224 | 199 | 1 | transformers | 3,624 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# Distilled Data-efficient Image Transformer (tiny-sized model)
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-tiny-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-tiny-distilled-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
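For reference, the inference-time preprocessing corresponds roughly to the following torchvision transforms (a sketch, not the exact `DeiTFeatureExtractor` implementation; the 256/224 sizes and ImageNet statistics follow the description above):
```python
from torchvision import transforms
# Resize to 256, center-crop to 224, normalize with the ImageNet mean and standard deviation
inference_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```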
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| **DeiT-tiny distilled** | **74.5** | **91.9** | **6M** | **https://huggingface.co/facebook/deit-tiny-distilled-patch16-224** |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
liaad/srl-en_xlmr-large | 93bf9d8074ab8794d705a1774ec2e7f33e25919c | 2021-09-22T08:56:14.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"transformers",
"xlm-roberta-large",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-en_xlmr-large | 199 | null | transformers | 3,625 | ---
language:
- multilingual
- pt
- en
tags:
- xlm-roberta-large
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# XLM-R large fine-tuned on English semantic role labeling
## Model description
This model is [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) fine-tuned on the English CoNLL-formatted OntoNotes v5.0 semantic role labeling data. It is part of a project that resulted in the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_xlmr-large")
model = AutoModel.from_pretrained("liaad/srl-en_xlmr-large")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- This model does not include a Tensorflow version. This is because the "type_vocab_size" in this model was changed (from 1 to 2) and, therefore, it cannot be easily converted to Tensorflow.
- The models were trained only for 5 epochs.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The models were trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
facebook/regnet-y-320-seer | 1b737c5ac08d98ca903d2c1d87d2c83e95df054e | 2022-06-30T10:21:14.000Z | [
"pytorch",
"tf",
"regnet",
"feature-extraction",
"arxiv:2202.08360",
"transformers",
"vision",
"license:apache-2.0"
] | feature-extraction | false | facebook | null | facebook/regnet-y-320-seer | 199 | null | transformers | 3,626 | ---
license: apache-2.0
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNetModel
RegNetModel model was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER).
Disclaimer: The team releasing RegNetModel did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on billions of random images from the internet.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetModel.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 1088, 7, 7]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
Chemsseddine/bert2gpt2SUMM-finetuned-mlsum | 21c2f20b157fac797f5b45eb23163c1a5f8c4d59 | 2022-06-30T19:54:06.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"fr",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chemsseddine | null | Chemsseddine/bert2gpt2SUMM-finetuned-mlsum | 199 | null | transformers | 3,627 | ---
language: fr
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new
results: []
---
<img src="https://huggingface.co/Chemsseddine/bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new/resolve/main/logobert2gpt2.png" alt="Map of positive probabilities per country." width="200"/>
# bert2gpt2SUMM-finetuned-mlsum
---
## This model is used for French summarization
- Problem type: Summarization
- Model ID: 980832493
- CO2 Emissions (in grams): 0.10685501288084795
This model is a fine-tuned version of [Chemsseddine/bert2gpt2SUMM](https://huggingface.co/Chemsseddine/bert2gpt2SUMM) on the MLSUM dataset.
It achieves the following results on the evaluation set:
- Loss: 4.03749418258667
- Rouge1: 28.8384
- Rouge2: 10.7511
- RougeL: 27.0842
- RougeLsum: 27.5118
- Gen Len: 22.0625
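### Usage example
A minimal sketch for generating a summary with this checkpoint (this example is not part of the original card; it assumes the checkpoint bundles a tokenizer and a configured `decoder_start_token_id`):
```python
from transformers import AutoTokenizer, EncoderDecoderModel
model_name = "Chemsseddine/bert2gpt2SUMM-finetuned-mlsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)
article = "Texte en français à résumer..."  # replace with the French text to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```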
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 33199 | 4.03749 | 28.8384 | 10.7511 | 27.0842 | 27.5118 | 22.0625 |
|
justheuristic/test-bloomd-350m | e839f6d8cfb0061ee1100c3ddde71db8a7576f6c | 2022-07-07T01:20:37.000Z | [
"pytorch",
"bloom",
"transformers"
] | null | false | justheuristic | null | justheuristic/test-bloomd-350m | 199 | null | transformers | 3,628 | Entry not found |
Helsinki-NLP/opus-mt-yo-en | 94685c1b0eb549569c394538ccd482b6132321a0 | 2021-09-11T10:52:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yo",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yo-en | 198 | null | transformers | 3,629 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-en
* source languages: yo
* target languages: en
* OPUS readme: [yo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.eval.txt)
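A minimal usage sketch with the standard MarianMT classes in 🤗 Transformers (this example is not part of the original OPUS readme):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-yo-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
src_texts = ["Báwo ni?"]  # illustrative Yoruba input
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```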
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.en | 33.8 | 0.496 |
|
aware-ai/byt5-german-grammar | 8ab29798880b659a170a97a3ec9626a28e46ed3f | 2021-06-23T12:20:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"de",
"transformers",
"grammar",
"autotrain_compatible"
] | text2text-generation | false | aware-ai | null | aware-ai/byt5-german-grammar | 198 | 1 | transformers | 3,630 | ---
language: de
tags:
- grammar
widget:
- text: "correct german grammar: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön"
---
example outputs:
input: ich liebe das leben --> output: Ich liebe das Leben.
input: es ist schön so viele tolle menschen um sich zu haben denn ohne sie wäre es nicht so schön --> output: Es ist schön, so viele tolle Menschen, um sich zu haben, denn ohne sie wäre es nicht so schön.
input: der kunde hat ausdrücklich nach dirk verlangt weil er den rabatt haben möchte --> output: Der Kunde hat ausdrücklich nach Dirk verlangt, weil er den Rabatt haben möchte.
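Outputs like the examples above can be generated with a short inference sketch (not part of the original card; the `correct german grammar: ` prefix follows the widget example):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "aware-ai/byt5-german-grammar"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "correct german grammar: der kunde hat ausdrücklich nach dirk verlangt weil er den rabatt haben möchte"
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```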
The data can be prepared like this:
The `broken_text` is used as the model input, while the original `text` is the target output.
```python
import re
import phonetics
import random
chars_to_ignore_regex = "[^A-Za-z0-9\ö\ä\ü\Ö\Ä\Ü\ß\-,;.:?! ]+"
broken_chars_to_ignore_regex = "[^A-Za-z0-9\ö\ä\ü\Ö\Ä\Ü\ß\- ]+"
def do_manipulation(string):
text = re.sub(chars_to_ignore_regex, '', string)
broken_text = re.sub(broken_chars_to_ignore_regex, "", text.lower())
if(random.randint(0,100) >= 50):
for xyz in range(int(len(broken_text.split(" "))/4)):
if(random.randint(0,100) > 30):
randc = random.choice(broken_text.split(" "))
if(random.randint(0,10) > 4):
broken_text = broken_text.replace(randc, ''.join(random.choice('abcdefghijklmnopqrstuvxyz') for _ in range(len(randc))).lower())
else:
broken_text = broken_text.replace(randc, phonetics.metaphone(randc).lower())
return text, broken_text
```
|
persiannlp/mt5-large-parsinlu-translation_en_fa | 53fec354caa45718befa6f016a8f515a43ca8cd4 | 2021-09-23T16:20:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"machine-translation",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-large-parsinlu-translation_en_fa | 198 | null | transformers | 3,631 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['خدا را شکر که آفریننده و نگهدار جهان است.']
['خود را با کفن سفید می پوشد و به شکل برادرانه ای در']
['او از همه ی وبلاگ نویسان و سازمان هایی که از او حمایت کردند']
['مسابقات بین آوریل و دسامبر در فرودگاه والی عبدین نزدیک بی']
['من می خواهم پایان نامه دکتری را در رشته علوم کامپیوتر در']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
sismetanin/xlm_roberta_base-ru-sentiment-rusentiment | 611b0acab81780953a2477cf1e17ddc4429b396f | 2021-02-25T23:57:49.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_base-ru-sentiment-rusentiment | 198 | null | transformers | 3,632 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XML-RoBERTa-Base-ru-sentiment-RuSentiment
XML-RoBERTa-Base-ru-sentiment-RuSentiment is a [XML-RoBERTa-Base](https://huggingface.co/xlm-roberta-base) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
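A minimal usage sketch (not part of the original card) for scoring a Russian post with this checkpoint; the class-id-to-label mapping is taken from `model.config.id2label`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "sismetanin/xlm_roberta_base-ru-sentiment-rusentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
text = "Очень понравилось, всем рекомендую!"  # "Really liked it, recommend it to everyone!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```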
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
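In code, this scoring scheme amounts to the following sketch (not from the original card; the values are illustrative, taken from the XLM-RoBERTa-Base row of the table):
```python
def task_score(metric_values):
    # unweighted average of a task's metrics, e.g. weighted F1 and macro F1 for RuSentiment
    return sum(metric_values) / len(metric_values)
def leaderboard_score(per_task_metrics):
    # macro-average of the per-task scores
    task_scores = [task_score(values) for values in per_task_metrics.values()]
    return sum(task_scores) / len(task_scores)
example = {
    "RuSentiment": [74.26, 70.44],  # weighted F1, macro F1
    "RuReviews": [78.28],
}
print(leaderboard_score(example))  # combined score for this two-task example
```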
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
nlpaueb/sec-bert-shape | 9829e591d1c801f9dcd887365d97da179a71b53e | 2022-04-28T14:46:51.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2203.06482",
"transformers",
"finance",
"financial",
"license:cc-by-sa-4.0",
"fill-mask"
] | fill-mask | false | nlpaueb | null | nlpaueb/sec-bert-shape | 198 | 6 | transformers | 3,633 | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/0yz81K9/sec-bert-logo.png
tags:
- finance
- financial
widget:
- text: "Total net sales decreased [MASK]% or $[X.X] billion during [XXXX] compared to [XXXX]"
- text: "Total net sales decreased [X]% or $[MASK] billion during [XXXX] compared to [XXXX]."
- text: "Total net sales decreased [X]% or $[X.X] billion during [MASK] compared to [XXXX]."
- text: "During [MASK], the Company repurchased $[XX.X] billion of its common stock and paid dividend equivalents of $[XX.X] billion."
- text: "During 2019, the Company repurchased $[MASK] billion of its common stock and paid dividend equivalents of $[XX.X] billion."
---
# SEC-BERT
<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="sec-bert-logo" width="400"/>
<div style="text-align: justify">
SEC-BERT is a family of BERT models for the financial domain, intended to assist financial NLP research and FinTech applications.
SEC-BERT consists of the following models:
* [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
* [**SEC-BERT-NUM**](https://huggingface.co/nlpaueb/sec-bert-num): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token (handling all numeric expressions in a uniform manner, disallowing their fragmentation).
* **SEC-BERT-SHAPE** (this model): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
</div>
## Pre-training corpus
The model was pre-trained on 260,773 10-K filings from 1993-2019, publicly available at <a href="https://www.sec.gov/">U.S. Securities and Exchange Commission (SEC)</a>
## Pre-training details
<div style="text-align: justify">
* We created a new vocabulary of 30k subwords by training a [BertWordPieceTokenizer](https://github.com/huggingface/tokenizers) from scratch on the pre-training corpus.
* We trained BERT using the official code provided in [Google BERT's GitHub repository](https://github.com/google-research/bert).
* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint in the desired format in order to be able to load the model in two lines of code for both PyTorch and TF2 users.
* We release a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TRC)](https://sites.research.google/trc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
</div>
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-shape")
model = AutoModel.from_pretrained("nlpaueb/sec-bert-shape")
```
## Pre-process Text
<div style="text-align: justify">
To use SEC-BERT-SHAPE, you have to pre-process texts replacing every numerical token with the corresponding shape pseudo-token, from a list of 214 predefined shape pseudo-tokens. If the numerical token does not correspond to any shape pseudo-token we replace it with the [NUM] pseudo-token.
Below there is an example of how you can pre-process a simple sentence. This approach is quite simple; feel free to modify it as you see fit.
</div>
```python
import re
import spacy
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-shape")
spacy_tokenizer = spacy.load("en_core_web_sm")
sentence = "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
def sec_bert_shape_preprocess(text):
    tokens = [t.text for t in spacy_tokenizer(text)]
processed_text = []
for token in tokens:
if re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
shape = '[' + re.sub(r'\d', 'X', token) + ']'
if shape in tokenizer.additional_special_tokens:
processed_text.append(shape)
else:
processed_text.append('[NUM]')
else:
processed_text.append(token)
return ' '.join(processed_text)
tokenized_sentence = tokenizer.tokenize(sec_bert_shape_preprocess(sentence))
print(tokenized_sentence)
"""
['total', 'net', 'sales', 'decreased', '[X]', '%', 'or', '$', '[X.X]', 'billion', 'during', '[XXXX]', 'compared', 'to', '[XXXX]', '.']
"""
```
## Using SEC-BERT variants as Language Models
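The predictions in the tables below can be reproduced with a fill-mask pipeline; a minimal sketch (not part of the original card, with the input already pre-processed as described above):
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="nlpaueb/sec-bert-shape")
masked_sentence = "total net sales [MASK] [X] % or $ [X.X] billion during [XXXX] compared to [XXXX] ."
for prediction in fill_mask(masked_sentence, top_k=5):
    print(f"{prediction['token_str']} ({prediction['score']:.3f})")
```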
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales [MASK] 2% or $5.4 billion during 2019 compared to 2018. | decreased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | increased (0.221), were (0.131), are (0.103), rose (0.075), of (0.058)
| **SEC-BERT-BASE** | increased (0.678), decreased (0.282), declined (0.017), grew (0.016), rose (0.004)
| **SEC-BERT-NUM** | increased (0.753), decreased (0.211), grew (0.019), declined (0.010), rose (0.006)
| **SEC-BERT-SHAPE** | increased (0.747), decreased (0.214), grew (0.021), declined (0.013), rose (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 [MASK] during 2019 compared to 2018. | billion
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | billion (0.841), million (0.097), trillion (0.028), ##m (0.015), ##bn (0.006)
| **SEC-BERT-BASE** | million (0.972), billion (0.028), millions (0.000), ##million (0.000), m (0.000)
| **SEC-BERT-NUM** | million (0.974), billion (0.012), , (0.010), thousand (0.003), m (0.000)
| **SEC-BERT-SHAPE** | million (0.978), billion (0.021), % (0.000), , (0.000), millions (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased [MASK]% or $5.4 billion during 2019 compared to 2018. | 2
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 20 (0.031), 10 (0.030), 6 (0.029), 4 (0.027), 30 (0.027)
| **SEC-BERT-BASE** | 13 (0.045), 12 (0.040), 11 (0.040), 14 (0.035), 10 (0.035)
| **SEC-BERT-NUM** | [NUM] (1.000), one (0.000), five (0.000), three (0.000), seven (0.000)
| **SEC-BERT-SHAPE** | [XX] (0.316), [XX.X] (0.253), [X.X] (0.237), [X] (0.188), [X.XX] (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2[MASK] or $5.4 billion during 2019 compared to 2018. | %
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | % (0.795), percent (0.174), ##fold (0.009), billion (0.004), times (0.004)
| **SEC-BERT-BASE** | % (0.924), percent (0.076), points (0.000), , (0.000), times (0.000)
| **SEC-BERT-NUM** | % (0.882), percent (0.118), million (0.000), units (0.000), bps (0.000)
| **SEC-BERT-SHAPE** | % (0.961), percent (0.039), bps (0.000), , (0.000), bcf (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $[MASK] billion during 2019 compared to 2018. | 5.4
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 1 (0.074), 4 (0.045), 3 (0.044), 2 (0.037), 5 (0.034)
| **SEC-BERT-BASE** | 1 (0.218), 2 (0.136), 3 (0.078), 4 (0.066), 5 (0.048)
| **SEC-BERT-NUM** | [NUM] (1.000), l (0.000), 1 (0.000), - (0.000), 30 (0.000)
| **SEC-BERT-SHAPE** | [X.X] (0.787), [X.XX] (0.095), [XX.X] (0.049), [X.XXX] (0.046), [X] (0.013)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during [MASK] compared to 2018. | 2019
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.485), 2018 (0.169), 2016 (0.164), 2015 (0.070), 2014 (0.022)
| **SEC-BERT-BASE** | 2019 (0.990), 2017 (0.007), 2018 (0.003), 2020 (0.000), 2015 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), as (0.000), fiscal (0.000), year (0.000), when (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), as (0.000), year (0.000), periods (0.000), , (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during 2019 compared to [MASK]. | 2018
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.100), 2016 (0.097), above (0.054), inflation (0.050), previously (0.037)
| **SEC-BERT-BASE** | 2018 (0.999), 2019 (0.000), 2017 (0.000), 2016 (0.000), 2014 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), year (0.000), last (0.000), sales (0.000), fiscal (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), year (0.000), sales (0.000), prior (0.000), years (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company [MASK] $67.1 billion of its common stock and paid dividend equivalents of $14.1 billion. | repurchased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | held (0.229), sold (0.192), acquired (0.172), owned (0.052), traded (0.033)
| **SEC-BERT-BASE** | repurchased (0.913), issued (0.036), purchased (0.029), redeemed (0.010), sold (0.003)
| **SEC-BERT-NUM** | repurchased (0.917), purchased (0.054), reacquired (0.013), issued (0.005), acquired (0.003)
| **SEC-BERT-SHAPE** | repurchased (0.902), purchased (0.068), issued (0.010), reacquired (0.008), redeemed (0.006)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common [MASK] and paid dividend equivalents of $14.1 billion. | stock
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | stock (0.835), assets (0.039), equity (0.025), debt (0.021), bonds (0.017)
| **SEC-BERT-BASE** | stock (0.857), shares (0.135), equity (0.004), units (0.002), securities (0.000)
| **SEC-BERT-NUM** | stock (0.842), shares (0.157), equity (0.000), securities (0.000), units (0.000)
| **SEC-BERT-SHAPE** | stock (0.888), shares (0.109), equity (0.001), securities (0.001), stocks (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid [MASK] equivalents of $14.1 billion. | dividend
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | cash (0.276), net (0.128), annual (0.083), the (0.040), debt (0.027)
| **SEC-BERT-BASE** | dividend (0.890), cash (0.018), dividends (0.016), share (0.013), tax (0.010)
| **SEC-BERT-NUM** | dividend (0.735), cash (0.115), share (0.087), tax (0.025), stock (0.013)
| **SEC-BERT-SHAPE** | dividend (0.655), cash (0.248), dividends (0.042), share (0.019), out (0.003)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid dividend [MASK] of $14.1 billion. | equivalents
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | revenue (0.085), earnings (0.078), rates (0.065), amounts (0.064), proceeds (0.062)
| **SEC-BERT-BASE** | payments (0.790), distributions (0.087), equivalents (0.068), cash (0.013), amounts (0.004)
| **SEC-BERT-NUM** | payments (0.845), equivalents (0.097), distributions (0.024), increases (0.005), dividends (0.004)
| **SEC-BERT-SHAPE** | payments (0.784), equivalents (0.093), distributions (0.043), dividends (0.015), requirements (0.009)
## Publication
<div style="text-align: justify">
If you use this model cite the following article:<br>
[**FiNER: Financial Numeric Entity Recognition for XBRL Tagging**](https://arxiv.org/abs/2203.06482)<br>
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos and George Paliouras<br>
In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (Long Papers), Dublin, Republic of Ireland, May 22 - 27, 2022
</div>
```
@inproceedings{loukas-etal-2022-finer,
title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
author = {Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
Paliouras George},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
publisher = {Association for Computational Linguistics},
location = {Dublin, Republic of Ireland},
year = {2022},
url = {https://arxiv.org/abs/2203.06482}
}
```
## About Us
<div style="text-align: justify">
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
</div>
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) |
AIDA-UPM/MSTSb_stsb-xlm-r-multilingual | eea01579b009257faffea3516c0da1f89f2d99ba | 2021-07-21T18:32:31.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | AIDA-UPM | null | AIDA-UPM/MSTSb_stsb-xlm-r-multilingual | 197 | null | sentence-transformers | 3,634 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AIDA-UPM/MSTSb_stsb-xlm-r-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
model = AutoModel.from_pretrained('AIDA-UPM/MSTSb_stsb-xlm-r-multilingual')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AIDA-UPM/MSTSb_stsb-xlm-r-multilingual)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1438 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 4e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Intel/bert-large-uncased-sparse-90-unstructured-pruneofa | 29c4f2d3eef81d3960cf87b214516a49398c09ed | 2022-01-13T12:13:38.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"transformers",
"fill-mask"
] | fill-mask | false | Intel | null | Intel/bert-large-uncased-sparse-90-unstructured-pruneofa | 197 | null | transformers | 3,635 | ---
language: en
tags: fill-mask
datasets:
- wikipedia
- bookcorpus
---
# 90% Sparse BERT-Large (uncased) Prune OFA
This model is a result from our paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754) presented in ENLSP NeurIPS Workshop 2021.
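A minimal usage sketch (not part of the original card); the sparse checkpoint loads like any other BERT masked-LM model, since sparsity only affects the stored weight values:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
model_name = "Intel/bert-large-uncased-sparse-90-unstructured-pruneofa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("The capital of France is [MASK]."))
```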
For further details on the model and its result, see our paper and our implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all). |
Norod78/hebrew-gpt_neo-xl | 56f66a33c04fc5b0499b0f143ec08abc7ca436a2 | 2022-07-04T13:09:16.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"he",
"transformers",
"license:mit"
] | text-generation | false | Norod78 | null | Norod78/hebrew-gpt_neo-xl | 197 | 1 | transformers | 3,636 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
- text: "ובדרך ראינו שהגן"
license: mit
---
# hebrew-gpt_neo-xl
Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each model in this family was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
## Datasets
1. An assortment of various Hebrew corpuses - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew)
Created by Conneau & Wenzek et al. in 2020, the CC100-Hebrew dataset is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G, in the Hebrew language.
## Training Config
Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR>
## Usage
### Google Colab Notebook
Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR>
#### Simple usage sample code
```python
!pip install tokenizers==0.10.3 transformers==4.8.0
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl", pad_token_id=tokenizer.eos_token_id)
prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids != None:
max_len += len(encoded_prompt[0])
if max_len > 2048:
max_len = 2048
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
print("\
{}: {}".format(i, text))
print("\
" + 100 * '-')
```
|
comodoro/wav2vec2-xls-r-300m-cs-250 | 5d17dcbb516fa3532bb9b6ce4dc20d4779993142 | 2022-03-23T18:26:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:ovm",
"dataset:pscr",
"dataset:vystadial2016",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-cs-250 | 197 | null | transformers | 3,637 | ---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- mozilla-foundation/common_voice_8_0
- ovm
- pscr
- vystadial2016
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M 250h data
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cs
metrics:
- name: Test WER
type: wer
value: 7.3
- name: Test CER
type: cer
value: 2.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 43.44
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 38.5
---
# Czech wav2vec2-xls-r-300m-cs-250
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice 8.0 dataset as well as other datasets listed below.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Wer: 0.1475
- Cer: 0.0329
The `eval.py` script results using a language model are:
- WER: 0.07274312090176113
- CER: 0.021207369275558875
## Model description
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and the additional corpora listed below.
When using this model, make sure that your speech input is sampled at 16kHz.
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated using the attached `eval.py` script:
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-250 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs
```
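The LM-boosted WER/CER figures reported above come from beam-search CTC decoding with an n-gram language model on top of the acoustic model. A hedged sketch of such decoding with `pyctcdecode` is shown below; the KenLM file name is a placeholder, and for convenience `Wav2Vec2ProcessorWithLM` can be used instead when a language model ships with the repository:
```python
# Hedged sketch of LM-boosted CTC decoding with pyctcdecode; the KenLM path is a
# placeholder, not a file shipped with this card.
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250")

# pyctcdecode expects the labels ordered by token id
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels, kenlm_model_path="czech-ngram.arpa")  # placeholder path

def transcribe(speech_array, sampling_rate=16_000):
    inputs = processor(speech_array, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    return decoder.decode(logits[0].cpu().numpy())
```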
## Training and evaluation data
The Common Voice 8.0 `train` and `validation` datasets were used for training, as well as the following datasets:
- Šmídl, Luboš and Pražák, Aleš, 2013, OVM – Otázky Václava Moravce, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-000D-EC98-3.
- Pražák, Aleš and Šmídl, Luboš, 2012, Czech Parliament Meetings, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-0005-CF9C-4.
- Plátek, Ondřej; Dušek, Ondřej and Jurčíček, Filip, 2016, Vystadial 2016 – Czech data, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11234/1-1740.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 5
- mixed_precision_training: Native AMP
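For reference, here is a sketch of how the hyperparameters above would map onto `transformers.TrainingArguments`; the output directory is a placeholder and the exact training script is not part of this card:
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; the output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-cs-250",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=800,
    num_train_epochs=5,
    fp16=True,  # Native AMP mixed precision
)
```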
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.4203 | 0.16 | 800 | 3.3148 | 1.0 | 1.0 |
| 2.8151 | 0.32 | 1600 | 0.8508 | 0.8938 | 0.2345 |
| 0.9411 | 0.48 | 2400 | 0.3335 | 0.3723 | 0.0847 |
| 0.7408 | 0.64 | 3200 | 0.2573 | 0.2840 | 0.0642 |
| 0.6516 | 0.8 | 4000 | 0.2365 | 0.2581 | 0.0595 |
| 0.6242 | 0.96 | 4800 | 0.2039 | 0.2433 | 0.0541 |
| 0.5754 | 1.12 | 5600 | 0.1832 | 0.2156 | 0.0482 |
| 0.5626 | 1.28 | 6400 | 0.1827 | 0.2091 | 0.0463 |
| 0.5342 | 1.44 | 7200 | 0.1744 | 0.2033 | 0.0468 |
| 0.4965 | 1.6 | 8000 | 0.1705 | 0.1963 | 0.0444 |
| 0.5047 | 1.76 | 8800 | 0.1604 | 0.1889 | 0.0422 |
| 0.4814 | 1.92 | 9600 | 0.1604 | 0.1827 | 0.0411 |
| 0.4471 | 2.09 | 10400 | 0.1566 | 0.1822 | 0.0406 |
| 0.4509 | 2.25 | 11200 | 0.1619 | 0.1853 | 0.0432 |
| 0.4415 | 2.41 | 12000 | 0.1513 | 0.1764 | 0.0397 |
| 0.4313 | 2.57 | 12800 | 0.1515 | 0.1739 | 0.0392 |
| 0.4163 | 2.73 | 13600 | 0.1445 | 0.1695 | 0.0377 |
| 0.4142 | 2.89 | 14400 | 0.1478 | 0.1699 | 0.0385 |
| 0.4184 | 3.05 | 15200 | 0.1430 | 0.1669 | 0.0376 |
| 0.3886 | 3.21 | 16000 | 0.1433 | 0.1644 | 0.0374 |
| 0.3795 | 3.37 | 16800 | 0.1426 | 0.1648 | 0.0373 |
| 0.3859 | 3.53 | 17600 | 0.1357 | 0.1604 | 0.0361 |
| 0.3762 | 3.69 | 18400 | 0.1344 | 0.1558 | 0.0349 |
| 0.384 | 3.85 | 19200 | 0.1379 | 0.1576 | 0.0359 |
| 0.3762 | 4.01 | 20000 | 0.1344 | 0.1539 | 0.0346 |
| 0.3559 | 4.17 | 20800 | 0.1339 | 0.1525 | 0.0351 |
| 0.3683 | 4.33 | 21600 | 0.1315 | 0.1518 | 0.0342 |
| 0.3572 | 4.49 | 22400 | 0.1307 | 0.1507 | 0.0342 |
| 0.3494 | 4.65 | 23200 | 0.1294 | 0.1491 | 0.0335 |
| 0.3476 | 4.81 | 24000 | 0.1287 | 0.1491 | 0.0336 |
| 0.3475 | 4.97 | 24800 | 0.1271 | 0.1475 | 0.0329 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
filco306/gpt2-switchboard-paraphraser | c12a7bd906a6d45768ed4860bc2b08a533c6e5dd | 2021-08-28T23:33:47.000Z | [
"pytorch",
"text-generation",
"arxiv:2010.05700",
"transformers"
] | text-generation | false | filco306 | null | filco306/gpt2-switchboard-paraphraser | 197 | null | transformers | 3,638 | # GPT2 Switchboard style transfer paraphraser
This is the trained Switchboard model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
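A minimal, hedged loading-and-sampling sketch is shown below. The authors' own inference code wraps generation with its own input formatting, so treat this plain `generate` call as an illustration rather than the reference pipeline; if the checkpoint does not ship tokenizer files, fall back to the base `gpt2` tokenizer.
```python
# Illustrative only: the exact input formatting expected by the paraphraser is
# defined in the authors' repository and is not reproduced here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("filco306/gpt2-switchboard-paraphraser")  # or "gpt2" if no tokenizer files
model = AutoModelForCausalLM.from_pretrained("filco306/gpt2-switchboard-paraphraser")

inputs = tokenizer("I would really appreciate it if you could arrive on time.", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.9, max_length=60,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```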
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
neuropark/sahajBERT | e9159abbb5f8b352cc5ec59bababc66ba5382bb4 | 2021-06-24T16:49:26.000Z | [
"pytorch",
"albert",
"pretraining",
"bn",
"dataset:Wikipedia",
"dataset:Oscar",
"arxiv:1909.11942",
"transformers",
"collaborative",
"bengali",
"bangla",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | neuropark | null | neuropark/sahajBERT | 197 | 7 | transformers | 3,639 | ---
language: bn
tags:
- collaborative
- bengali
- albert
- bangla
license: apache-2.0
datasets:
- Wikipedia
- Oscar
widget:
- text: "জীবনে সবচেয়ে মূল্যবান জিনিস হচ্ছে [MASK]।"
pipeline_tag: fill-mask
---
# sahajBERT
<iframe width="100%" height="1100" frameborder="0"
src="https://observablehq.com/embed/@huggingface/participants-bubbles-chart?cells=c_noaws%2Ct_noaws%2Cviewof+currentDate"></iframe>
Collaboratively pre-trained model on Bengali language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives.
## Model description
<!-- You can embed local or remote images using `` -->
sahajBERT is a model composed of 1) a tokenizer specially designed for Bengali and 2) an [ALBERT](https://arxiv.org/abs/1909.11942) architecture collaboratively pre-trained on a dump of Wikipedia in Bengali and the Bengali part of OSCAR.
<!-- Add more information about the collaborative training when we have time / preprint available -->
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task that uses the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
We trained our model on 2 of these downstream tasks: [sequence classification](https://huggingface.co/neuropark/sahajBERT-NCC) and [token classification](https://huggingface.co/neuropark/sahajBERT-NER)
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import AlbertForMaskedLM, FillMaskPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertForMaskedLM.from_pretrained("neuropark/sahajBERT")
# Initialize pipeline
pipeline = FillMaskPipeline(tokenizer=tokenizer, model=model)
raw_text = "ধন্যবাদ। আপনার সাথে কথা [MASK] ভালো লাগলো" # Change me
pipeline(raw_text)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertModel, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
# Initialize model
model = AlbertModel.from_pretrained("neuropark/sahajBERT")
text = "ধন্যবাদ। আপনার সাথে কথা বলে ভালো লাগলো" # Change me
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The tokenizer was trained on the Bengali part of OSCAR and the model on a [dump of Wikipedia in Bengali](https://huggingface.co/datasets/lhoestq/wikipedia_bn) and the Bengali part of [OSCAR](https://huggingface.co/datasets/oscar).
## Training procedure
This model was trained in a collaborative manner by volunteer participants.
<!-- Add more information about the collaborative training when we have time / preprint available + Preprocessing, hardware used, hyperparameters... (maybe use figures)-->
### Contributors leaderboard
| Rank | Username | Total contributed runtime |
|:-------------:|:-------------:|-------------:|
| 1|[khalidsaifullaah](https://huggingface.co/khalidsaifullaah)|11 days 21:02:08|
| 2|[ishanbagchi](https://huggingface.co/ishanbagchi)|9 days 20:37:00|
| 3|[tanmoyio](https://huggingface.co/tanmoyio)|9 days 18:08:34|
| 4|[debajit](https://huggingface.co/debajit)|8 days 14:15:10|
| 5|[skylord](https://huggingface.co/skylord)|6 days 16:35:29|
| 6|[ibraheemmoosa](https://huggingface.co/ibraheemmoosa)|5 days 01:05:57|
| 7|[SaulLu](https://huggingface.co/SaulLu)|5 days 00:46:36|
| 8|[lhoestq](https://huggingface.co/lhoestq)|4 days 20:11:16|
| 9|[nilavya](https://huggingface.co/nilavya)|4 days 08:51:51|
|10|[Priyadarshan](https://huggingface.co/Priyadarshan)|4 days 02:28:55|
|11|[anuragshas](https://huggingface.co/anuragshas)|3 days 05:00:55|
|12|[sujitpal](https://huggingface.co/sujitpal)|2 days 20:52:33|
|13|[manandey](https://huggingface.co/manandey)|2 days 16:17:13|
|14|[albertvillanova](https://huggingface.co/albertvillanova)|2 days 14:14:31|
|15|[justheuristic](https://huggingface.co/justheuristic)|2 days 13:20:52|
|16|[w0lfw1tz](https://huggingface.co/w0lfw1tz)|2 days 07:22:48|
|17|[smoker](https://huggingface.co/smoker)|2 days 02:52:03|
|18|[Soumi](https://huggingface.co/Soumi)|1 days 20:42:02|
|19|[Anjali](https://huggingface.co/Anjali)|1 days 16:28:00|
|20|[OptimusPrime](https://huggingface.co/OptimusPrime)|1 days 09:16:57|
|21|[theainerd](https://huggingface.co/theainerd)|1 days 04:48:57|
|22|[yhn112](https://huggingface.co/yhn112)|0 days 20:57:02|
|23|[kolk](https://huggingface.co/kolk)|0 days 17:57:37|
|24|[arnab](https://huggingface.co/arnab)|0 days 17:54:12|
|25|[imavijit](https://huggingface.co/imavijit)|0 days 16:07:26|
|26|[osanseviero](https://huggingface.co/osanseviero)|0 days 14:16:45|
|27|[subhranilsarkar](https://huggingface.co/subhranilsarkar)|0 days 13:04:46|
|28|[sagnik1511](https://huggingface.co/sagnik1511)|0 days 12:24:57|
|29|[anindabitm](https://huggingface.co/anindabitm)|0 days 08:56:44|
|30|[borzunov](https://huggingface.co/borzunov)|0 days 04:07:35|
|31|[thomwolf](https://huggingface.co/thomwolf)|0 days 03:53:15|
|32|[priyadarshan](https://huggingface.co/priyadarshan)|0 days 03:40:11|
|33|[ali007](https://huggingface.co/ali007)|0 days 03:34:37|
|34|[sbrandeis](https://huggingface.co/sbrandeis)|0 days 03:18:16|
|35|[Preetha](https://huggingface.co/Preetha)|0 days 03:13:47|
|36|[Mrinal](https://huggingface.co/Mrinal)|0 days 03:01:43|
|37|[laxya007](https://huggingface.co/laxya007)|0 days 02:18:34|
|38|[lewtun](https://huggingface.co/lewtun)|0 days 00:34:43|
|39|[Rounak](https://huggingface.co/Rounak)|0 days 00:26:10|
|40|[kshmax](https://huggingface.co/kshmax)|0 days 00:06:38|
### Hardware used
<iframe width="100%" height="251" frameborder="0"
src="https://observablehq.com/embed/@huggingface/sahajbert-hardware?cells=c1_noaws"></iframe>
## Eval results
We evaluate the quality of sahajBERT against 2 other benchmark models ([XLM-R-large](https://huggingface.co/xlm-roberta-large) and [IndicBert](https://huggingface.co/ai4bharat/indic-bert)) by fine-tuning each pre-trained model 3 times on two downstream tasks in Bengali:
- **NER**: a named entity recognition on Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann) dataset
- **NCC**: a multi-class classification task on the Soham News Category Classification dataset from IndicGLUE
| Base pre-trained Model | NER - F1 (mean ± std) | NCC - Accuracy (mean ± std) |
|:-------------:|:-------------:|:-------------:|
|sahajBERT | 95.45 ± 0.53| 91.97 ± 0.47|
|[XLM-R-large](https://huggingface.co/xlm-roberta-large) | 96.48 ± 0.22| 90.05 ± 0.38|
|[IndicBert](https://huggingface.co/ai4bharat/indic-bert) | 92.52 ± 0.45| 74.46 ± 1.91|
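A compressed sketch of the token-classification setup behind the NER column is given below. The dataset is the Bengali split of WikiANN as stated above; the hyperparameters and the omitted label-alignment step are illustrative, not the exact recipe used for the reported numbers.
```python
# Illustrative fine-tuning sketch for Bengali NER (WikiANN); not the exact recipe
# behind the table above.
from datasets import load_dataset
from transformers import (AlbertForTokenClassification, PreTrainedTokenizerFast,
                          Trainer, TrainingArguments)

dataset = load_dataset("wikiann", "bn")
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")
model = AlbertForTokenClassification.from_pretrained(
    "neuropark/sahajBERT",
    num_labels=dataset["train"].features["ner_tags"].feature.num_classes,
)
# ... tokenize the texts and align the NER labels with the word pieces, then:
args = TrainingArguments("sahajbert-ner", learning_rate=2e-5, num_train_epochs=3)  # illustrative values
trainer = Trainer(model=model, args=args)  # pass the tokenized train/eval datasets here
```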
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` --> |
pucpr/clinicalnerpt-medical | 0d889f90b203734b0ba45904781a6779c8eac2b9 | 2021-10-13T09:28:28.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-medical | 197 | 3 | transformers | 3,640 | ---
language: "pt"
widget:
- text: "Hoje realizou avaliacao de mp-cdi, com eletrodos atrial e ventricular."
- text: "Paciente encaminhado a câmera hiperbárica no período da tarde."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Medical
The Medical NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs with the IOB2 format, starting from the BioBERTpt(all) model.
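The card does not ship a usage snippet; a minimal sketch with the token-classification pipeline is below (the example sentence is the widget text of this card, and the `aggregation_strategy` value is a common choice rather than something prescribed by the authors):
```python
from transformers import pipeline

# Minimal example; "simple" aggregation merges word-piece predictions into entity spans.
ner = pipeline(
    "ner",
    model="pucpr/clinicalnerpt-medical",
    tokenizer="pucpr/clinicalnerpt-medical",
    aggregation_strategy="simple",
)
print(ner("Hoje realizou avaliacao de mp-cdi, com eletrodos atrial e ventricular."))
```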
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
raynardj/wenyanwen-ancient-translate-to-modern | 164ab975150c37dcdf1bd2e782e2edd4b1add10c | 2022-01-08T04:22:30.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"zh",
"transformers",
"translation",
"古文",
"文言文",
"ancient",
"classical",
"autotrain_compatible"
] | translation | false | raynardj | null | raynardj/wenyanwen-ancient-translate-to-modern | 197 | 3 | transformers | 3,641 | ---
language:
- zh
- zh
tags:
- translation
- 古文
- 文言文
- ancient
- classical
widget:
- text: "此诚危急存亡之秋也"
---
# From Classical(ancient) Chinese to Modern Chinese
> This model translates Classical (ancient) Chinese to Modern Chinese. I assume anyone interested in this problem can read at least modern Chinese, hence... let me continue the documentation in Chinese
# 文言文(古文)到现代文的翻译器
> 这个模型已有做成应用, [【随无涯】](https://huggingface.co/spaces/raynardj/duguwen-classical-chinese-to-morden-translate)是一个huggingface spaces + streamlit 的古文阅读应用(含海量书籍), 可以在阅读时翻译
> 输入文言文, 可以是断句 或者 未断句的文言文, 模型会预测现代文的表述。 其他模型:
* 从[现代文翻译到文言文](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
> 从文言文到现代文的翻译器, 欢迎前往[我的github文言诗词项目页面探讨、加⭐️ ](https://github.com/raynardj/yuan)
> 训练语料是就是九十多万句句对, [数据集链接📚](https://github.com/BangBOOM/Classical-Chinese)。 训练时source序列(古文序列), 按照50%的概率整句去除所有标点符号。
## 推荐的inference 通道
**注意**
* 你必须将```generate```函数的```eos_token_id```设置为102就可以翻译出完整的语句, 不然翻译完了会有残留的语句(因为做熵的时候用pad标签=-100导致)。
目前huggingface 页面上compute按钮会有这个问题, 推荐使用以下代码来得到翻译结果
* 请设置```generate```的参数```num_beams>=3```, 以达到较好的翻译效果
* 请设置```generate```的参数```max_length```256, 不然结果会吃掉句子
```python
import torch
from transformers import (
    EncoderDecoderModel,
    AutoTokenizer
)
PRETRAINED = "raynardj/wenyanwen-ancient-translate-to-modern"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = EncoderDecoderModel.from_pretrained(PRETRAINED)
def inference(text):
    tk_kwargs = dict(
        truncation=True,
        max_length=128,
        padding="max_length",
        return_tensors='pt')
    inputs = tokenizer([text,], **tk_kwargs)
    with torch.no_grad():
        return tokenizer.batch_decode(
            model.generate(
                inputs.input_ids,
                attention_mask=inputs.attention_mask,
                num_beams=3,
                max_length=256,
                bos_token_id=101,
                eos_token_id=tokenizer.sep_token_id,
                pad_token_id=tokenizer.pad_token_id,
            ), skip_special_tokens=True)
```
## 目前版本的案例
> 当然, 拿比较熟知的语句过来, 通常会有些贻笑大方的失误, 大家如果有好玩的调戏案例, 也欢迎反馈
```python
>>> inference('非我族类其心必异')
['不 是 我 们 的 族 类 , 他 们 的 心 思 必 然 不 同 。']
>>> inference('肉食者鄙未能远谋')
['吃 肉 的 人 鄙 陋 , 不 能 长 远 谋 划 。']
# 这里我好几批模型都翻不出这个**输**字(甚至有一个版本翻成了秦始皇和汉武帝), 可能并不是很古朴的用法,
>>> inference('江山如此多娇引无数英雄竞折腰惜秦皇汉武略输文采唐宗宋祖稍逊风骚')
['江 山 如 此 多 , 招 引 无 数 的 英 雄 , 竞 相 折 腰 , 可 惜 秦 皇 、 汉 武 , 略 微 有 文 采 , 唐 宗 、 宋 祖 稍 稍 逊 出 风 雅 。']
>>> inference("清风徐来水波不兴")
['清 风 慢 慢 吹 来 , 水 波 不 兴 。']
>>> inference("无他唯手熟尔")
['没 有 别 的 事 , 只 是 手 熟 罢 了 。']
>>> inference("此诚危急存亡之秋也")
['这 实 在 是 危 急 存 亡 的 时 候 。']
```
## 其他文言诗词的资源
* [项目源代码 🌟, 欢迎+star提pr](https://github.com/raynardj/yuan)
* [跨语种搜索 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [现代文翻译古汉语的模型 ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [古汉语到现代文的翻译模型, 输入可以是未断句的句子 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [断句模型 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [意境关键词 和 藏头写诗🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry) |
tals/albert-base-vitaminc_wnei-fever | 01a830a7f0c8cf62df0b5d29b473179cdf21c525 | 2022-06-22T23:56:28.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"transformers"
] | text-classification | false | tals | null | tals/albert-base-vitaminc_wnei-fever | 197 | null | transformers | 3,642 | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL '21).
For more details see: https://github.com/TalSchuster/VitaminC
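A brief, hedged usage sketch for claim-evidence classification follows; the claim/evidence pair ordering mirrors common fact-verification practice (the authors' repository is the authority), and the label names are read from the checkpoint config at run time since the card does not spell out the class order.
```python
# Hedged sketch: classify a claim against a piece of evidence.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tals/albert-base-vitaminc_wnei-fever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "More than 100,000 Wikipedia revisions were collected for VitaminC."
evidence = "We collect over 100,000 Wikipedia revisions that modify an underlying fact."

inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names come from the checkpoint config
```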
When using this model, please cite the paper.
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
unicamp-dl/ptt5-small-portuguese-vocab | 79a744f213b1f3a1a95f27f2230202440be253d1 | 2021-06-23T14:34:42.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"pt",
"dataset:brWaC",
"transformers",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-small-portuguese-vocab | 197 | null | transformers | 3,643 | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained in the BrWac corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
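As a quick sanity check of the pretrained checkpoint, the usual T5 span-filling can be tried. The Portuguese sentence and the assumption that the Portuguese-vocab tokenizer keeps the standard `<extra_id_*>` sentinel tokens are ours; for real applications the model is meant to be fine-tuned on a downstream task.
```python
# Rough span-filling sketch with the small Portuguese-vocab checkpoint.
# Assumes the tokenizer exposes the usual T5 sentinel tokens (<extra_id_0>, ...).
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "unicamp-dl/ptt5-small-portuguese-vocab"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "O Brasil é um <extra_id_0> localizado na América do Sul."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```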
# Citation
If you use PTT5, please cite:
    @article{ptt5_2020,
      title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
      author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
      journal={arXiv preprint arXiv:2008.09144},
      year={2020}
    }
|
hellonesh/test_fine_tuned_crossencoder | a93a54a1b47b64fafab55a53cc80c7879096541b | 2022-07-20T23:21:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | hellonesh | null | hellonesh/test_fine_tuned_crossencoder | 197 | null | transformers | 3,644 | Entry not found |
KoichiYasuoka/roberta-classical-chinese-large-char | 6cb9aed2abae0da5fd17c5751598bdaeeb81cc3d | 2021-10-30T00:38:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"lzh",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-classical-chinese-large-char | 196 | null | transformers | 3,645 | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "masked-lm"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "孟子[MASK]梁惠王"
---
# roberta-classical-chinese-large-char
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large). Character embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-large-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-upos), [dependency-parsing](https://github.com/KoichiYasuoka/SuPar-Kanbun), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
```
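For a quick check of the masked-LM head, the fill-mask pipeline can be used directly (the example sentence is the widget text of this card):
```py
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="KoichiYasuoka/roberta-classical-chinese-large-char",
)
print(fill_mask("孟子[MASK]梁惠王"))
```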
## See Also
[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
|
asapp/sew-tiny-100k-ft-ls100h | a750ca77c83e5595081eddb05926c9b432684116 | 2022-05-24T12:53:02.000Z | [
"pytorch",
"sew",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"audio",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | asapp | null | asapp/sew-tiny-100k-ft-ls100h | 196 | null | transformers | 3,646 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-tiny-100k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.61
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 23.74
---
# SEW-tiny
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
This is the SEW-tiny model pretrained on 16kHz sampled speech audio and fine-tuned for Automatic Speech Recognition on 100 hours of LibriSpeech (`ft-ls100h`). When using the model, make sure that your speech input is also sampled at 16kHz.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
def map_to_pred(batch):
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
                             return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 10.61 | 23.74 |
|
federicopascual/finetuning-sentiment-model-3000-samples | 11f7d327123ebcddd97304c57084c6365628dda5 | 2021-12-30T20:59:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | federicopascual | null | federicopascual/finetuning-sentiment-model-3000-samples | 196 | null | transformers | 3,647 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8734177215189873
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Accuracy: 0.8667
- F1: 0.8734
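A minimal inference sketch is shown below; the checkpoint may export the generic `LABEL_0`/`LABEL_1` names, so verify the mapping to negative/positive against the model config:
```python
from transformers import pipeline

# Minimal example; label names come from the checkpoint config and may be the
# generic LABEL_0 / LABEL_1 rather than explicit negative / positive.
classifier = pipeline(
    "sentiment-analysis",
    model="federicopascual/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a pleasant surprise from start to finish."))
```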
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
flax-community/t5-base-wikisplit | 2aafb7c5b0409872cea248eb548aba7405797d83 | 2021-07-21T08:56:00.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/t5-base-wikisplit | 196 | 1 | transformers | 3,648 | Entry not found |
nandinib1999/quote-generator | 43fed603ac9d6e99afba01ae8924b57630a3142c | 2022-03-06T12:04:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:quotes-500K",
"transformers",
"text generation",
"license:cc"
] | text-generation | false | nandinib1999 | null | nandinib1999/quote-generator | 196 | 2 | transformers | 3,649 | ---
language:
- en
thumbnail:
tags:
- text generation
license: cc
datasets:
- quotes-500K
metrics:
- perplexity
---
# Quotes Generator
## Model description
This is a GPT2 model fine-tuned on the Quotes-500K dataset.
## Intended uses & limitations
For a given user prompt, it can generate motivational quotes starting with it.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelWithLMHead.from_pretrained("nandinib1999/quote-generator")
```
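Continuing from the snippet above, a short generation sketch for completing a user prompt (the prompt and sampling settings are illustrative defaults, not values recommended by the author):
```python
# Illustrative generation settings; tune top_k / top_p / max_length to taste.
prompt = "Believe in yourself"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    max_length=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```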
## Training data
This is the distribution of the total dataset into training, validation and test dataset for the fine-tuning task.
<table style="width:30%">
<tr>
<th>train</th>
<td>349796</td>
</tr>
<tr>
<th>validation</th>
<td>99942</td>
</tr>
<tr>
<th>test</th>
<td>49971</td>
</tr>
</table>
## Training procedure
The model was fine-tuned using the Google Colab GPU for one epoch. The weights of the pre-trained GPT2 model were used as a base.
## Eval results
<table style="width:30%">
<tr>
<th>Epoch</th>
<th>Perplexity</th>
</tr>
<tr>
<td>1</td>
<td>15.180</td>
</tr>
</table> |
qanastek/pos-french | b377b7ff5d5b93286e8b3103104d7e5135d5f5be | 2022-07-06T23:48:25.000Z | [
"pytorch",
"fr",
"dataset:qanastek/ANTILLES",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | qanastek | null | qanastek/pos-french | 196 | 1 | flair | 3,650 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: fr
datasets:
- qanastek/ANTILLES
widget:
- text: "George Washington est allé à Washington"
---
# POET: A French Extended Part-of-Speech Tagger
- Corpora: [ANTILLES](https://github.com/qanastek/ANTILLES)
- Embeddings: [FastText](https://fasttext.cc/)
- Sequence Labelling: [Bi-LSTM-CRF](https://arxiv.org/abs/1011.4088)
- Number of Epochs: 115
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
* [DUFOUR Richard](https://cv.archives-ouvertes.fr/richard-dufour) (2)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France.
## Demo: How to use in Flair
Requires [Flair](https://pypi.org/project/flair/): ```pip install flair```
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the model
model = SequenceTagger.load("qanastek/pos-french")
sentence = Sentence("George Washington est allé à Washington")
# Predict tags
model.predict(sentence)
# Print predicted pos tags
print(sentence.to_tagged_string())
```
Output:

## Training data
`ANTILLES` is a part-of-speech tagging corpus based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html), which was originally created in 2015 and is based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb).
Originally, the corpus consisted of 400,399 words (16,341 sentences) and had 17 different classes. Now, after applying our tag augmentation, we obtain 60 different classes which add linguistic and semantic information such as the gender, number, mood, person, tense or verb form given in the different CoNLL-03 fields from the original corpora.
We based our tags on the level of details given by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001.
The corpora used for this model is available on [Github](https://github.com/qanastek/ANTILLES) at the [CoNLL-U format](https://universaldependencies.org/format.html).
Training data are fed to the model as raw text and do not go through a normalization phase. This makes the model case- and punctuation-sensitive.
## Original Tags
```plain
PRON VERB SCONJ ADP CCONJ DET NOUN ADJ AUX ADV PUNCT PROPN NUM SYM PART X INTJ
```
## New additional POS tags
| Abbreviation | Description | Examples |
|:--------:|:--------:|:--------:|
| PREP | Preposition | de |
| AUX | Auxiliary Verb | est |
| ADV | Adverb | toujours |
| COSUB | Subordinating conjunction | que |
| COCO | Coordinating Conjunction | et |
| PART | Demonstrative particle | -t |
| PRON | Pronoun | qui ce quoi |
| PDEMMS | Demonstrative Pronoun - Singular Masculine | ce |
| PDEMMP | Demonstrative Pronoun - Plural Masculine | ceux |
| PDEMFS | Demonstrative Pronoun - Singular Feminine | cette |
| PDEMFP | Demonstrative Pronoun - Plural Feminine | celles |
| PINDMS | Indefinite Pronoun - Singular Masculine | tout |
| PINDMP | Indefinite Pronoun - Plural Masculine | autres |
| PINDFS | Indefinite Pronoun - Singular Feminine | chacune |
| PINDFP | Indefinite Pronoun - Plural Feminine | certaines |
| PROPN | Proper noun | Houston |
| XFAMIL | Last name | Levy |
| NUM | Numerical Adjective | trentaine vingtaine |
| DINTMS | Masculine Numerical Adjective | un |
| DINTFS | Feminine Numerical Adjective | une |
| PPOBJMS | Pronoun complements of objects - Singular Masculine | le lui |
| PPOBJMP | Pronoun complements of objects - Plural Masculine | eux y |
| PPOBJFS | Pronoun complements of objects - Singular Feminine | moi la |
| PPOBJFP | Pronoun complements of objects - Plural Feminine | en y |
| PPER1S | Personal Pronoun First-Person - Singular | je |
| PPER2S | Personal Pronoun Second-Person - Singular | tu |
| PPER3MS | Personal Pronoun Third-Person - Singular Masculine | il |
| PPER3MP | Personal Pronoun Third-Person - Plural Masculine | ils |
| PPER3FS | Personal Pronoun Third-Person - Singular Feminine | elle |
| PPER3FP | Personal Pronoun Third-Person - Plural Feminine | elles |
| PREFS | Reflexive Pronoun First-Person - Singular | me m' |
| PREF | Reflexive Pronoun Third-Person - Singular | se s' |
| PREFP | Reflexive Pronoun First / Second-Person - Plural | nous vous |
| VERB | Verb | obtient |
| VPPMS | Past Participle - Singular Masculine | formulé |
| VPPMP | Past Participle - Plural Masculine | classés |
| VPPFS | Past Participle - Singular Feminine | appelée |
| VPPFP | Past Participle - Plural Feminine | sanctionnées |
| DET | Determinant | les l' |
| DETMS | Determinant - Singular Masculine | les |
| DETFS | Determinant - Singular Feminine | la |
| ADJ | Adjective | capable sérieux |
| ADJMS | Adjective - Singular Masculine | grand important |
| ADJMP | Adjective - Plural Masculine | grands petits |
| ADJFS | Adjective - Singular Feminine | française petite |
| ADJFP | Adjective - Plural Feminine | légères petites |
| NOUN | Noun | temps |
| NMS | Noun - Singular Masculine | drapeau |
| NMP | Noun - Plural Masculine | journalistes |
| NFS | Noun - Singular Feminine | tête |
| NFP | Noun - Plural Feminine | ondes |
| PREL | Relative Pronoun | qui dont |
| PRELMS | Relative Pronoun - Singular Masculine | lequel |
| PRELMP | Relative Pronoun - Plural Masculine | lesquels |
| PRELFS | Relative Pronoun - Singular Feminine | laquelle |
| PRELFP | Relative Pronoun - Plural Feminine | lesquelles |
| INTJ | Interjection | merci bref |
| CHIF | Numbers | 1979 10 |
| SYM | Symbol | € % |
| YPFOR | Endpoint | . |
| PUNCT | Ponctuation | : , |
| MOTINC | Unknown words | Technology Lady |
| X | Typos & others | sfeir 3D statu |
## Evaluation results
The test corpora used for this evaluation is available on [Github](https://github.com/qanastek/ANTILLES/blob/main/ANTILLES/test.conllu).
```plain
Results:
- F-score (micro): 0.952
- F-score (macro): 0.8644
- Accuracy (incl. no class): 0.952
By class:
precision recall f1-score support
PPER1S 0.9767 1.0000 0.9882 42
VERB 0.9823 0.9537 0.9678 583
COSUB 0.9344 0.8906 0.9120 128
PUNCT 0.9878 0.9688 0.9782 833
PREP 0.9767 0.9879 0.9822 1483
PDEMMS 0.9583 0.9200 0.9388 75
COCO 0.9839 1.0000 0.9919 245
DET 0.9679 0.9814 0.9746 645
NMP 0.9521 0.9115 0.9313 305
ADJMP 0.8352 0.9268 0.8786 82
PREL 0.9324 0.9857 0.9583 70
PREFP 0.9767 0.9545 0.9655 44
AUX 0.9537 0.9859 0.9695 355
ADV 0.9440 0.9365 0.9402 504
VPPMP 0.8667 1.0000 0.9286 26
DINTMS 0.9919 1.0000 0.9959 122
ADJMS 0.9020 0.9057 0.9039 244
NMS 0.9226 0.9336 0.9281 753
NFS 0.9347 0.9714 0.9527 560
YPFOR 0.9806 1.0000 0.9902 353
PINDMS 1.0000 0.9091 0.9524 44
NOUN 0.8400 0.5385 0.6562 39
PROPN 0.8605 0.8278 0.8439 395
DETMS 0.9972 0.9972 0.9972 362
PPER3MS 0.9341 0.9770 0.9551 87
VPPMS 0.8994 0.9682 0.9325 157
DETFS 1.0000 1.0000 1.0000 240
ADJFS 0.9266 0.9011 0.9136 182
ADJFP 0.9726 0.9342 0.9530 76
NFP 0.9463 0.9749 0.9604 199
VPPFS 0.8000 0.9000 0.8471 40
CHIF 0.9543 0.9414 0.9478 222
XFAMIL 0.9346 0.8696 0.9009 115
PPER3MP 0.9474 0.9000 0.9231 20
PPOBJMS 0.8800 0.9362 0.9072 47
PREF 0.8889 0.9231 0.9057 52
PPOBJMP 1.0000 0.6000 0.7500 10
SYM 0.9706 0.8684 0.9167 38
DINTFS 0.9683 1.0000 0.9839 61
PDEMFS 1.0000 0.8966 0.9455 29
PPER3FS 1.0000 0.9444 0.9714 18
VPPFP 0.9500 1.0000 0.9744 19
PRON 0.9200 0.7419 0.8214 31
PPOBJFS 0.8333 0.8333 0.8333 6
PART 0.8000 1.0000 0.8889 4
PPER3FP 1.0000 1.0000 1.0000 2
MOTINC 0.3571 0.3333 0.3448 15
PDEMMP 1.0000 0.6667 0.8000 3
INTJ 0.4000 0.6667 0.5000 6
PREFS 1.0000 0.5000 0.6667 10
ADJ 0.7917 0.8636 0.8261 22
PINDMP 0.0000 0.0000 0.0000 1
PINDFS 1.0000 1.0000 1.0000 1
NUM 1.0000 0.3333 0.5000 3
PPER2S 1.0000 1.0000 1.0000 2
PPOBJFP 1.0000 0.5000 0.6667 2
PDEMFP 1.0000 0.6667 0.8000 3
X 0.0000 0.0000 0.0000 1
PRELMS 1.0000 1.0000 1.0000 2
PINDFP 1.0000 1.0000 1.0000 1
accuracy 0.9520 10019
macro avg 0.8956 0.8521 0.8644 10019
weighted avg 0.9524 0.9520 0.9515 10019
```
## BibTeX Citations
Please cite the following paper when using this model.
ANTILLES corpus and POET taggers:
```latex
@inproceedings{labrak:hal-03696042,
TITLE = {{ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus}},
AUTHOR = {Labrak, Yanis and Dufour, Richard},
URL = {https://hal.archives-ouvertes.fr/hal-03696042},
BOOKTITLE = {{25th International Conference on Text, Speech and Dialogue (TSD)}},
ADDRESS = {Brno, Czech Republic},
PUBLISHER = {{Springer}},
YEAR = {2022},
MONTH = Sep,
KEYWORDS = {Part-of-speech corpus ; POS tagging ; Open tools ; Word embeddings ; Bi-LSTM ; CRF ; Transformers},
PDF = {https://hal.archives-ouvertes.fr/hal-03696042/file/ANTILLES_A_freNch_linguisTIcaLLy_Enriched_part_of_Speech_corpus.pdf},
HAL_ID = {hal-03696042},
HAL_VERSION = {v1},
}
```
UD_French-GSD corpora:
```latex
@misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
```
LIA TAGG:
```latex
@techreport{LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
institution = {Aix-Marseille University & CNRS},
year = {2001}
}
```
Flair Embeddings:
```latex
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
## Acknowledgment
This work was financially supported by [Zenidoc](https://zenidoc.fr/)
|
google/ddpm-church-256 | 14e3c702d021d2563e924f2682ac4bd31464f888 | 2022-07-21T15:00:14.000Z | [
"diffusers",
"arxiv:2006.11239",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ddpm-church-256 | 196 | null | diffusers | 3,651 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-church-256"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm()["sample"]
# save image
image[0].save("ddpm_generated_image.png")
```
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4.  |
Geotrend/distilbert-base-es-cased | d1a2feac7e5bdd65b5d659d766a80315a1e075a5 | 2021-08-16T13:26:35.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-es-cased | 195 | null | transformers | 3,652 | ---
language: es
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-es-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-es-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
SkolkovoInstitute/xlmr_formality_classifier | b491648728ab0fab087598e4eea3e123be7155e6 | 2022-01-09T17:55:13.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"fr",
"it",
"pt",
"transformers",
"formal or informal classification"
] | text-classification | false | SkolkovoInstitute | null | SkolkovoInstitute/xlmr_formality_classifier | 195 | 0 | transformers | 3,653 | ---
language:
- en
- fr
- it
- pt
tags:
- formal or informal classification
licenses:
- cc-by-nc-sa
---
XLMRoberta-based classifier trained on XFORMAL.
all
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.744912 | 0.927790 | 0.826354 | 108019 |
| 1 | 0.889088 | 0.645630 | 0.748048 | 96845 |
| accuracy | | | 0.794405 | 204864 |
| macro avg | 0.817000 | 0.786710 | 0.787201 | 204864 |
| weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 |
en
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.800053 | 0.962981 | 0.873988 | 22151 |
| 1 | 0.945106 | 0.725899 | 0.821124 | 19449 |
| accuracy | | | 0.852139 | 41600 |
| macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 |
| weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 |
fr
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.746709 | 0.925738 | 0.826641 | 21505 |
| 1 | 0.887305 | 0.650592 | 0.750731 | 19327 |
| accuracy | | | 0.795504 | 40832 |
| macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 |
| weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 |
it
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.721282 | 0.914669 | 0.806545 | 21528 |
| 1 | 0.864887 | 0.607135 | 0.713445 | 19368 |
| accuracy | | | 0.769024 | 40896 |
| macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 |
| weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 |
pt
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.717546 | 0.908167 | 0.801681 | 21637 |
| 1 | 0.853628 | 0.599700 | 0.704481 | 19323 |
| accuracy | | | 0.762646 | 40960 |
| macro avg | 0.785587 | 0.753933 | 0.753081 | 40960 |
| weighted avg | 0.781743 | 0.762646 | 0.755826 | 40960 |
## How to use
```python
from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification
# load tokenizer and model weights
tokenizer = XLMRobertaTokenizerFast.from_pretrained('SkolkovoInstitute/xlmr_formality_classifier')
model = XLMRobertaForSequenceClassification.from_pretrained('SkolkovoInstitute/xlmr_formality_classifier')
# prepare the input
batch = tokenizer.encode('ты супер', return_tensors='pt')
# inference
model(batch)
```
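Continuing from the snippet above, the logits can be turned into a formality decision by reading the label mapping from the checkpoint config rather than assuming a class order (the tables above only use 0/1):
```python
import torch

# Continue from the snippet above: map logits to class names via the config,
# since the formal/informal class order is not spelled out in the card.
with torch.no_grad():
    logits = model(batch).logits
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 3))
```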
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png |
aubmindlab/araelectra-base-generator | 3783d1be2a43dfede443a934410539f753122c41 | 2022-04-07T11:31:12.000Z | [
"pytorch",
"tf",
"tensorboard",
"electra",
"fill-mask",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15516",
"transformers",
"autotrain_compatible"
] | fill-mask | false | aubmindlab | null | aubmindlab/araelectra-base-generator | 195 | null | transformers | 3,654 | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# AraELECTRA
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/>
**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA dataset.
For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516).
## How to use the generator in `transformers`
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="aubmindlab/araelectra-base-generator",
tokenizer="aubmindlab/araelectra-base-generator"
)
print(
fill_mask(" عاصمة لبنان هي [MASK] .)
)
```
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="aubmindlab/araelectra-base"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
# Model
Model | HuggingFace Model Name | Size (MB/Params)|
---|:---:|:---:
AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M |
AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M |
# Compute
Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24
# Dataset
The pretraining data used for the new AraELECTRA model is also used for **AraGPT2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filtered it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you for Assafir for giving us the data
# TensorFlow 1.x models
**You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username**
- `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-araelectra,
title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.20",
pages = "191--195",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
koala/kykim-bert-kor-base-ko | d0944e73af133a5e221634d0063bf9af21eef197 | 2021-12-10T12:31:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | koala | null | koala/kykim-bert-kor-base-ko | 195 | null | transformers | 3,655 | Entry not found |
l3cube-pune/MarathiSentiment | 61900f644a268997fcf84e4529cec46986daeaf0 | 2021-05-18T07:35:10.000Z | [
"pytorch",
"tf",
"albert",
"text-classification",
"mr",
"dataset:L3CubeMahaSent",
"arxiv:2103.11408",
"transformers",
"license:cc-by-4.0"
] | text-classification | false | l3cube-pune | null | l3cube-pune/MarathiSentiment | 195 | 2 | transformers | 3,656 | ---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---
## MarathiSentiment
MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent, a Marathi tweet-based sentiment analysis dataset.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408).
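A minimal usage sketch with the `transformers` pipeline is shown below (the returned label names come from the model configuration, so they may differ from plain positive/negative/neutral wording):
```python
from transformers import pipeline

# Load the fine-tuned Marathi sentiment model (IndicBERT / ALBERT architecture)
classifier = pipeline("text-classification", model="l3cube-pune/MarathiSentiment")

# Classify a Marathi sentence ("I liked this movie a lot")
print(classifier("मला हा चित्रपट खूप आवडला"))
```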
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
``` |
monologg/bert-base-cased-goemotions-group | 8db941660d9c02eeca1357aa6e9e98844f73b5e1 | 2021-05-19T23:48:19.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | monologg | null | monologg/bert-base-cased-goemotions-group | 195 | 2 | transformers | 3,657 | Entry not found |
yuchenlin/BART0 | ed9861f2002cc16f79ad3953364d764668d52874 | 2022-05-03T01:31:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yuchenlin | null | yuchenlin/BART0 | 195 | 2 | transformers | 3,658 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
A BART-large version of T0.
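A minimal prompting sketch (mirroring the widget examples above; BART0 is used like T0, i.e. with a natural-language instruction wrapped around the input):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yuchenlin/BART0")
model = AutoModelForSeq2SeqLM.from_pretrained("yuchenlin/BART0")

# Prompt the model with an instruction plus the input, as in the widget examples
prompt = "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```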
Please check https://inklab.usc.edu/ReCross/ for more details. |
patrickramosobf/bert-base-japanese-v2-wrime-fine-tune | 060beffbf77befb673463d6c742af900f95d879d | 2022-05-22T16:34:55.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ja",
"dataset:wrime",
"transformers",
"license:cc-by-sa-3.0"
] | text-classification | false | patrickramosobf | null | patrickramosobf/bert-base-japanese-v2-wrime-fine-tune | 195 | null | transformers | 3,659 | ---
license: cc-by-sa-3.0
language:
- ja
tags:
- emotion-analysis
datasets:
- wrime
widget:
- text: "車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。"
---
# WRIME-fine-tuned BERT base Japanese
This model is a [Japanese BERT<sub>BASE</sub>](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) fine-tuned on the [WRIME](https://github.com/ids-cv/wrime) dataset. It was trained as part of the paper ["Emotion Analysis of Writers and Readers of Japanese Tweets on Vaccinations"](https://aclanthology.org/2022.wassa-1.10/). Fine-tuning code is available at this [repo](https://github.com/PatrickJohnRamos/BERT-Japan-vaccination).
# Intended uses and limitations
This model can be used to predict intensity scores for eight emotions for writers and readers. Please refer to the `Fine-tuning data` section for the list of emotions.
Because of the regression fine-tuning task, it is possible for the model to infer scores outside of the range of the scores of the fine-tuning data (`score < 0` or `score > 4`).
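A minimal inference sketch is given below. The assumed output layout (8 writer intensities followed by 8 reader intensities, in the order listed under `Fine-tuning data`) should be verified against the model configuration, and the Japanese tokenizer requires the `fugashi` and `unidic-lite` packages:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "patrickramosobf/bert-base-japanese-v2-wrime-fine-tune"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # BertJapaneseTokenizer under the hood
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits[0]  # 16 regression outputs

print(scores)  # assumed layout: 8 writer intensities, then 8 reader intensities
```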
# Model Architecture, Tokenization, and Pretraining
The Japanese BERT<sub>BASE</sub> model fine-tuned here was `cl-tohoku/bert-base-japanese-v2`. Please refer to their [model card](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) for details regarding the model architecture, tokenization, pretraining data, and pretraining procedure.
# Fine-tuning data
The model is fine-tuned on [WRIME](https://github.com/ids-cv/wrime), a dataset of Japanese Tweets annotated with writer and reader emotion intensities. We use version 1 of the dataset. Each Tweet is accompanied by a set of writer emotion intensities (from the author of the Tweet) and three sets of reader emotions (from three annotators). The emotions follow Plutchik's emotions, namely:
* joy
* sadness
* anticipation
* surprise
* anger
* fear
* disgust
* trust
These emotion intensities follow a four-point scale:
| emotion intensity | emotion presence|
|---|---|
| 0 | no |
| 1 | weak |
| 2 | medium |
| 3 | strong |
# Fine-tuning
The BERT is fine-tuned to directly regress the emotion intensities of the writer and the averaged emotions of the readers from each Tweet, meaning there are 16 outputs (8 emotions per writer/reader).
The fine-tuning was inspired by common BERT fine-tuning procedures. The BERT was fine-tuned on WRIME for 3 epochs using the AdamW optimizer with a learning rate of 2e-5, β<sub>1</sub>=0.9, β<sub>2</sub>=0.999, weight decay of 0.01, linear decay, a warmup ratio of 0.01, and a batch size of 32. Training was conducted with an NVIDIA Tesla K80 and finished in 3 hours.
# Evaluation results
Below are the MSEs of the BERT on the test split of WRIME.
| Annotator | Joy | Sadness | Anticipation | Surprise | Anger | Fear | Disgust | Trust | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Writer | 0.658 | 0.688 | 0.746 | 0.542 | 0.486 | 0.462 | 0.664 | 0.400 | 0.581 |
| Reader | 0.192 | 0.178 | 0.211 | 0.139 | 0.032 | 0.147 | 0.123 | 0.029 | 0.131 |
| Both | 0.425 | 0.433 | 0.479 | 0.341 | 0.259 | 0.304 | 0.394 | 0.214 | 0.356 | |
ckb/en-toki-mt | 546b3df927463696d71dbee5b6028148f46f99b1 | 2022-07-09T09:48:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tok",
"transformers",
"generated_from_trainer",
"translation",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | ckb | null | ckb/en-toki-mt | 195 | null | transformers | 3,660 | ---
license: apache-2.0
language:
- en
- tok
tags:
- generated_from_trainer
- translation
model-index:
- name: en-toki-mt
results: []
widget:
- text: "Hello, my name is Tom."
- text: "Can the cat speak English?"
---
# en-toki-mt
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on the English - toki pona translation dataset on Tatoeba.
## Model description
Toki Pona is a minimalist constructed language created in 2001 by Sonja Lang. The language features a very small vocabulary (~130 words) and a very simple grammar.
## Intended uses & limitations
This model aims to translate English to Toki Pona.
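A minimal translation sketch with the underlying MarianMT architecture is shown below (whether a `>>tok<<`-style target-language token is needed depends on how the fine-tuning data was formatted; the plain input below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ckb/en-toki-mt")  # MarianTokenizer, needs sentencepiece
model = AutoModelForSeq2SeqLM.from_pretrained("ckb/en-toki-mt")

batch = tokenizer(["Hello, my name is Tom."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```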
## Training and evaluation data
The training data is acquired from all En-Toki sentence pairs on [Tatoeba](https://tatoeba.org/en) (~20000 pairs), without any filtering. Since this dataset mostly includes only core words (pu), the model may produce inaccurate results when encountering more complex words. The model achieved a BLEU score of 54 on the test set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DaNLP/da-bert-ner | 9fe6e2483f6a38b6800b63273e4f9848cf334aeb | 2021-09-17T11:18:07.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"da",
"dataset:DaNE",
"transformers",
"ner",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | DaNLP | null | DaNLP/da-bert-ner | 194 | null | transformers | 3,661 | ---
language:
- da
tags:
- ner
- bert
- pytorch
- transformers
license: cc-by-sa-4.0
datasets:
- DaNE
metrics:
- f1
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
---
# BERT fine-tuned for Named Entity Recognition in Danish
The model tags tokens (in Danish sentences) with named entity tags (BIO format) [PER, ORG, LOC, MISC].
The pretrained language model used for fine-tuning is the [Danish BERT](https://github.com/certainlyio/nordic_bert) by BotXO.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ner.html#bert) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForTokenClassification
model = BertForTokenClassification.from_pretrained("DaNLP/da-bert-ner")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-ner")
```
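For quick inference, the loaded model and tokenizer can also be wrapped in a token-classification pipeline (a minimal sketch; in recent `transformers` versions `aggregation_strategy` groups sub-tokens into whole entities):
```python
from transformers import pipeline

ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Jens Peter Hansen kommer fra Danmark"))
```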
## Training Data
The model has been trained on the [DaNE](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane).
|
ans/vaccinating-covid-tweets | ddb15134bf56cda36aea875c56822ab9074d768d | 2021-06-18T04:12:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweets",
"transformers",
"license:apache-2.0"
] | text-classification | false | ans | null | ans/vaccinating-covid-tweets | 194 | 1 | transformers | 3,662 | ---
language: en
license: apache-2.0
datasets:
- tweets
widget:
- text: "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
---
# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.
# Vaccinating COVID tweets
A fine-tuned model for fact-classification task on English tweets about COVID-19/vaccine.
## Intended uses & limitations
You can classify whether the input tweet (or any other statement) about COVID-19/vaccines is `true`, `false` or `misleading`.
Note that since this model was trained with data up to May 2020, the most recent information may not be reflected.
#### How to use
You can use this model directly on this page or using `transformers` in python.
- Load pipeline and implement with input sequence
```python
from transformers import pipeline
pipe = pipeline("sentiment-analysis", model = "ans/vaccinating-covid-tweets")
seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
pipe(seq)
```
- Expected output
```python
[
{
"label": "false",
"score": 0.07972867041826248
},
{
"label": "misleading",
"score": 0.019911376759409904
},
{
"label": "true",
"score": 0.9003599882125854
}
]
```
- `true` examples
```python
"By the end of 2020, several vaccines had become available for use in different parts of the world."
"Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
"RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach."
```
- `false` examples
```python
"COVID-19 vaccine caused new strain in UK."
```
#### Limitations and bias
To conservatively classify whether an input sequence is true or not, the model may have predictions biased toward `false` or `misleading`.
## Training data & Procedure
#### Pre-trained baseline model
- Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet)
- trained based on the RoBERTa pre-training procedure
- 850M General English Tweets (Jan 2012 to Aug 2019)
- 23M COVID-19 English Tweets
- Size of the model: >134M parameters
- Further training
- Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification
#### 1) Pre-training language model
- The model was pre-trained on COVID-19/vaccine-related tweets using a masked language modeling (MLM) objective starting from BERTweet.
- The following datasets of English tweets were used:
- Tweets with trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets))
- Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets))
- COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter))
#### 2) Fine-tuning for fact classification
- A model fine-tuned from the pre-trained language model (1) for the fact-classification task on COVID-19/vaccines.
- COVID-19/vaccine-related statements were collected from [Poynter](https://www.poynter.org/ifcn-covid-19-misinformation/) and [Snopes](https://www.snopes.com/) using Selenium resulting in over 14,000 fact-checked statements from Jan 2020 to May 2021.
- Original labels were divided within following three categories:
- `False`: includes false, no evidence, manipulated, fake, not true, unproven and unverified
- `Misleading`: includes misleading, exaggerated, out of context and needs context
- `True`: includes true and correct
## Evaluation results
| Training loss | Validation loss | Training accuracy | Validation accuracy |
| --- | --- | --- | --- |
| 0.1062 | 0.1006 | 96.3% | 94.5% |
# Contributors
- This model is a part of final team project from MLDL for DS class at SNU.
- Team BIBI - Vaccinating COVID-NineTweets
- Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom
- Advisor: Prof. Wen-Syan Li
<a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a> |
hfl/cino-large-v2 | 39af3c16b2256141ad3306fc2be6841f6cc76aec | 2022-01-24T10:40:50.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | hfl | null | hfl/cino-large-v2 | 194 | 2 | transformers | 3,663 | ---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of contributions on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority-language corpora, such as
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
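A minimal loading sketch, assuming the checkpoint is used through the standard XLM-R classes as its tags indicate (the mask token is therefore `<mask>`):
```python
from transformers import pipeline

# CINO follows the XLM-R architecture, so fill-mask prompts use the <mask> token
unmasker = pipeline("fill-mask", model="hfl/cino-large-v2")
print(unmasker("中国的首都是<mask>。"))
```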
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
jchen/DialoGPT-evan | 161625a480f8188bdd82787fd0d3d43373f2dd44 | 2021-10-23T04:12:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jchen | null | jchen/DialoGPT-evan | 194 | null | transformers | 3,664 | ---
tags:
- conversational
---
# Evan Model |
uer/chinese_roberta_L-12_H-256 | 4aeb03283d2d0ff43cffec9674a1ac4621055f68 | 2022-07-15T08:15:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-12_H-256 | 194 | null | transformers | 3,665 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
DemangeJeremy/4-sentiments-with-flaubert | 1795c955197e73fe016a4a1b03073eb68d7f8430 | 2021-03-29T00:03:14.000Z | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"sentiments",
"french",
"flaubert-large"
] | text-classification | false | DemangeJeremy | null | DemangeJeremy/4-sentiments-with-flaubert | 193 | null | transformers | 3,666 | ---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---
# Model for detecting 4 sentiments with FlauBERT (mixed, negative, objective, positive)
Work is currently in progress. I will update the model in the coming days.
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
For any use of this model, please use the following citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
|
mbien/recipenlg | b690df9e40f6bf58c7cf8d96e1a91e101134167b | 2021-05-23T08:56:58.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mbien | null | mbien/recipenlg | 193 | 1 | transformers | 3,667 | # RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation
Model accompanying our INLG 2020 paper: [RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://www.aclweb.org/anthology/2020.inlg-1.4.pdf)
## Where is the dataset?
Please visit the website of our project: [recipenlg.cs.put.poznan.pl](https://recipenlg.cs.put.poznan.pl/) to download it.
## How to use the model? Could you explain X and Y?
Yes, sure! If you feel some information is missing in our paper, please first check our [thesis](https://www.researchgate.net/publication/345308878_Cooking_recipes_generator_utilizing_a_deep_learning-based_language_model), which is much more detailed. In case of further questions, you're invited to send us a GitHub issue; we will respond as fast as we can!
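A minimal generation sketch is shown below. The checkpoint is GPT-2-based; the dataset structures recipes with special control tokens described in the paper and thesis, so the plain-text ingredient prompt here is only an illustrative assumption:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mbien/recipenlg")
model = AutoModelForCausalLM.from_pretrained("mbien/recipenlg")

# Prompt with a list of ingredients and let the model continue the recipe
inputs = tokenizer("chicken, rice, onion", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```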
|
persiannlp/mt5-base-parsinlu-translation_en_fa | 3a107aa2674d8f097adea1c7e1c0332dfc5236f5 | 2021-09-23T16:20:09.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"machine-translation",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-translation_en_fa | 193 | null | transformers | 3,668 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['خدا را شکر که عامل خطرناک و محافظ دنیاست.']
['خود را سفید می کند و به شکل برادرانه ای در کارخانه ها و']
['او از تمامی همکاران و سازمان هایی که از او حمایت می کردند تشکر']
['برگزاری مسابقات بین آوریل تا دسامبر در هیپوگریم والی']
['من می خواهم تحصیل دکترای علوم کامپیوتری را در مورد شب']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
helliun/polhol | 20558877f2958c1c49a6f11440e6594ed59d9d5b | 2022-06-17T20:18:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | helliun | null | helliun/polhol | 193 | null | transformers | 3,669 | Entry not found |
DeepPavlov/distilrubert-tiny-cased-conversational-5k | b81d5917df739af393e4aef61b7199fb04c502a7 | 2022-06-28T17:05:02.000Z | [
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null | false | DeepPavlov | null | DeepPavlov/distilrubert-tiny-cased-conversational-5k | 193 | null | transformers | 3,670 | ---
language:
- ru
---
# distilrubert-tiny-cased-conversational-5k
Conversational DistilRuBERT-tiny-5k \(Russian, cased, 3‑layers, 264‑hidden, 12‑heads, 3.6M parameters, 5k vocab\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)).
Our DistilRuBERT-tiny-5k is highly inspired by \[3\], \[4\], and its architecture is very close to \[5\]. Namely, we use
* MLM loss (between token labels and student output distribution)
* KL loss (between averaged student and teacher hidden states)
The key feature is:
* reduced vocabulary size (5K vs 30K in *tiny* vs. 100K in *base* and *small*)
Here is comparison between teacher model (`Conversational RuBERT`) and other distilled models.
| Model name | \# params, M | \# vocab, K | Mem., MB |
|---|---|---|---|
| `rubert-base-cased-conversational` | 177.9 | 120 | 679 |
| `distilrubert-base-cased-conversational` | 135.5 | 120 | 517 |
| `distilrubert-small-cased-conversational` | 107.1 | 120 | 409 |
| `cointegrated/rubert-tiny` | 11.8 | 30 | 46 |
| `cointegrated/rubert-tiny2` | 29.3 | 84 | 112 |
| `distilrubert-tiny-cased-conversational-v1` | 10.4 | 31 | 41 |
| `distilrubert-tiny-cased-conversational-5k` | **3.6** | 5 | **14** |
DistilRuBERT-tiny was trained for about 100 hours on 7 NVIDIA Tesla P100-SXM2.0 16GB GPUs.
We used `PyTorchBenchmark` from `transformers` to evaluate the model's performance and compare it with other pre-trained language models for Russian. All tests were performed on an NVIDIA GeForce GTX 1080 Ti and an Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz.
| Model name | Batch size | Seq len | Time, s (CPU) | Time, s (GPU) | Mem, MB (CPU) | Mem, MB (GPU) |
|---|---|---|---|---|---|---|
| `rubert-base-cased-conversational` | 16 | 512 | 5.283 | 0.1866 | 1550 | 1938 |
| `distilrubert-base-cased-conversational` | 16 | 512 | 2.335 | 0.0553 | 2177 | 2794 |
| `distilrubert-small-cased-conversational` | 16 | 512 | 0.802 | **0.0015** | 1541 | 1810 |
| `cointegrated/rubert-tiny` | 16 | 512 | 0.942 | 0.0022 | 1308 | 2088 |
| `cointegrated/rubert-tiny2` | 16 | 512 | 1.786 | 0.0023 | 3054 | 3848 |
| `distilrubert-tiny-cased-conversational-v1` | 16 | 512 | **0.374** | **0.002** | **714** | **1158** |
| `distilrubert-tiny-cased-conversational-5k` | 16 | 512 | **0.354** | **0.0018** | **664** | **1126** |
To evaluate model quality, we fine-tuned DistilRuBERT-tiny-5k on classification (RuSentiment, ParaPhraser), NER, and question-answering datasets for Russian. The results, along with performance benchmarks and training details, can be found in Table 4 of the [paper](https://arxiv.org/abs/2205.02340).
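A minimal sketch for extracting contextual embeddings before task-specific fine-tuning (the checkpoint loads through the standard DistilBERT classes):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "DeepPavlov/distilrubert-tiny-cased-conversational-5k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Привет, как дела?", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (1, seq_len, 264)
print(hidden_states.shape)
```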
# Citation
If you found the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
\[5\]: <https://habr.com/ru/post/562064/>, <https://huggingface.co/cointegrated/rubert-tiny> |
samroni/model_gpt | 3821d526bd9c83a658314707909ed089293b5220 | 2022-06-28T18:36:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | samroni | null | samroni/model_gpt | 193 | null | transformers | 3,671 | Entry not found |
KoboldAI/fairseq-dense-1.3B | 35812b21ae79688d634c285fb617640b4e5a9eda | 2022-02-01T22:50:09.000Z | [
"pytorch",
"xglm",
"text-generation",
"transformers"
] | text-generation | false | KoboldAI | null | KoboldAI/fairseq-dense-1.3B | 192 | 1 | transformers | 3,672 | Entry not found |
congcongwang/gpt2_medium_fine_tuned_coder | 636a8814222562b9b5517bada923fee23b8a73f6 | 2021-05-21T15:06:28.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | congcongwang | null | congcongwang/gpt2_medium_fine_tuned_coder | 192 | 1 | transformers | 3,673 | Entry not found |
facebook/s2t-small-mustc-en-it-st | 90f09c8fb0ba436e3ce1c1d5dbe9fb14a1b4cd7e | 2022-02-07T15:01:15.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"it",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"transformers",
"audio",
"speech-translation",
"license:mit"
] | automatic-speech-recognition | false | facebook | null | facebook/s2t-small-mustc-en-it-st | 192 | 1 | transformers | 3,674 | ---
language:
- en
- it
datasets:
- mustc
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-MUSTC-EN-IT-ST
`s2t-small-mustc-en-it-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Italian text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-it-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-it-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-it-st is trained on English-Italian subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
MuST-C test results for en-it (BLEU score): 22.7
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
won/DialoGPT-small-harrypotter | 4701f16a4d0c0c19ad226e4506e01f335c3e2100 | 2022-01-15T11:58:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | won | null | won/DialoGPT-small-harrypotter | 192 | null | transformers | 3,675 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
yechen/question-answering-chinese | 46beedecafca28a1aeb696b18408e1eb76262c87 | 2021-05-20T09:25:57.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"zh",
"transformers",
"autotrain_compatible"
] | question-answering | false | yechen | null | yechen/question-answering-chinese | 192 | null | transformers | 3,676 | ---
language: zh
---
|
projecte-aina/roberta-base-ca-v2-cased-qa | 0ab7044d72b9cd2cd91b869ad9854026189faab1 | 2022-07-25T06:50:23.000Z | [
"pytorch",
"roberta",
"question-answering",
"ca",
"dataset:projecte-aina/catalanqa",
"dataset:projecte-aina/xquad-ca",
"arxiv:1907.11692",
"transformers",
"catalan",
"qa",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2-cased-qa | 192 | null | transformers | 3,677 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "qa"
datasets:
- "projecte-aina/catalanqa"
- "projecte-aina/xquad-ca"
model-index:
- name: roberta-base-ca-v2-cased-qa
results:
- task:
type: question-answering
dataset:
type: projecte-aina/catalanqa
name: CatalanQA
metrics:
- name: F1
type: f1
value: 0.8950
- task:
type: question-answering
dataset:
type: projecte-aina/xquad-ca
name: XQuAD-Ca
metrics:
- name: F1
type: f1
value: 0.7364
metrics:
- "f1"
- "exact match"
widget:
- text: "Quan va començar el Super3?"
context: "El Super3 o Club Super3 és un univers infantil català creat a partir d'un programa emès per Televisió de Catalunya des del 1991. Està format per un canal de televisió, la revista Súpers!, la Festa dels Súpers i un club que té un milió i mig de socis."
- text: "Quants eren els germans Marx?"
context: "Els germans Marx van ser un grup de còmics dels Estats Units que originàriament estava compost per cinc germans (entre parèntesis els noms artístics): Leonard (Chico), Adolph (Harpo), Julius (Groucho), Milton (Gummo) i Herbert (Zeppo)."
- text: "On van ser els Jocs Olímpics de 1992?"
context: "Els Jocs Olímpics d'estiu de 1992, oficialment Jocs Olímpics de la XXV Olimpíada, es van celebrar a la ciutat de Barcelona entre els dies 25 de juliol i 9 d'agost de 1992. Hi participaren 9.356 atletes (6.652 homes i 2.704 dones) de 169 comitès nacionals, que competiren en 32 esports i 286 especialitats."
- text: "Qui va dissenyar la Sagrada Família?"
context: "El Temple Expiatori de la Sagrada Família, conegut habitualment com la Sagrada Família, és una basílica catòlica situada a la ciutat de Barcelona. És un dels exemples més coneguts del modernisme català i un edifici únic al món, que ha esdevingut tot un símbol de la ciutat. Obra inacabada de l'arquitecte català Antoni Gaudí, és al barri de la Sagrada Família, al districte de l'Eixample de la ciutat."
- text: "Quin és el tercer volcà més gran de la Terra?"
context: "El Teide (o Pic del Teide) és un estratovolcà i muntanya de Tenerife, Illes Canàries (28.27 N, 16.6 O). Amb una altitud de 3718 m sobre el nivell del mar i amb aproximadament uns 7000 m sobre el llit marí adjacent, és la muntanya més alta d'Espanya, la muntanya més alta de totes les illes atlàntiques i el tercer volcà més gran de la Terra."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Question Answering.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-qa** is a Question Answering (QA) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
**roberta-base-ca-v2-cased-qa** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases.
## How to Use
Here is how to use this model:
```python
from transformers import pipeline
nlp = pipeline("question-answering", model="projecte-aina/roberta-base-ca-v2-cased-qa")
text = "Quan va començar el Super3?"
context = "El Super3 o Club Super3 és un univers infantil català creat a partir d'un programa emès per Televisió de Catalunya des del 1991. Està format per un canal de televisió, la revista Súpers!, la Festa dels Súpers i un club que té un milió i mig de socis."
qa_results = nlp(text, context)
print(qa_results)
```
## Training
### Training data
We used the QA dataset in Catalan called [CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa) for training and evaluation, and the [XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca) test set for evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and Metrics
This model was finetuned maximizing F1 score.
### Evaluation results
We evaluated the _roberta-base-ca-v2-cased-qa_ on the CatalanQA and XQuAD-ca test sets against standard multilingual and monolingual baselines:
| Model | CatalanQA (F1/EM) | XQuAD-Ca (F1/EM) |
| ------------|:-------------:| -----:|
| roberta-base-ca-v2-cased-qa | **89.50**/76.63 | **73.64/55.42** |
| roberta-base-ca-cased-qa | 89.17/**77.14** | 69.20/51.47 |
| mBERT | 86.90/74.19 | 68.79/50.80 |
| XLM-RoBERTa | 88.17/75.93 | 72.55/54.16 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] |
dim/dialogpt-medium-persona-chat | d857cbe74c45efede87b707cd4ffc3e2dd851baa | 2022-07-12T11:39:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | dim | null | dim/dialogpt-medium-persona-chat | 192 | null | transformers | 3,678 | Entry not found |
codeparrot/codeparrot-small-code-to-text | 28a105e14da86230e3b92e6c6f55d2d398083208 | 2022-07-19T15:46:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"code",
"dataset:codeparrot/codeparrot-clean",
"dataset:codeparrot/github-jupyter-code-to-text",
"transformers",
"generation",
"license:apache-2.0"
] | text-generation | false | codeparrot | null | codeparrot/codeparrot-small-code-to-text | 192 | null | transformers | 3,679 | ---
language:
- code
license: apache-2.0
tags:
- code
- gpt2
- generation
datasets:
- "codeparrot/codeparrot-clean"
- "codeparrot/github-jupyter-code-to-text"
---
# CodeParrot 🦜 small for code-to-text generation
This model is [CodeParrot-small](https://huggingface.co/codeparrot/codeparrot-small) (from the `megatron` branch) fine-tuned on [github-jupyter-code-to-text](https://huggingface.co/datasets/codeparrot/github-jupyter-code-to-text), a dataset where the samples are a succession of Python code and its explanation as a docstring, originally extracted from Jupyter notebooks parsed in this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed).
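A minimal generation sketch (the prompt format below, ending with a docstring-style explanation marker, is an assumption based on how the dataset interleaves code and explanations; other prompt formats may work better):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-code-to-text")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small-code-to-text")

# Hypothetical prompt: a Python function followed by a docstring-style explanation marker
prompt = 'def is_even(n):\n    return n % 2 == 0\n\n"""Explanation:'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
As with any generative model, the produced explanations should be reviewed before use. |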
SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune | f692aced9d13bbf9213a1418bd1177a73480a28f | 2021-06-23T04:24:36.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune | 191 | null | transformers | 3,680 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SkolkovoInstitute/roberta-base-formality-ranker-v1 | 507700d5f40abdcee951bd2f859d48aed67367ba | 2022-07-20T21:29:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:GYAFC",
"dataset:Pavlick-Tetreault-2016",
"transformers",
"formality"
] | text-classification | false | SkolkovoInstitute | null | SkolkovoInstitute/roberta-base-formality-ranker-v1 | 191 | null | transformers | 3,681 | ---
language:
- en
tags:
- formality
datasets:
- GYAFC
- Pavlick-Tetreault-2016
---
The model has been trained [here](https://git.mts.ai/ai/ml_lab/skoltech-nlp_lab/skoltech/task_oriented_TST/-/blob/main/transfer/formality_ranker_v1.ipynb) to predict, for English sentences, whether they are formal or informal.
Base model: `roberta-base`
Datasets: [GYAFC](https://github.com/raosudha89/GYAFC-corpus) from [Rao and Tetreault, 2018](https://aclanthology.org/N18-1012) and [online formality corpus](http://www.seas.upenn.edu/~nlp/resources/formality-corpus.tgz) from [Pavlick and Tetreault, 2016](https://aclanthology.org/Q16-1005).
Data augmentation: changing texts to upper or lower case; removing all punctuation; adding a dot at the end of a sentence. Augmentation was applied because otherwise the model is over-reliant on punctuation and capitalization and does not pay enough attention to other features.
Loss: binary classification (on GYAFC), in-batch ranking (on PT data).
Performance metrics on the validation data:
| dataset | ROC AUC | accuracy | Spearman R|
| - | - | - | - |
| GYAFC | 0.9779 | 0.9087 | 0.8233 |
| GYAFC normalized (lowercase + remove punct.) | 0.9234 | 0.8218| 0.7294 |
| P&T subset | Spearman R |
| - | - |
| news | 0.4003 |
| answers | 0.7500 |
| blog | 0.7334 |
| email | 0.7606 |
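As a minimal usage sketch, the ranker can be queried through the 🤗 text-classification pipeline; the sentences are hypothetical, and the label names and score orientation are whatever this checkpoint's config defines:
```python
from transformers import pipeline
ranker = pipeline("text-classification", model="SkolkovoInstitute/roberta-base-formality-ranker-v1")
# Hypothetical pair: an informal and a formal phrasing of the same request
print(ranker(["hey wanna grab lunch?", "Would you like to join me for lunch tomorrow?"]))
```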
|
readerbench/RoGPT2-large | 5d3292f4fb85e0dd83e1a31399f766b520c9b811 | 2021-07-22T11:23:50.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"ro",
"transformers"
] | text-generation | false | readerbench | null | readerbench/RoGPT2-large | 191 | null | transformers | 3,682 | Model card for RoGPT2-large
---
language:
- ro
---
# RoGPT2: Romanian GPT2 for text generation
All models are available:
* [RoGPT2-base](https://huggingface.co/readerbench/RoGPT2-base)
* [RoGPT2-medium](https://huggingface.co/readerbench/RoGPT2-medium)
* [RoGPT2-large](https://huggingface.co/readerbench/RoGPT2-large)
For code and evaluation check out [GitHub](https://github.com/readerbench/RoGPT2).
#### How to use
```python
# TensorFlow
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-large')
model = TFAutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-large')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='tf')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
# PyTorch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-large')
model = AutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-large')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='pt')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
```
## Training
---
### Corpus Statistics
| Corpus | Total size | Number of words | Number of sentences |
|:------:|:----------:|:---------------:|:-------------------:|
|OSCAR| 11.54 GB | 1745M | 48.46M |
|Wiki-Ro | 0.46 GB | 68M | 1.79M |
|Debates | 0.5 GB | 73M | 3.61M |
|Books | 4.37 GB | 667M | 37.39M |
|News | 0.15 GB | 23M | 0.77M |
### Training Statistics
| Version | Number of parameters | Number of epoch | Duration of an epoch | Context size | Batch size | PPL |
|:-------:|:--------------------:|:---------------:|:--------------------:|:----------:|:----------:|:---:|
| Base | 124M | 15 | 7h | 1024 | 72 | 22.96 |
| Medium | 354M | 10 | 22h | 1024 | 24 | 17.64 |
| Large | 774M | 5 | **45h** | 512 | 16 | **16.77**|
## Evaluation
---
### 1. MOROCO
| Model | Dialect | Md to Ro | Ro to Md |
|:-----------------:|:-------:|:--------:|:--------:|
| KRR + SK | 94.06 | 67.59 | 75.47 |
| BERT-base-ro | 95.98 | 69.90 | 78.08 |
| RoBERT-small | 95.76 | 69.05 | 80.15 |
| RoBERT-base |**97.24**| 68.80 | 82.37 |
| RoBERT-large | 97.21 | 69.50 | **83.26**|
| RoGPT2-base | 96.69 | 69.82 | 77.55 |
| RoGPT2-medium | 96.42 | 69.77 | 80.51 |
| RoGPT2-large | 96.93 |**71.07** | 82.56 |
### 2. LaRoSeDa
| Model | Binary: Accuracy | Binary: F1-Score | Multi-Class: Accuracy | Multi-Class: F1-Score |
|:------------:|:----------------:|:----------------:|:---------------------:|:---------------------:|
|BERT-base-ro | 98.07 | 97.94 | - |79.61 |
| RoDiBERT |**98.40** |**98.31** | - |83.01 |
| RoBERT-small | 97.44 | 97.43 | 89.30 |84.23 |
| RoBERT-base | 98.27 | 98.26 | 90.59 |86.27 |
| RoBERT-large | 98.20 | 98.19 |**90.93** |**86.63** |
| RoGPT2-base | 97.89 | 97.88 |89.65 |84.68 |
|RoGPT2-medium | 98.03 |98.04 | 90.29 | 85.37 |
| RoGPT2-large | 98.06 |98.07 | 90.26 | 84.89 |
### 3. RoSTS
| Model | Spearman dev-set | Spearman test-set | Pearson dev-set | Pearson test-set |
|:------------:|:----------------:|:-----------------:|:---------------:|:----------------:|
|BERT-base-ro | 84.26 | 80.86 | 84.59 | 81.59 |
|RoDiBERT | 77.07 | 71.47 | 77.13 | 72.25 |
|RoBERT-small | 82.06 | 78.06 | 81.66 | 78.49 |
|RoBERT-base | 84.93 | 80.39 | 85.03 | 80.39 |
|RoBERT-large |**86.25** |**83.15** |**86.58** |**83.76** |
|RoGPT2-base | 83.51 | 79.77 | 83.74 | 80.56 |
|RoGPT2-medium | 85.75 | 82.25 | 86.04 | 83.16 |
|RoGPT2-large | 85.70 | 82.64 | 86.14 | 83.46 |
### 4. WMT16
| Model | Decoder method | Ro-En | En-Ro |
|:------------:|:--------------:|:------:|:------:|
|mBART | - |**38.5**|**38.5**|
|OpenNMT | - | - | 24.7 |
|RoGPT2-base |Greedy | 30.37 | 20.27 |
|RoGPT2-base |Beam-search-4 | 31.26 | 22.31 |
|RoGPT2-base |Beam-search-8 | 31.39 | 22.95 |
|RoGPT2-medium |Greedy | 32.48 | 22.18 |
|RoGPT2-medium |Beam-search-4 | 34.08 | 24.03 |
|RoGPT2-medium |Beam-search-8 | 34.16 | 24.13 |
|RoGPT2-large |Greedy | 33.69 | 23.31 |
|RoGPT2-large |Beam-search-4 |34.40 |24.23 |
|RoGPT2-large |Beam-search-8 |34.51 |24.32 |
### 5. XQuAD
| Model |Decoder method | EM | F1-Score |
|:------------:|:-------------:|:-----:|:--------:|
|BERT-base-ro | - | 47.89 | 63.74 |
|RoDiBERT | - | 21.76 | 34.57 |
|RoBERT-small | - | 30.84 | 45.17 |
|RoBERT-base | - | 53.52 | 70.04 |
|RoBERT-large | - | 55.46 | 69.64 |
|mBERT | - | 59.9 | 72.7 |
|XLM-R Large | - |**69.7**|**83.6**|
|RoGPT2-base | Greedy | 23.69 | 35.97 |
|RoGPT2-base | Beam-search-4 | 24.11 | 35.27 |
|RoGPT2-medium | Greedy | 29.66 | 44.74 |
|RoGPT2-medium | Beam-search-4 | 31.59 | 45.32 |
|RoGPT2-large | Greedy | 29.74 | 42.98 |
|RoGPT2-large | Beam-search-4 | 29.66 | 43.05 |
|RoGPT2-base-en-ro | Greedy | 23.86 | 34.27 |
|RoGPT2-base-en-ro | Beam-search-4 | 25.04 | 34.51 |
|RoGPT2-medium-en-ro | Greedy | 27.05 | 39.75 |
|RoGPT2-medium-en-ro | Beam-search-4 | 27.64 | 39.11 |
|RoGPT2-large-en-ro | Greedy | 28.40 | 39.79 |
|RoGPT2-large-en-ro | Beam-search-4 | 28.73 | 39.71 |
|RoGPT2-large-en-ro-mask | Greedy | 31.34 | 44.71 |
|RoGPT2-large-en-ro-mask| Beam-search-4 | 31.59 | 43.53 |
### 6. Wiki-Ro: LM
| Model | PPL dev | PPL test |
|:------------:|:-------:|:--------:|
|BERT-base-ro | 29.0897 | 28.0043|
|RoGPT2-base | 34.3795 | 33.7460|
|RoGPT2-medium | 23.7879 | 23.4581|
|RoGPT2-large | **21.7491** | **21.5200** |
### 7. RoGEC
| Model | Decoder method | P | R | F<sub>0.5</sub> |
|:-----:|:--------------:|:---:|:---:|:------:|
|Transformer-tiny | Beam-search | 53.53 | 26.36 | 44.38 |
|Transformer-base Finetuning | Beam-search | 56.05 | 46.19 | 53.76 |
|Transformer-base Finetuning | Beam-search-LM | 50.68 | 45.39 | 49.52 |
|Transformer-base Finetuning | Beam-search-norm-LM | 51.06 | 45.43 | 49.83 |
|RoGPT2-base | Greedy | 59.02 | 49.35 | 56.80 |
|RoGPT2-base | Beam-search-4 | 65.23 | 49.26 | 61.26 |
|RoGPT2-base |Beam-search-8 | 65.88 | 49.64 | 61.84 |
|RoGPT2-medium | Greedy | 69.97 | 57.94 | 67.18 |
|RoGPT2-medium | Beam-search-4 | **72.46** | **57.99** | **69.01** |
|RoGPT2-medium | Beam-search-8 | 72.24 | 57.69 | 68.77 |
|RoGP2-large | Greedy | 61.90 | 49.09 | 58.83 |
|RoGP2-large | Beam-search-4 | 65.24 | 49.43 | 61.32 |
|RoGP2-large | Beam-search-8 | 64.96 | 49.22 | 61.06 |
|RoGPT2-base* | Greedy | 68.67 | 49.60 | 63.77 |
|RoGPT2-base* | Beam-search-4 | 71.16 | 50.53 | 65.79 |
|RoGPT2-base* | Beam-search-8 | 71.68 | 50.65 | 66.18 |
|RoGPT2-medium* | Greedy | 58.21 | 43.32 | 54.47 |
|RoGPT2-medium* | Beam-search-4 | 68.31 | 43.78 | 61.43 |
|RoGPT2-medium* | Beam-search-8 | 68.68 | 43.99 | 61.75 |
|RoGPT2-large* | Greedy | 64.86 | 41.30 | 58.22 |
|RoGPT2-large* | Beam-search-4 | 65.57 | 41.00 | 58.55 |
|RoGPT2-large* | Beam-search-8 | 65.44 | 41.09 | 58.50 |
**Note**: * the models were trained using the dataset of 3,000,000 artificially generated pairs
## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
## How to cite
---
Niculescu, M. A., Ruseti, S., and Dascalu, M. (submitted). RoGPT2: Romanian GPT2 for Text Generation
|
StanfordAIMI/stanford-deidentifier-only-i2b2 | 45c5edfa4dd56d2d65a7b23eb2f7e4aee4a44584 | 2022-07-18T03:49:38.000Z | [
"pytorch",
"bert",
"en",
"dataset:radreports",
"transformers",
"token-classification",
"sequence-tagger-model",
"pubmedbert",
"uncased",
"radiology",
"biomedical",
"license:mit"
] | token-classification | false | StanfordAIMI | null | StanfordAIMI/stanford-deidentifier-only-i2b2 | 191 | 1 | transformers | 3,683 | ---
widget:
- text: "PROCEDURE: Chest xray. COMPARISON: last seen on 1/1/2020 and also record dated of March 1st, 2019. FINDINGS: patchy airspace opacities. IMPRESSION: The results of the chest xray of January 1 2020 are the most concerning ones. The patient was transmitted to another service of UH Medical Center under the responsability of Dr. Perez. We used the system MedClinical data transmitter and sent the data on 2/1/2020, under the ID 5874233. We received the confirmation of Dr Perez. He is reachable at 567-493-1234."
- text: "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
tags:
- token-classification
- sequence-tagger-model
- pytorch
- transformers
- pubmedbert
- uncased
- radiology
- biomedical
datasets:
- radreports
language:
- en
license: mit
---
Stanford de-identifier was trained on a variety of radiology and biomedical documents with the goal of automating the de-identification process while reaching satisfactory accuracy for use in production. Manuscript in proceedings.
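As a minimal usage sketch, the checkpoint can be run through the 🤗 token-classification pipeline; the example text is taken from the widget above, and the entity label names are whatever this checkpoint's config defines:
```python
from transformers import pipeline
deidentifier = pipeline(
    "token-classification",
    model="StanfordAIMI/stanford-deidentifier-only-i2b2",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
report = "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
for entity in deidentifier(report):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```
|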
Cameron/BERT-rtgender-opgender-annotations | 30c103f3f0b41b05c9b1814be20f59c87b7fbb42 | 2021-05-18T17:34:57.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Cameron | null | Cameron/BERT-rtgender-opgender-annotations | 190 | null | transformers | 3,684 | Entry not found |
FPTAI/velectra-base-discriminator-cased | f594fee501933d92d8c13285c969495681d0cc00 | 2020-09-30T03:52:16.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | FPTAI | null | FPTAI/velectra-base-discriminator-cased | 190 | null | transformers | 3,685 | Entry not found |
Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700 | 6889649cf67fd629a47fbdd9e11560626d22ac64 | 2021-08-02T18:42:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700 | 190 | 3 | transformers | 3,686 | Entry not found |
arampacha/wav2vec2-xls-r-1b-hy | 5a34466c5a07b932732d804e02d2a29ea608bebd | 2022-03-23T18:34:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hy-AM",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"hy",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-xls-r-1b-hy | 190 | 1 | transformers | 3,687 | ---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hy
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-1b-hy-cv
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice hy-AM
args: hy-AM
metrics:
- type: wer
value: 10.811865729898516
name: WER LM
- type: cer
value: 2.2205361659079412
name: CER LM
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hy
metrics:
- name: Test WER
type: wer
value: 18.219363037089988
- name: Test CER
type: cer
value: 7.075988867335752
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hy-cv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_4/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1693
- Wer: 0.2373
- Cer: 0.0429
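As a minimal usage sketch, the fine-tuned checkpoint can be loaded through the 🤗 automatic-speech-recognition pipeline; the audio path below is a placeholder for a 16 kHz Armenian recording:
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="arampacha/wav2vec2-xls-r-1b-hy")
# "speech.wav" is a placeholder path; the model expects 16 kHz mono audio
print(asr("speech.wav")["text"])
```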
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.255 | 7.24 | 500 | 0.2978 | 0.4294 | 0.0758 |
| 1.0058 | 14.49 | 1000 | 0.1883 | 0.2838 | 0.0483 |
| 0.9371 | 21.73 | 1500 | 0.1813 | 0.2627 | 0.0457 |
| 0.8999 | 28.98 | 2000 | 0.1693 | 0.2373 | 0.0429 |
| 0.8814 | 36.23 | 2500 | 0.1760 | 0.2420 | 0.0435 |
| 0.8364 | 43.47 | 3000 | 0.1765 | 0.2416 | 0.0419 |
| 0.8019 | 50.72 | 3500 | 0.1758 | 0.2311 | 0.0398 |
| 0.7665 | 57.96 | 4000 | 0.1745 | 0.2240 | 0.0399 |
| 0.7376 | 65.22 | 4500 | 0.1717 | 0.2190 | 0.0385 |
| 0.716 | 72.46 | 5000 | 0.1700 | 0.2147 | 0.0382 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
monologg/koelectra-base-v3-goemotions | 9b9385c0190252577bc51bf222426e726fea51d0 | 2021-02-09T14:40:17.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | monologg | null | monologg/koelectra-base-v3-goemotions | 190 | null | transformers | 3,688 | Entry not found |
mrm8488/roberta-base-1B-1-finetuned-squadv1 | 4a0a7703a4ca3bc57af44753073ccfcb03b45a1f | 2021-05-20T18:26:13.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"en",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/roberta-base-1B-1-finetuned-squadv1 | 190 | null | transformers | 3,689 | ---
language: en
---
# RoBERTa-base (1B-1) + SQuAD v1 ❓
[roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
RoBERTa Pretrained on Smaller Datasets
[NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). They released 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: They combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1.
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type roberta \
--model_name_or_path 'nyu-mll/roberta-base-1B-1' \
--do_eval \
--do_train \
--do_lower_case \
--train_file /content/dataset/train-v1.1.json \
--predict_file /content/dataset/dev-v1.1.json \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/output \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **72.62** |
| **F1** | **82.19** |
```json
{
'exact': 72.62062440870388,
'f1': 82.19430877136834,
'total': 10570,
'HasAns_exact': 72.62062440870388,
'HasAns_f1': 82.19430877136834,
'HasAns_total': 10570,
'best_exact': 72.62062440870388,
'best_exact_thresh': 0.0,
'best_f1': 82.19430877136834,
'best_f1_thresh': 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
{'answer': 'A new strain of flu', 'end': 19, 'score': 0.04702283976040074, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
facebook/data2vec-vision-large-ft1k | 9933efaca13b79863d36579c9a339c84cb0e9eca | 2022-05-03T15:22:49.000Z | [
"pytorch",
"tf",
"data2vec-vision",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-1k",
"arxiv:2202.03555",
"arxiv:2106.08254",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/data2vec-vision-large-ft1k | 190 | 2 | transformers | 3,690 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-1k
---
# Data2Vec-Vision (large-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion and fine-tuned on ImageNet-1k (1.2 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli and first released in [this repository](https://github.com/facebookresearch/data2vec_vision/tree/main/beit).
Disclaimer: The Facebook team releasing this model did not write a model card for it, so this model card has been written by the Hugging Face team.
## Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
## Abstract
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.*
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=data2vec-vision) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitFeatureExtractor, Data2VecVisionForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('facebook/data2vec-vision-large-ft1k')
model = Data2VecVisionForImageClassification.from_pretrained('facebook/data2vec-vision-large-ft1k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained and fine-tuned on [ImageNet-1k](http://www.image-net.org/), a dataset consisting of 1.2 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to the [original paper](https://arxiv.org/abs/2106.08254) and the [original codebase](https://github.com/facebookresearch/data2vec_vision/tree/main/beit)
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance.
We evaluated the model on `ImageNet1K` and got a top-1 accuracy of **86.50**, while the original paper reports a top-1 accuracy of 86.2.
If you want to reproduce our evaluation process you can use [This Colab Notebook](https://colab.research.google.com/drive/1xl8hcdoDYVt5aSk1AYH-nLm1Sgvhac4L?usp=sharing)
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
jorge-henao/gpt2-small-spanish-historias-conflicto-col | 0cd442c9fd4fb23202c25f79718ababf200f8ac6 | 2022-07-10T17:30:28.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | jorge-henao | null | jorge-henao/gpt2-small-spanish-historias-conflicto-col | 190 | null | transformers | 3,691 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-historias-conflicto-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-historias-conflicto-col
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8576 | 2.49 | 30000 | 1.2388 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
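As a minimal usage sketch, the fine-tuned checkpoint can be sampled through the 🤗 text-generation pipeline; the Spanish prompt and decoding settings below are illustrative assumptions:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jorge-henao/gpt2-small-spanish-historias-conflicto-col")
# Hypothetical prompt ("Once upon a time" in Spanish); sampling settings are arbitrary
output = generator("Había una vez", max_new_tokens=50, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```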
|
SajjadAyoubi/distil-bigbird-fa-zwnj | 98fd06440980957e6428dc823e16d56593fb805c | 2021-10-28T13:14:34.000Z | [
"pytorch",
"big_bird",
"fill-mask",
"arxiv:1810.04805",
"arxiv:2005.12515",
"arxiv:2007.14062",
"transformers",
"autotrain_compatible"
] | fill-mask | false | SajjadAyoubi | null | SajjadAyoubi/distil-bigbird-fa-zwnj | 189 | null | transformers | 3,692 | <span align="center">
<a href="https://huggingface.co/SajjadAyoubi/"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=SajjadAyoubi&color=yellow"></a>
<a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a>
</span>
# ParsBigBird: Persian Bert For **Long-Range** Sequences
The [Bert](https://arxiv.org/abs/1810.04805) and [ParsBert](https://arxiv.org/abs/2005.12515) algorithms can handle texts with token lengths of up to 512; however, many tasks such as summarization and question answering require longer texts. In our work, we have trained the [BigBird](https://arxiv.org/abs/2007.14062) model for the Persian (Farsi) language to process texts of up to 4096 tokens using sparse attention.
## Evaluation: 🌡️
We have evaluated the model on three tasks with different sequence lengths
| Name | Params | SnappFood (F1) | Digikala Magazine(F1) | PersianQA (F1) |
| :--------------------------------------------------------------: | :----: | :-----------------: | :---------------: | :--------------: |
| [distil-bigbird-fa-zwnj](https://github.com/sajjjadayobi/ParsBigBird) | 78M | 85.43% | **94.05%** | **73.34%** |
| [bert-base-fa](https://github.com/hooshvare/parsbert) | 118M | **87.98%** | 93.65% | 70.06% |
- Despite being as big as DistilBERT, the model performs as well as ParsBert and is much better on PersianQA, which requires much more context
- This evaluation was based on `max_length=2048` (it can be increased up to 4096)
## How to use❓
### As Contextualized Word Embedding
```python
from transformers import BigBirdModel, AutoTokenizer
MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj"
# by default it's in `block_sparse` mode with block_size=32
model = BigBirdModel.from_pretrained(MODEL_NAME, block_size=32)
# you can use full attention like the following: use this when input isn't longer than 512
model = BigBirdModel.from_pretrained(MODEL_NAME, attention_type="original_full")
text = "😃 امیدوارم مدل بدردبخوری باشه چون خیلی طول کشید تا ترین بشه"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens) # contextualized embedding
```
### As Fill Blank
```python
from transformers import pipeline
MODEL_NAME = 'SajjadAyoubi/distil-bigbird-fa-zwnj'
fill = pipeline('fill-mask', model=MODEL_NAME, tokenizer=MODEL_NAME)
results = fill('تهران پایتخت [MASK] است.')
print(results[0]['token_str'])
>>> 'ایران'
```
## Pretraining details: 🔭
This model was pretrained using a masked language model (MLM) objective on the Persian section of the Oscar dataset. Following the original BERT training, 15% of tokens were masked. This was first described in this [paper](https://arxiv.org/abs/2007.14062) and released in this [repository](https://github.com/google-research/bigbird). Documents longer than 4096 tokens were split into multiple documents, while documents much shorter than 4096 were merged using the [SEP] token. The model is warm-started from `distilbert-fa`'s [checkpoint](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base).
- For more details, you can take a look at config.json at the model card in 🤗 Model Hub
## Fine Tuning Recommendations: 🐤
Due to the model's memory requirements, `gradient_checkpointing` and `gradient_accumulation` should be used to maintain a reasonable batch size. Since this model isn't really big, it's a good idea to first fine-tune it on your dataset using the masked LM objective (also called intermediate fine-tuning) before training on the main task. In block_sparse mode, it doesn't matter how many tokens are input; the model only attends to 256 tokens. Furthermore, for sequence lengths up to 512, original_full attention should be used instead of block sparse.
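A configuration sketch of these recommendations follows; the hyperparameter values are placeholders, not the authors' settings:
```python
from transformers import AutoTokenizer, BigBirdForSequenceClassification, TrainingArguments
MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Long inputs: keep the default block_sparse attention
model = BigBirdForSequenceClassification.from_pretrained(MODEL_NAME, block_size=32)
# Inputs up to 512 tokens: switch to full attention instead
# model = BigBirdForSequenceClassification.from_pretrained(MODEL_NAME, attention_type="original_full")
args = TrainingArguments(
    output_dir="distil-bigbird-fa-finetuned",
    per_device_train_batch_size=2,   # small per-device batches to fit memory
    gradient_accumulation_steps=16,  # accumulation keeps the effective batch size reasonable
    gradient_checkpointing=True,     # trades compute for memory, as recommended above
    learning_rate=2e-5,
    num_train_epochs=3,
)
```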
### Fine Tuning Examples 👷♂️👷♀️
| Dataset | Fine Tuning Example |
| ------------------------------------- | ------------------------------------------------------------ |
| Digikala Magazine Text Classification | <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> |
## Contact us: 🤝
If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.
## Citation: ↩️
We didn't publish any papers on this work. However, if you use it in your research, please cite us properly with an entry like the one below.
```bibtex
@misc{ParsBigBird,
author = {Ayoubi, Sajjad},
title = {ParsBigBird: Persian Bert For Long-Range Sequences},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SajjjadAyobi/ParsBigBird}},
}
```
|
eduardofv/stsb-m-mt-es-distiluse-base-multilingual-cased-v1 | dfd98175cd8ebb4da3ebd46c9409f78d4c7e0b73 | 2021-07-06T18:06:30.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"es",
"dataset:stsb_multi_mt",
"sentence-transformers",
"sentence-similarity"
] | feature-extraction | false | eduardofv | null | eduardofv/stsb-m-mt-es-distiluse-base-multilingual-cased-v1 | 189 | null | sentence-transformers | 3,693 | ---
language: es
datasets:
- stsb_multi_mt
tags:
- sentence-similarity
- sentence-transformers
---
This is a test model that was fine-tuned using the Spanish datasets from [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) in order to understand and benchmark STS models.
## Model and training data description
This model was built taking `distiluse-base-multilingual-cased-v1` and training it on a Semantic Textual Similarity task using a modified version of the training script for STS from Sentence Transformers (the modified script is included in the repo). It was trained using the Spanish datasets from [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) which are the STSBenchmark datasets automatically translated to other languages using deepl.com. Refer to the dataset repository for more details.
## Intended uses & limitations
This model was built just as a proof-of-concept on STS fine-tuning using Spanish data and no specific use other than getting a sense on how this training works.
## How to use
You may use it as any other STS trained model to extract sentence embeddings. Check Sentence Transformers documentation.
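For example, with the Sentence Transformers library (the sentence pair is hypothetical; cosine similarity is the score this model family is trained to reflect):
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("eduardofv/stsb-m-mt-es-distiluse-base-multilingual-cased-v1")
# Hypothetical Spanish sentence pair
sentences = ["El gato duerme en el sofá.", "Un felino descansa sobre el sillón."]
embeddings = model.encode(sentences)
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```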
## Training procedure
This model was trained using this [Colab Notebook](https://colab.research.google.com/drive/1ZNjDMFdy_lKhnD9BtbqzSbQ4LNz638ZA?usp=sharing)
## Evaluation results
Evaluating `distiluse-base-multilingual-cased-v1` on the Spanish test dataset before training results in:
```
2021-07-06 17:44:46 - EmbeddingSimilarityEvaluator: Evaluating the model on dataset:
2021-07-06 17:45:00 - Cosine-Similarity : Pearson: 0.7662 Spearman: 0.7583
2021-07-06 17:45:00 - Manhattan-Distance: Pearson: 0.7805 Spearman: 0.7772
2021-07-06 17:45:00 - Euclidean-Distance: Pearson: 0.7816 Spearman: 0.7778
2021-07-06 17:45:00 - Dot-Product-Similarity: Pearson: 0.6610 Spearman: 0.6536
```
While the fine-tuned version with the defaults of the training script and the Spanish training dataset results in:
```
2021-07-06 17:49:22 - EmbeddingSimilarityEvaluator: Evaluating the model on stsb-multi-mt-test dataset:
2021-07-06 17:49:24 - Cosine-Similarity : Pearson: 0.8265 Spearman: 0.8207
2021-07-06 17:49:24 - Manhattan-Distance: Pearson: 0.8131 Spearman: 0.8190
2021-07-06 17:49:24 - Euclidean-Distance: Pearson: 0.8129 Spearman: 0.8190
2021-07-06 17:49:24 - Dot-Product-Similarity: Pearson: 0.7773 Spearman: 0.7692
```
In our [STS Evaluation repository](https://github.com/eduardofv/sts_eval) we compare the performance of this model with other models from Sentence Transformers and Tensorflow Hub using the standard STSBenchmark and the 2017 STSBenchmark Task 3 for Spanish.
## Resources
- Training dataset [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt)
- Sentence Transformers [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
- Check [sts_eval](https://github.com/eduardofv/sts_eval) for a comparison with Tensorflow and Sentence-Transformers models
- Check the [development environment to run the scripts and evaluation](https://github.com/eduardofv/ai-denv)
|
sultan/BioM-ELECTRA-Large-SQuAD2 | 7108315378d3adb4168a79035366e1b9a563f0b3 | 2021-08-06T22:27:10.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sultan | null | sultan/BioM-ELECTRA-Large-SQuAD2 | 189 | 1 | transformers | 3,694 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
We fine-tuned BioM-ELECTRA-Large, which was pre-trained on PubMed Abstracts, on the SQuAD2.0 dataset. Fine-tuning the biomedical language model on the SQuAD dataset helps improve the score on the BioASQ challenge. If you plan to work with BioASQ or biomedical QA tasks, it's better to use this model over BioM-ELECTRA-Large. This model (TensorFlow version) took the lead in the BioASQ9b-Factoid challenge (Batch 5) under the name UDEL-LAB2. To see the full details of BioASQ9B results, please check this link http://participants-area.bioasq.org/results/9b/phaseB/ (you need to register).
The Hugging Face library doesn't implement the layer-wise decay feature, which affects performance on the SQuAD task. The reported result of BioM-ELECTRA-SQuAD in our paper is 88.3 (F1), since we use the ELECTRA open-source code with a TF checkpoint, which uses layer-wise decay.
Training Script
```bash
python run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Large-Discriminator \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--dataloader_num_workers 20 \
--preprocessing_num_workers 20 \
--version_2_with_negative \
--num_train_epochs 2 \
--learning_rate 5e-5 \
--max_seq_length 512 \
--doc_stride 128 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 6 \
--per_device_eval_batch_size 128 \
--fp16 \
--fp16_opt_level O1 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--output_dir out
```
Evaluation results on SQuAD2.0 Dev Dataset
```
exact = 84.33420365535248
f1 = 87.49354241889522
total = 11873
HasAns_exact = 80.43184885290148
HasAns_f1 = 86.75958656200127
HasAns_total = 5928
NoAns_exact = 88.22539949537426
NoAns_f1 = 88.22539949537426
NoAns_total = 5945
best_exact = 84.33420365535248
best_exact_thresh = 0.0
best_f1 = 87.49354241889522
best_f1_thresh = 0.0
epoch = 2.0
```
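For quick inference, as opposed to reproducing the evaluation, a minimal question-answering pipeline sketch; the passage and question are hypothetical examples:
```python
from transformers import pipeline
qa = pipeline("question-answering", model="sultan/BioM-ELECTRA-Large-SQuAD2")
context = "Metformin is a first-line medication for the treatment of type 2 diabetes."
question = "What is metformin used to treat?"
print(qa(question=question, context=context))
```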
To reproduce results in Google Colab:
- Make sure you have GPU enabled.
- Clone and install required libraries through this code
!git clone https://github.com/huggingface/transformers
!pip3 install -e transformers
!pip3 install sentencepiece
!pip3 install -r /content/transformers/examples/pytorch/question-answering/requirements.txt
- Run this python code:
```python
python /content/transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Large-SQuAD2 \
--do_eval \
--version_2_with_negative \
--per_device_eval_batch_size 8 \
--dataset_name squad_v2 \
--overwrite_output_dir \
--fp16 \
--output_dir out
```
- You don't need to download the SQuAD2 dataset. The code will download it from the HuggingFace datasets hub.
- Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
- We added examples to fine-tune BioM-ELECTRA-Large on SQuAD and BioASQ7B using TensorFlow and TPU here https://github.com/salrowili/BioM-Transformers/tree/main/examples . In this example we show that we achieve 88.22 score in SQuAD2.0 since Tensor Flow code has Layer-wise decay feature.
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
uer/roberta-base-finetuned-jd-binary-chinese | b270d7d29fb2c4596908f2502a48a04cf459eb57 | 2022-02-20T07:57:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"zh",
"arxiv:1909.05658",
"arxiv:1708.02657",
"transformers"
] | text-classification | false | uer | null | uer/roberta-base-finetuned-jd-binary-chinese | 189 | 2 | transformers | 3,695 | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://arxiv.org/abs/1909.05658). You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) (in UER-py format), or via HuggingFace from the links below:
| Dataset | Link |
| :-----------: | :-------------------------------------------------------: |
| **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] |
| **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] |
| **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] |
| **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] |
| **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] |
## How to use
You can use this model directly with a pipeline for text classification (take the case of roberta-base-finetuned-chinanews-chinese):
```python
>>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': 'mainland China politics', 'score': 0.7211663722991943}]
```
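The same pattern applies to the JD binary sentiment model this card describes; a minimal sketch using the widget sentence (the label strings depend on this checkpoint's config):
```python
>>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-jd-binary-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-jd-binary-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("这本书真的很不错")
```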
## Training data
5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in corresponding [paper](https://arxiv.org/abs/1708.02657).
## Training procedure
Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on development set is achieved. We use the same hyper-parameters on different models.
Taking the case of roberta-base-finetuned-chinanews-chinese:
```
python3 run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/glyph/chinanews/train.tsv \
--dev_path datasets/glyph/chinanews/dev.tsv \
--output_model_path models/chinanews_classifier_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{zhang2017encoding,
title={Which encoding is the best for text classification in chinese, english, japanese and korean?},
author={Zhang, Xiang and LeCun, Yann},
journal={arXiv preprint arXiv:1708.02657},
year={2017}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese
[jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese
[dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese
[ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese
[chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese |
d0r1h/LEDBill | 714bfe06554698b4945e72ffb0ef406df0347c11 | 2022-07-12T08:11:45.000Z | [
"pytorch",
"led",
"text2text-generation",
"dataset:billsum",
"arxiv:2004.05150",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | d0r1h | null | d0r1h/LEDBill | 189 | null | transformers | 3,696 | ---
license: apache-2.0
datasets: billsum
tags:
- summarization
widget:
- text: "The people of the State of California do enact as follows: SECTIONHEADER Section 1170.02 is added to the Penal Code, to read: 1170.02. A prisoner is not eligible for resentence or recall pursuant to subdivision (e) of Section 1170 if he or she was convicted of first-degree murder if the victim was a peace officer, as defined in Section 830.1, 830.2, 830.3, 830.31, 830.32, 830.33, 830.34, 830.35, 830.36, 830.37, 830.4, 830.5, 830.6, 830.10, 830.11, or 830.12, who was killed while engaged in the performance of his or her duties, and the individual knew, or reasonably should have known, that the victim was a peace officer engaged in the performance of his or her duties, or the victim was a peace officer or a former peace officer under any of the above-enumerated sections, and was intentionally killed in retaliation for the performance of his or her official duties. SECTIONHEADER Section 3550 of the Penal Code is amended to read: 3550. Notwithstanding any other law, except as provided in subdivision (b), if the head physician of an institution in which a prisoner is incarcerated determines, as provided in this section, that the prisoner is permanently medically incapacitated with a medical condition that renders him or her permanently unable to perform activities of basic daily living, and results in the prisoner requiring 24-hour care, and that incapacitation did not exist at the time of sentencing, the prisoner shall be granted medical parole if the Board of Parole Hearings determines that the conditions under which he or she would be released would not reasonably pose a threat to public safety. This section does not alter or diminish the rights conferred under the Victims Bill of Rights Act of 2008 . Subdivision (a) does not apply to any of the following: A prisoner sentenced to death or life in prison without possibility of parole. A prisoner who is serving a sentence for which parole, pursuant to subdivision (a), is prohibited by any initiative statute. A prisoner who was convicted of first-degree murder if the victim was a peace officer, as defined in Section 830.1, 830.2, 830.3, 830.31, 830.32, 830.33, 830.34, 830.35, 830.36, 830.37, 830.4, 830.5, 830.6, 830.10, 830.11, or 830.12, who was killed while engaged in the performance of his or her duties, and the individual knew, or reasonably should have known, that the victim was a peace officer engaged in the performance of his or her duties, or the victim was a peace officer or a former peace officer under any of the above-enumerated sections, and was intentionally killed in retaliation for the performance of his or her official duties. When a physician employed by the Department of Corrections and Rehabilitation who is the primary care provider for a prisoner identifies a prisoner that he or she believes meets the medical criteria for medical parole specified in subdivision (a), the primary care physician shall recommend to the head physician of the institution where the prisoner is located that the prisoner be referred to the Board of Parole Hearings for consideration for medical parole. 
Within 30 days of receiving that recommendation, if the head physician of the institution concurs in the recommendation of the primary care physician, he or she shall refer the matter to the Board of Parole Hearings using a standardized form and format developed by the department, and if the head physician of the institution does not concur in the recommendation, he or she shall provide the primary care physician with a written explanation of the reasons for denying the referral. Notwithstanding any other provisions of this section, the prisoner or his or her family member or designee may independently request consideration for medical parole by contacting the head physician at the prison or the department. Within 30 days of receiving the request, the head physician of the institution shall, in consultation with the prisoners primary care physician, make a determination regarding whether the prisoner meets the criteria for medical parole as specified in subdivision (a) and, if the head physician of the institution determines that the prisoner satisfies the criteria set forth in subdivision (a), he or she shall refer the matter to the Board of Parole Hearings using a standardized form and format developed by the department. If the head physician of the institution does not concur in the recommendation, he or she shall provide the prisoner or his or her family member or designee with a written explanation of the reasons for denying the application. The Department of Corrections and Rehabilitation shall complete parole plans for inmates referred to the Board of Parole Hearings for medical parole consideration. The parole plans shall include, but not be limited to, the inmates plan for residency and medical care. Notwithstanding any other law, medical parole hearings shall be conducted by two-person panels consisting of at least one commissioner. In the event of a tie vote, the matter shall be referred to the full board for a decision. Medical parole hearings may be heard in absentia. Upon receiving a recommendation from the head physician of the institution where a prisoner is located for the prisoner to be granted medical parole pursuant to subdivision (c) or (d), the board, as specified in subdivision (f), shall make an independent judgment regarding whether the conditions under which the inmate would be released pose a reasonable threat to public safety, and make written findings related thereto. Notwithstanding any other law, the board or the Division of Adult Parole Operations shall have the authority to impose any reasonable conditions on prisoners subject to medical parole supervision pursuant to subdivision (a), including, but not limited to, the requirement that the parolee submit to electronic monitoring. As a further condition of medical parole, pursuant to subdivision (a), the parolee may be required to submit to an examination by a physician selected by the board for the purpose of diagnosing the parolees current medical condition. In the event such an examination takes place, a report of the examination and diagnosis shall be submitted to the board by the examining physician. If the board determines, based on that medical examination, that the persons medical condition has improved to the extent that the person no longer qualifies for medical parole, the board shall return the person to the custody of the department. 
Notwithstanding any other law establishing maximum periods for parole, a prisoner sentenced to a determinate term who is placed on medical parole supervision prior to the earliest possible release date and who remains eligible for medical parole, shall remain on medical parole, pursuant to subdivision (a), until that earliest possible release date, at which time the parolee shall commence serving that period of parole provided by, and under the provisions of, Chapter 8 of Title 1. Notwithstanding any other law establishing maximum periods for parole, a prisoner sentenced to an indeterminate term who is placed on medical parole supervision prior to the prisoners minimum eligible parole date, and who remains eligible for medical parole, shall remain on medical parole pursuant to subdivision (a) until that minimum eligible parole date, at which time the parolee shall be eligible for parole consideration under all other provisions of Chapter 8 of Title 1. The Department of Corrections and Rehabilitation shall, at the time a prisoner is placed on medical parole supervision pursuant to subdivision (a), ensure that the prisoner has applied for any federal entitlement programs for which the prisoner is eligible, and has in his or her possession a discharge medical summary, full medical records, parole medications, and all property belonging to the prisoner that was under the control of the department. Any additional records shall be sent to the prisoners forwarding address after release to health care-related parole supervision. The provisions for medical parole set forth in this title shall not affect an inmates eligibility for any other form of parole or release provided by law. (1) Notwithstanding any other law, the Department of Corrections and Rehabilitation shall give notice to the county of commitment and the proposed county of release, if that county is different than the county of commitment, of any medical parole hearing as described in subdivision (f), and of any medical parole release as described in subdivision (g). Notice shall be made at least 30 days, or as soon as feasible, prior to the time any medical parole hearing or medical parole release is scheduled for an inmate receiving medical parole consideration, regardless of whether the inmate is sentenced either determinately or indeterminately."
model-index:
- name: d0r1h/LEDBill
results:
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 38.6502
verified: true
- name: ROUGE-2
type: rouge
value: 18.5458
verified: true
- name: ROUGE-L
type: rouge
value: 25.6561
verified: true
- name: ROUGE-LSUM
type: rouge
value: 33.1575
verified: true
- name: loss
type: loss
value: 2.1305277347564697
verified: true
- name: gen_len
type: gen_len
value: 288.372
verified: true
---
# Longformer Encoder-Decoder (LED) fine-tuned on Billsum
This model is a fine-tuned version of [led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [billsum](https://huggingface.co/datasets/billsum) dataset.
As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base) since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times.
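The following is a minimal sketch of that position-embedding copy, assuming the current 🤗 Transformers BART implementation (the attribute path, offset handling, and tiling below are illustrative, not the official conversion script):
```python
import torch
from transformers import BartModel

bart = BartModel.from_pretrained("facebook/bart-base")

# bart-base learns position embeddings for 1024 positions, stored with a small
# implementation-specific offset at the top of the matrix.
pos_emb = bart.get_encoder().embed_positions.weight
offset = pos_emb.shape[0] - bart.config.max_position_embeddings

# Tile the 1024 learned rows 16 times -> 16 * 1024 = 16384 positions.
tiled = pos_emb[offset:].detach().repeat(16, 1)
print(tiled.shape)  # torch.Size([16384, 768])
```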
## How to use
```Python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("d0r1h/LEDBill")
model = AutoModelForSeq2SeqLM.from_pretrained("d0r1h/LEDBill", return_dict_in_generate=True).to(device)

case = "......."  # the bill text to summarize
input_ids = tokenizer(case, return_tensors="pt").input_ids.to(device)

# LED expects global attention on at least one token; here the first (<s>) token
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1

sequences = model.generate(input_ids,
                           global_attention_mask=global_attention_mask).sequences
summary = tokenizer.batch_decode(sequences,
                                 skip_special_tokens=True)
```
## Evaluation results
When the model is used for summarizing Billsum documents (10 samples), it achieves the following results:
| Model | rouge1-f | rouge1-p | rouge2-f | rouge2-p | rougeL-f | rougeL-p |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| LEDBill | **34** | **37** | **15** | **16** | **30** | **32** |
| led-base | 2 | 15 | 0 | 0 | 2 | 15 |
[This notebook](https://colab.research.google.com/drive/1iEEFbWeTGUSDesmxHIU2QDsPQM85Ka1K?usp=sharing) shows how *led* can effectively be used for downstream tasks such as summarization.
|
Finnish-NLP/roberta-large-wechsel-finnish | d12d05c3dd60b277728436a5cea6e50262f2d749 | 2022-06-13T16:13:27.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1907.11692",
"arxiv:2112.06598",
"transformers",
"finnish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Finnish-NLP | null | Finnish-NLP/roberta-large-wechsel-finnish | 188 | 1 | transformers | 3,697 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- roberta
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
widget:
- text: "Moikka olen <mask> kielimalli."
---
# RoBERTa large model trained with WECHSEL method for Finnish
RoBERTa model pretrained on the Finnish language using a masked language modeling (MLM) objective with the WECHSEL method. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta).
WECHSEL method (Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models) was introduced in [this paper](https://arxiv.org/abs/2112.06598) and first released in [this repository](https://github.com/CPJKU/wechsel).
This model is case-sensitive: it makes a difference between finnish and Finnish.
## Model description
Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs.
## WECHSEL method
Using the WECHSEL method, we first took the pretrained English [roberta-large](https://huggingface.co/roberta-large) model, replaced its tokenizer with our Finnish tokenizer and initialized the model's token embeddings so that they are close to semantically similar English tokens, utilizing multilingual static word embeddings (by fastText) covering English and Finnish. We were able to confirm the WECHSEL paper's finding that this method saves pretraining time and thus computing resources. To get an idea of the training time savings, you can check the table below illustrating the MLM evaluation accuracies during pretraining compared to [Finnish-NLP/roberta-large-finnish-v2](https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2), which was trained from scratch:
| | 10k train steps | 100k train steps | 200k train steps | 270k train steps |
|------------------------------------------|------------------|------------------|------------------|------------------|
|Finnish-NLP/roberta-large-wechsel-finnish |37.61 eval acc |58.14 eval acc |61.60 eval acc |62.77 eval acc |
|Finnish-NLP/roberta-large-finnish-v2 |13.83 eval acc |55.87 eval acc |58.58 eval acc |59.47 eval acc |
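For reference, below is a sketch of this embedding initialization with the authors' `wechsel` package, following the usage example in the WECHSEL repository; the embedding and dictionary identifiers for Finnish are assumptions, not the exact commands we ran:
```python
import torch
from transformers import AutoModel, AutoTokenizer
from wechsel import WECHSEL, load_embeddings

source_tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModel.from_pretrained("roberta-large")

# The Finnish tokenizer trained for this model
target_tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/roberta-large-wechsel-finnish")

wechsel = WECHSEL(
    load_embeddings("en"),           # fastText embeddings for English
    load_embeddings("fi"),           # fastText embeddings for Finnish (identifier assumed)
    bilingual_dictionary="finnish",  # dictionary name assumed
)

target_embeddings, info = wechsel.apply(
    source_tokenizer,
    target_tokenizer,
    model.get_input_embeddings().weight.detach().numpy(),
)

model.get_input_embeddings().weight.data = torch.from_numpy(target_embeddings)
```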
Downstream text classification fine-tuning results can be found at the end of this card; there, this WECHSEL-trained model didn't significantly improve downstream performance. However, based on tens of qualitative fill-mask examples, we noticed that on the fill-mask task this WECHSEL model significantly outperforms our other models trained from scratch.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/roberta-large-wechsel-finnish')
>>> unmasker("Moikka olen <mask> kielimalli.")
[{'sequence': 'Moikka olen hyvä kielimalli.',
'score': 0.07757357507944107,
'token': 763,
'token_str': ' hyvä'},
{'sequence': 'Moikka olen suomen kielimalli.',
'score': 0.05297883599996567,
'token': 3641,
'token_str': ' suomen'},
{'sequence': 'Moikka olen kuin kielimalli.',
'score': 0.03747279942035675,
'token': 523,
'token_str': ' kuin'},
{'sequence': 'Moikka olen suomalainen kielimalli.',
'score': 0.031031042337417603,
'token': 4966,
'token_str': ' suomalainen'},
{'sequence': 'Moikka olen myös kielimalli.',
'score': 0.026489052921533585,
'token': 505,
'token_str': ' myös'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
model = RobertaModel.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish')
model = TFRobertaModel.from_pretrained('Finnish-NLP/roberta-large-wechsel-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training data
This Finnish RoBERTa model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
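The largest of these corpora, the cleaned Finnish mC4 subset, is available on the Hub; a small sketch for inspecting it with 🤗 Datasets is shown below (the `train` split name and streaming mode are assumptions, chosen to avoid downloading the whole corpus):
```python
from datasets import load_dataset

# Stream the first example instead of downloading the full corpus.
mc4_fi = load_dataset("Finnish-NLP/mc4_fi_cleaned", split="train", streaming=True)
print(next(iter(mc4_fi)))
```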
## Training procedure
### Preprocessing
The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).
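This dynamic 80/10/10 masking scheme can be illustrated with `DataCollatorForLanguageModeling`, which uses the same defaults; this is only a sketch of the masking behaviour, not the exact pretraining pipeline:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/roberta-large-wechsel-finnish")

# Each call re-samples which 15% of tokens are selected, with the default
# 80% <mask> / 10% random token / 10% unchanged replacement split.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)

examples = [tokenizer("Moikka olen kielimalli.")]
batch = collator(examples)
print(batch["input_ids"])  # some tokens replaced by <mask> or random tokens
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```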
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 270k steps (a bit over 1 epoch, batch size 512) with a sequence length of 128, continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). The learning rate was 2e-4 with \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), with a learning rate warmup for 2500 steps and linear decay of the learning rate after.
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Yle News classification fine-tuning was done with two different sequence lengths, 128 and 512, while Eduskunta was fine-tuned only with the 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model and to our previous [Finnish-NLP/roberta-large-finnish-v2](https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2) and [Finnish-NLP/roberta-large-finnish](https://huggingface.co/Finnish-NLP/roberta-large-finnish) models:
| | Average | Yle News 128 length | Yle News 512 length | Eduskunta 128 length |
|------------------------------------------|----------|---------------------|---------------------|----------------------|
|Finnish-NLP/roberta-large-wechsel-finnish |88.19 |**94.91** |95.18 |74.47 |
|Finnish-NLP/roberta-large-finnish-v2 |88.17 |94.46 |95.22 |74.83 |
|Finnish-NLP/roberta-large-finnish |88.02 |94.53 |95.23 |74.30 |
|TurkuNLP/bert-base-finnish-cased-v1 |**88.82** |94.90 |**95.49** |**76.07** |
To conclude, this model didn't significantly improve on our previous models, which were trained from scratch instead of using the WECHSEL method. This model also trails the [FinBERT (Finnish BERT)](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model slightly (~1%).
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Helsinki-NLP/opus-mt-ha-en | a42c55010565f550c110703534b61b32a37fda8c | 2021-09-09T21:59:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ha",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ha-en | 188 | null | transformers | 3,698 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ha-en
* source languages: ha
* target languages: en
* OPUS readme: [ha-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.eval.txt)
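A minimal usage sketch with the standard Marian classes in 🤗 Transformers is shown below (the input sentence is just a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ha-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Replace me by any Hausa text you'd like."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```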
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.en | 35.0 | 0.506 |
| Tatoeba.ha.en | 39.0 | 0.497 |
|
pki/t5-small-finetuned_xsum | 65e211b0e8840a485ac8cbfc0f49ba4b8988f6e7 | 2022-03-15T13:42:02.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | pki | null | pki/t5-small-finetuned_xsum | 188 | 1 | transformers | 3,699 | ---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned_xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 34.0559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned_xsum
This model is a fine-tuned version of [pki/t5-small-finetuned_xsum](https://huggingface.co/pki/t5-small-finetuned_xsum) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0479
- Rouge1: 34.0559
- Rouge2: 12.7506
- Rougel: 27.6762
- Rougelsum: 27.68
- Gen Len: 18.7924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
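As a sketch, these settings map onto `Seq2SeqTrainingArguments` roughly as follows; `output_dir`, `predict_with_generate`, and the per-device interpretation of the batch sizes are assumptions, since the original training script is not included here:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned_xsum",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,         # listed train_batch_size, assumed per device
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    predict_with_generate=True,             # assumed, needed for ROUGE at eval time
)
```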
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1176 | 1.0 | 12753 | 2.0913 | 33.1548 | 11.8434 | 26.7805 | 26.7751 | 18.7805 |
| 2.1019 | 2.0 | 25506 | 2.0875 | 33.231 | 11.9329 | 26.8674 | 26.861 | 18.7992 |
| 2.1044 | 3.0 | 38259 | 2.0846 | 33.3643 | 11.9807 | 26.9817 | 26.9764 | 18.773 |
| 2.0874 | 4.0 | 51012 | 2.0832 | 33.3562 | 12.0681 | 27.0178 | 27.0189 | 18.7988 |
| 2.0791 | 5.0 | 63765 | 2.0803 | 33.38 | 12.081 | 27.0368 | 27.0344 | 18.7844 |
| 2.0894 | 6.0 | 76518 | 2.0787 | 33.2549 | 11.9662 | 26.8674 | 26.8669 | 18.7975 |
| 2.0802 | 7.0 | 89271 | 2.0777 | 33.3978 | 12.0828 | 27.0461 | 27.0443 | 18.7757 |
| 2.0719 | 8.0 | 102024 | 2.0743 | 33.4083 | 12.1141 | 27.0523 | 27.0457 | 18.7928 |
| 2.0782 | 9.0 | 114777 | 2.0748 | 33.3673 | 12.1637 | 27.0696 | 27.0663 | 18.7902 |
| 2.0736 | 10.0 | 127530 | 2.0713 | 33.5771 | 12.2219 | 27.1707 | 27.1706 | 18.7945 |
| 2.0816 | 11.0 | 140283 | 2.0703 | 33.5099 | 12.2069 | 27.1822 | 27.1835 | 18.8002 |
| 2.057 | 12.0 | 153036 | 2.0693 | 33.5853 | 12.2427 | 27.2096 | 27.2109 | 18.806 |
| 2.0584 | 13.0 | 165789 | 2.0676 | 33.4883 | 12.2674 | 27.1582 | 27.154 | 18.7857 |
| 2.0475 | 14.0 | 178542 | 2.0662 | 33.5529 | 12.2765 | 27.1897 | 27.1901 | 18.79 |
| 2.0426 | 15.0 | 191295 | 2.0643 | 33.6543 | 12.3545 | 27.2946 | 27.2928 | 18.8036 |
| 2.0373 | 16.0 | 204048 | 2.0648 | 33.6671 | 12.349 | 27.2649 | 27.2707 | 18.7905 |
| 2.0178 | 17.0 | 216801 | 2.0637 | 33.6794 | 12.4545 | 27.3015 | 27.3079 | 18.7948 |
| 2.0235 | 18.0 | 229554 | 2.0626 | 33.7635 | 12.423 | 27.3475 | 27.3446 | 18.7892 |
| 2.0296 | 19.0 | 242307 | 2.0622 | 33.7574 | 12.4651 | 27.3879 | 27.3882 | 18.8134 |
| 2.0319 | 20.0 | 255060 | 2.0595 | 33.9093 | 12.5389 | 27.5003 | 27.5001 | 18.7915 |
| 2.0208 | 21.0 | 267813 | 2.0583 | 33.7875 | 12.4912 | 27.4243 | 27.4332 | 18.7982 |
| 2.0151 | 22.0 | 280566 | 2.0581 | 33.8516 | 12.4805 | 27.46 | 27.4647 | 18.816 |
| 2.0188 | 23.0 | 293319 | 2.0575 | 33.7744 | 12.4548 | 27.381 | 27.382 | 18.802 |
| 2.0087 | 24.0 | 306072 | 2.0579 | 33.8953 | 12.4984 | 27.4675 | 27.4727 | 18.7819 |
| 2.0186 | 25.0 | 318825 | 2.0557 | 33.7766 | 12.4414 | 27.4025 | 27.4024 | 18.8005 |
| 2.0051 | 26.0 | 331578 | 2.0555 | 33.8973 | 12.5796 | 27.5338 | 27.5339 | 18.8153 |
| 2.0024 | 27.0 | 344331 | 2.0557 | 33.8709 | 12.5116 | 27.4684 | 27.4664 | 18.7911 |
| 1.9947 | 28.0 | 357084 | 2.0545 | 33.8499 | 12.5242 | 27.4677 | 27.4716 | 18.8025 |
| 1.9931 | 29.0 | 369837 | 2.0545 | 33.7957 | 12.5272 | 27.4129 | 27.4174 | 18.8 |
| 1.9826 | 30.0 | 382590 | 2.0548 | 33.9723 | 12.6665 | 27.5598 | 27.5662 | 18.7958 |
| 1.999 | 31.0 | 395343 | 2.0522 | 33.9702 | 12.6435 | 27.5788 | 27.579 | 18.795 |
| 1.9872 | 32.0 | 408096 | 2.0525 | 33.9546 | 12.638 | 27.5985 | 27.5949 | 18.7976 |
| 1.991 | 33.0 | 420849 | 2.0520 | 33.9792 | 12.6073 | 27.5686 | 27.5707 | 18.8056 |
| 2.0044 | 34.0 | 433602 | 2.0504 | 34.0736 | 12.6511 | 27.647 | 27.6472 | 18.8093 |
| 1.9972 | 35.0 | 446355 | 2.0513 | 34.0506 | 12.711 | 27.6533 | 27.6537 | 18.7984 |
| 1.9901 | 36.0 | 459108 | 2.0504 | 33.9991 | 12.638 | 27.626 | 27.6272 | 18.7996 |
| 1.9742 | 37.0 | 471861 | 2.0507 | 33.9357 | 12.6636 | 27.5673 | 27.5716 | 18.8064 |
| 1.984 | 38.0 | 484614 | 2.0502 | 33.9476 | 12.6589 | 27.58 | 27.5813 | 18.8037 |
| 1.9864 | 39.0 | 497367 | 2.0499 | 34.0733 | 12.7198 | 27.6926 | 27.6992 | 18.8061 |
| 1.9734 | 40.0 | 510120 | 2.0492 | 33.9483 | 12.6486 | 27.5571 | 27.5598 | 18.8033 |
| 1.9895 | 41.0 | 522873 | 2.0490 | 33.9753 | 12.684 | 27.6058 | 27.6086 | 18.8011 |
| 1.964 | 42.0 | 535626 | 2.0487 | 33.9528 | 12.6376 | 27.576 | 27.5824 | 18.7919 |
| 1.9849 | 43.0 | 548379 | 2.0487 | 33.9868 | 12.6936 | 27.6116 | 27.6158 | 18.7966 |
| 1.9798 | 44.0 | 561132 | 2.0491 | 34.0379 | 12.7161 | 27.6227 | 27.6315 | 18.7889 |
| 1.9837 | 45.0 | 573885 | 2.0473 | 34.0046 | 12.6559 | 27.5931 | 27.5988 | 18.7996 |
| 1.9556 | 46.0 | 586638 | 2.0483 | 34.0378 | 12.712 | 27.6346 | 27.6446 | 18.7942 |
| 1.9844 | 47.0 | 599391 | 2.0479 | 34.0301 | 12.7121 | 27.6492 | 27.6554 | 18.7999 |
| 1.9869 | 48.0 | 612144 | 2.0474 | 34.0463 | 12.7151 | 27.6542 | 27.6604 | 18.7919 |
| 1.9851 | 49.0 | 624897 | 2.0476 | 34.0549 | 12.7384 | 27.6542 | 27.6555 | 18.7924 |
| 1.9912 | 50.0 | 637650 | 2.0479 | 34.0559 | 12.7506 | 27.6762 | 27.68 | 18.7924 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|