modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
beston91/gpt2-xl_ft_mult_25k | 299c428a79c44d3abac41da6785f17401ee10ee7 | 2022-03-27T17:02:18.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_mult_25k | 5 | null | transformers | 17,000 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_25k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_25k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5782
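A minimal usage sketch (an assumption that the checkpoint loads with the standard `transformers` text-generation pipeline; the prompt is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 XL checkpoint and generate a continuation.
generator = pipeline("text-generation", model="beston91/gpt2-xl_ft_mult_25k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```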
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 136 | 0.6434 |
| No log | 1.99 | 272 | 0.5941 |
| No log | 2.99 | 408 | 0.5811 |
| 1.1604 | 3.99 | 544 | 0.5782 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 47.53093719482422 |
ahmeddbahaa/mt5-small-finetuned-mt5-en | fe112242e8a3553368958fd314e798e763ec4581 | 2022-03-24T20:02:45.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/mt5-small-finetuned-mt5-en | 5 | null | transformers | 17,001 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-mt5-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: english
metrics:
- name: Rouge1
type: rouge
value: 23.8952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mt5-en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8345
- Rouge1: 23.8952
- Rouge2: 5.8792
- Rougel: 18.6495
- Rougelsum: 18.7057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| No log | 1.0 | 224 | 3.0150 | 24.4639 | 5.3016 | 18.3987 | 18.4963 |
| No log | 2.0 | 448 | 2.8738 | 24.5075 | 5.842 | 18.8133 | 18.9072 |
| No log | 3.0 | 672 | 2.8345 | 23.8952 | 5.8792 | 18.6495 | 18.7057 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
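A minimal summarization sketch (assuming the checkpoint works with the standard `transformers` summarization pipeline; the input article is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-small-finetuned-mt5-en")
article = "..."  # placeholder: an English news article, as in the XL-Sum training data
print(summarizer(article, max_length=64)[0]["summary_text"])
```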
|
hackathon-pln-es/class-poems-es | f4538cce6a98bd55e575169d3d0d8939ddcd716f | 2022-03-28T16:11:33.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/class-poems-es | 5 | 4 | transformers | 17,002 | ---
license: apache-2.0
tags:
- generated_from_trainer
widget:
- text: "El amor es una experiencia universal que nos conmueve a todos, pero a veces no hallamos las palabras adecuadas para expresarlo. A lo largo de la historia los poetas han sabido decir aquello que todos sentimos de formas creativas y elocuentes."
- text: "Había un hombre a quien la Pena nombraba su amigo, Y él, soñando con su gran camarada la Pena, Iba andando con paso lento por las arenas resplandecientes Y zumbantes, donde van oleajes ventosos: Y llamó en voz alta a las estrellas para que se inclinaran Desde sus pálidos tronos. y lo consuelan, pero entre ellos se ríen y cantan siempre: Y entonces el hombre a quien la Tristeza nombró su amigo Gritó, ¡Mar oscuro, escucha mi más lastimosa historia! El mar avanzaba y seguía gritando su viejo grito, rodando en sueños de colina en colina. Huyó de la persecución de su gloria Y, en un valle lejano y apacible deteniéndose, Gritó toda su historia a las gotas de rocío que brillan. Pero nada oyeron, porque siempre están escuchando, Las gotas de rocío, por el sonido de su propio goteo. Y entonces el hombre a quien Triste nombró su amigo Buscó una vez más la orilla, y encontró una concha, Y pensó: Contaré mi pesada historia Hasta que mis propias palabras, resonando, envíen Su tristeza a través de un corazón hueco y perlado; Y mi propia historia volverá a cantar para mí, Y mis propias palabras susurrantes serán de consuelo, ¡Y he aquí! mi antigua carga puede partir. Luego cantó suavemente cerca del borde nacarado; Pero el triste habitante de los caminos marítimos solitarios Cambió todo lo que cantaba en un gemido inarticulado Entre sus torbellinos salvajes, olvidándolo."
- text: "Ven, ven, muerte, Y en triste ciprés déjame descansar. Vuela lejos, vuela lejos, respira; Soy asesinado por una bella y cruel doncella. Mi sudario de blanco, pegado todo con tejo, ¡Oh, prepáralo! Mi parte de la muerte, nadie tan fiel la compartió. Ni una flor, ni una flor dulce, En mi ataúd negro que se desparrame. Ni un amigo, ni un amigo saludan Mi pobre cadáver, donde mis huesos serán arrojados. Mil mil suspiros para salvar, Acuéstame, oh, donde Triste amante verdadero nunca encuentre mi tumba, ¡Para llorar allí!"
metrics:
- accuracy
model-index:
- name: classification-poems
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification-poems
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the Spanish Poems dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8228
- Accuracy: 0.7241
## Model description
The model was trained to classify poems in Spanish, taking into account the content.
## Training and evaluation data
The original dataset has the columns author, content, title, year and type of poem.
Each example is labeled with the type of poem it belongs to; the model then predicts which type of poem the entered content belongs to.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9344 | 1.0 | 258 | 0.7505 | 0.7586 |
| 0.9239 | 2.0 | 516 | 0.8228 | 0.7241 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
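A minimal classification sketch (assuming the checkpoint works with the standard `transformers` text-classification pipeline; the example line is taken from the widget above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hackathon-pln-es/class-poems-es")
print(classifier("Ven, ven, muerte, y en triste ciprés déjame descansar."))
```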
|
j-hartmann/concreteness-english-distilroberta-base | b8c52ae8a72378a15322b57bb0888c9be9161683 | 2022-03-25T10:03:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | j-hartmann | null | j-hartmann/concreteness-english-distilroberta-base | 5 | null | transformers | 17,003 | "Concreteness evaluates the degree to which the concept denoted by a word refers to a perceptible entity." (Brysbaert, Warriner, and Kuperman 2014, p. 904) |
Jingya/t5-large-finetuned-xsum | 963264e08ba991e4201d10c14650ef1a880ef565 | 2022-03-25T16:15:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jingya | null | Jingya/t5-large-finetuned-xsum | 5 | null | transformers | 17,004 | Entry not found |
UWB-AIR/MQDD-duplicates | fb3f52f1c10cf2d44e8bf28bc55f34b4a193f450 | 2022-04-05T06:24:29.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"arxiv:2203.14093",
"transformers",
"license:cc-by-nc-sa-4.0"
] | feature-extraction | false | UWB-AIR | null | UWB-AIR/MQDD-duplicates | 5 | null | transformers | 17,005 | ---
license: cc-by-nc-sa-4.0
---
# MQDD - Multimodal Question Duplicity Detection
This repository publishes trained models and other supporting materials for the paper
[MQDD – Pre-training of Multimodal Question Duplicity Detection for Software Engineering Domain](https://arxiv.org/abs/2203.14093). For more information, see the paper.
The Stack Overflow Datasets (SOD) and Stack Overflow Duplicity Dataset (SODD) presented in the paper can be obtained from our [Stack Overflow Dataset repository](https://github.com/kiv-air/StackOverflowDataset).
To acquire the pre-trained model only, see the [UWB-AIR/MQDD-pretrained](https://huggingface.co/UWB-AIR/MQDD-pretrained).
## Fine-tuned MQDD
We release a fine-tuned version of our MQDD model for the duplicate detection task. The model follows a two-tower architecture, as depicted in the figure below:
<img src="https://raw.githubusercontent.com/kiv-air/MQDD/master/img/architecture.png" width="700">
A standalone encoder without a duplicate detection head can be loaded using the following source code snippet. Such a model can be used to build search systems based, for example, on the [Faiss](https://github.com/facebookresearch/faiss) library.
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("UWB-AIR/MQDD-duplicates")
model = AutoModel.from_pretrained("UWB-AIR/MQDD-duplicates")
```
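As an illustration, question embeddings for such a search index could be derived from the encoder output (mean pooling here is an illustrative choice, not necessarily the pooling used in the paper):
```Python
inputs = tokenizer("How do I merge two dictionaries in Python?", return_tensors="pt")
outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)  # (1, hidden_size) question embedding
```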
A checkpoint of the full two-tower model can then be obtained from our [GoogleDrive folder](https://drive.google.com/drive/folders/1CYiqF2GJ2fSQzx_oM4-X_IhpObi4af5Q?usp=sharing). To load the model, one needs to use the model's implementation from `models/MQDD_model.py` in our [GitHub repository](https://github.com/kiv-air/MQDD). To construct the model and load its checkpoint, use the following source code:
```Python
import torch
from MQDD_model import ClsHeadModelMQDD
model = ClsHeadModelMQDD("UWB-AIR/MQDD-duplicates")
ckpt = torch.load("model.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state"])
```
## Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/
## How should I cite the MQDD?
For now, please cite [the Arxiv paper](https://arxiv.org/abs/2203.14093):
```
@misc{https://doi.org/10.48550/arxiv.2203.14093,
doi = {10.48550/ARXIV.2203.14093},
url = {https://arxiv.org/abs/2203.14093},
author = {Pašek, Jan and Sido, Jakub and Konopík, Miloslav and Pražák, Ondřej},
title = {MQDD -- Pre-training of Multimodal Question Duplicity Detection for Software Engineering Domain},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
nairaxo/dev-multi | 470d815aaaa28b7a5738d1c954d11605fd5a849c | 2022-07-11T11:16:51.000Z | [
"wav2vec2",
"feature-extraction",
"multilingual",
"dataset:commonvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | nairaxo | null | nairaxo/dev-multi | 5 | null | speechbrain | 17,006 | ---
language: multilingual
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
# wav2vec 2.0 with CTC/Attention trained on multilingual African dataset
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on a multilingual African dataset. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| Dataset Link | Language | Test WER |
|:-----------------:| -----:| -----:|
| [DVoice](https://zenodo.org/record/6342622) | Darija | 13.27 |
| [DVoice/VoxLingua107](https://zenodo.org/record/6342622) + [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) | Swahili | 29.31 |
| [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) | Fongbe | 10.26 |
| [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) | Wolof | 21.54 |
| [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) | Amharic | 31.15 |
# About DVoice
DVoice is a community initiative that aims to provide African languages and dialects with data and models to facilitate the use of voice technologies. The lack of data for these languages makes it necessary to collect data using methods that are specific to each of them. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling the recordings. The DVoice platform currently manages seven languages, including Darija (a Moroccan Arabic dialect) whose dataset appears in this version, as well as Wolof, Mandingo, Serere, Pular, Diola, and Soninke.
## Pipeline description
This ASR system is composed of two different but linked blocks:
- A unigram tokenizer that transforms words into subword units and is trained on the training transcriptions.
- An acoustic model (wav2vec 2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Darija dataset.
The final acoustic representation is passed to a CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="nairaxo/dvoice-multilingual", savedir="pretrained_models/asr-wav2vec2-dvoice-multi")
asr_model.transcribe_file('./the_path_to_your_audio_file')
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
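For example:
```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="nairaxo/dvoice-multilingual",
    savedir="pretrained_models/asr-wav2vec2-dvoice-multi",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```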
### Training
To train the model from scratch, please see our GitHub tutorial [here](https://github.com/AIOXLABS/DVoice).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain |
emre/java-RoBERTa-Tara-small | 855a899d8860e0c60fed0d14ba9e4da9c5354d8b | 2022-03-27T21:19:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"java",
"code",
"dataset:code_search_net",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | emre | null | emre/java-RoBERTa-Tara-small | 5 | 1 | transformers | 17,007 | ---
language:
- java
- code
license: apache-2.0
datasets:
- code_search_net
widget:
- text: 'public <mask> isOdd(Integer num){if (num % 2 == 0) {return "even";} else {return "odd";}}'
---
## JavaRoBERTa-Tara
A RoBERTa model pretrained on Java source code from the code_search_net dataset.
### Training Data
The model was trained on 10,223,695 Java files retrieved from open source projects on GitHub.
### Training Objective
An MLM (masked language modeling) objective was used to train this model.
### Usage
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model='emre/java-RoBERTa-Tara-small')
code = 'public <mask> isOdd(Integer num){if (num % 2 == 0) {return "even";} else {return "odd";}}'  # example Java snippet from the widget above
output = pipe(code)  # use '<mask>' to mask tokens/words in the code
print(output)
```
### Why Tara?
She is the name of my little baby girl :) |
PaddyP/distilbert-base-uncased-finetuned-emotion | 55a5ac830de049e65cb30f645d95aebfba81eeb5 | 2022-03-27T07:06:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | PaddyP | null | PaddyP/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 17,008 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.922
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3344 | 0.903 | 0.9004 |
| No log | 2.0 | 500 | 0.2302 | 0.922 | 0.9218 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
ScandinavianMrT/distilbert_ONION_1epoch_3.0 | 8f66a1d6c168fa151bddca8f3c42b9b06b2fa757 | 2022-03-27T08:00:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert_ONION_1epoch_3.0 | 5 | null | transformers | 17,009 | Entry not found |
hackathon-pln-es/wav2vec2-base-finetuned-sentiment-mesd | 7127d605e834e22c1cadc70a85f930c94c6e548b | 2022-04-04T02:40:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | hackathon-pln-es | null | hackathon-pln-es/wav2vec2-base-finetuned-sentiment-mesd | 5 | 4 | transformers | 17,010 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd
results: []
---
# wav2vec2-base-finetuned-sentiment-mesd
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [MESD](https://huggingface.co/hackathon-pln-es/MESD) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5729
- Accuracy: 0.8308
## Model description
This model was trained to classify the underlying sentiment of Spanish audio/speech.
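A minimal usage sketch (assuming the checkpoint works with the `transformers` audio-classification pipeline; the file path is hypothetical and should point to a 16 kHz Spanish speech clip):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="hackathon-pln-es/wav2vec2-base-finetuned-sentiment-mesd")
print(classifier("speech_sample.wav"))  # hypothetical path to an audio file
```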
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.5729 | 0.8308 |
| No log | 2.0 | 14 | 0.6577 | 0.8 |
| 0.1602 | 3.0 | 21 | 0.7055 | 0.8 |
| 0.1602 | 4.0 | 28 | 0.8696 | 0.7615 |
| 0.1602 | 5.0 | 35 | 0.6807 | 0.7923 |
| 0.1711 | 6.0 | 42 | 0.7303 | 0.7923 |
| 0.1711 | 7.0 | 49 | 0.7028 | 0.8077 |
| 0.1711 | 8.0 | 56 | 0.7368 | 0.8 |
| 0.1608 | 9.0 | 63 | 0.7190 | 0.7923 |
| 0.1608 | 10.0 | 70 | 0.6913 | 0.8077 |
| 0.1608 | 11.0 | 77 | 0.7047 | 0.8077 |
| 0.1753 | 12.0 | 84 | 0.6801 | 0.8 |
| 0.1753 | 13.0 | 91 | 0.7208 | 0.7769 |
| 0.1753 | 14.0 | 98 | 0.7458 | 0.7846 |
| 0.203 | 15.0 | 105 | 0.6494 | 0.8077 |
| 0.203 | 16.0 | 112 | 0.6256 | 0.8231 |
| 0.203 | 17.0 | 119 | 0.6788 | 0.8 |
| 0.1919 | 18.0 | 126 | 0.6757 | 0.7846 |
| 0.1919 | 19.0 | 133 | 0.6859 | 0.7846 |
| 0.1641 | 20.0 | 140 | 0.6832 | 0.7846 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
|
Jiexing/sparc_relation_t5_3b-2432 | b144117ea15f3f88f1f0e20acc860179a66c86dd | 2022-03-27T14:38:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jiexing | null | Jiexing/sparc_relation_t5_3b-2432 | 5 | null | transformers | 17,011 | Entry not found |
tau/pegasus_4_1024_0.3_epoch1 | d57c5bc093fc4ceab75c58da7d78c05734c2c1c7 | 2022-03-28T04:32:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/pegasus_4_1024_0.3_epoch1 | 5 | null | transformers | 17,012 | Entry not found |
GleamEyeBeast/ascend | 52c68be2caaaed0f907e2f0aa23db8f15f47198b | 2022-03-29T16:49:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | GleamEyeBeast | null | GleamEyeBeast/ascend | 5 | null | transformers | 17,013 | ---
tags:
- generated_from_trainer
model-index:
- name: ascend
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend
This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3718
- Wer: 0.6412
- Cer: 0.2428
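A minimal transcription sketch (assuming the checkpoint works with the standard `transformers` automatic-speech-recognition pipeline; the file path is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="GleamEyeBeast/ascend")
print(asr("sample.wav")["text"])  # hypothetical path to an audio clip
```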
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.5769 | 1.0 | 688 | 1.1864 | 0.7716 | 0.3159 |
| 0.5215 | 2.0 | 1376 | 1.1613 | 0.7504 | 0.2965 |
| 0.4188 | 3.0 | 2064 | 1.1644 | 0.7389 | 0.2950 |
| 0.3695 | 4.0 | 2752 | 1.1937 | 0.7184 | 0.2815 |
| 0.3404 | 5.0 | 3440 | 1.1947 | 0.7083 | 0.2719 |
| 0.2885 | 6.0 | 4128 | 1.2314 | 0.7108 | 0.2685 |
| 0.2727 | 7.0 | 4816 | 1.2243 | 0.6850 | 0.2616 |
| 0.2417 | 8.0 | 5504 | 1.2506 | 0.6767 | 0.2608 |
| 0.2207 | 9.0 | 6192 | 1.2804 | 0.6922 | 0.2595 |
| 0.2195 | 10.0 | 6880 | 1.2582 | 0.6818 | 0.2575 |
| 0.1896 | 11.0 | 7568 | 1.3101 | 0.6814 | 0.2545 |
| 0.1961 | 12.0 | 8256 | 1.2793 | 0.6706 | 0.2526 |
| 0.1752 | 13.0 | 8944 | 1.2643 | 0.6584 | 0.2509 |
| 0.1638 | 14.0 | 9632 | 1.3152 | 0.6588 | 0.2482 |
| 0.1522 | 15.0 | 10320 | 1.3098 | 0.6433 | 0.2439 |
| 0.1351 | 16.0 | 11008 | 1.3253 | 0.6537 | 0.2447 |
| 0.1266 | 17.0 | 11696 | 1.3394 | 0.6365 | 0.2418 |
| 0.1289 | 18.0 | 12384 | 1.3718 | 0.6412 | 0.2443 |
| 0.1204 | 19.0 | 13072 | 1.3708 | 0.6433 | 0.2433 |
| 0.1189 | 20.0 | 13760 | 1.3718 | 0.6412 | 0.2428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
abdusahmbzuai/aradia-class-v1 | c284997a458940783dba46514655310bc2226f71 | 2022-04-04T10:01:57.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | abdusahmbzuai | null | abdusahmbzuai/aradia-class-v1 | 5 | null | transformers | 17,014 | Entry not found |
Cheatham/xlm-roberta-large-finetuned-d1-003 | 7892085ce6e8a9763a6c238da6436a65aa260014 | 2022-03-30T15:15:42.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d1-003 | 5 | null | transformers | 17,015 | Entry not found |
hackathon-pln-es/readability-es-3class-sentences | 0ec8dc6118476746de1e99fc61790f5d18ad6404 | 2022-04-04T10:41:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"transformers",
"spanish",
"bertin",
"license:cc-by-4.0"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/readability-es-3class-sentences | 5 | 2 | transformers | 17,016 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- bertin
pipeline_tag: text-classification
widget:
- text: Las Líneas de Nazca son una serie de marcas trazadas en el suelo, cuya anchura oscila entre los 40 y los 110 centímetros.
- text: Hace mucho tiempo, en el gran océano que baña las costas del Perú no había peces.
---
# Readability ES Sentences for three classes
A model based on the RoBERTa architecture, finetuned from [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts.
## Description and performance
This version of the model was trained on a mix of datasets, using sentence-level granularity when possible. The model performs classification among three complexity levels:
- Basic.
- Intermediate.
- Advanced.
The relationship of these categories with the Common European Framework of Reference for Languages is described in [our report](https://wandb.ai/readability-es/readability-es/reports/Texts-Readability-Analysis-for-Spanish--VmlldzoxNzU2MDUx).
This model achieves a macro-averaged F1 score of 0.6951, measured on the validation set.
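A minimal usage sketch (assuming the checkpoint works with the standard `transformers` text-classification pipeline; the example sentence is taken from the widget above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hackathon-pln-es/readability-es-3class-sentences")
print(classifier("Hace mucho tiempo, en el gran océano que baña las costas del Perú no había peces."))
```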
## Model variants
- [`readability-es-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-sentences). Two classes, sentence-based dataset.
- [`readability-es-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-paragraphs). Two classes, paragraph-based dataset.
- `readability-es-3class-sentences` (this model). Three classes, sentence-based dataset.
- [`readability-es-3class-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-3class-paragraphs). Three classes, paragraph-based dataset.
## Datasets
- [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of:
* coh-metrix-esp corpus.
* Various text resources scraped from websites.
- Other non-public datasets: newsela-es, simplext.
## Training details
Please, refer to [this training run](https://wandb.ai/readability-es/readability-es/runs/1qe3kbqj/overview) for full details on hyperparameters and training regime.
## Biases and Limitations
- Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set.
- One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases.
- Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes.
- Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be sub-represented.
- No effort has been performed to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#bias-examples-spanish).
## Authors
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
Tahsin-Mayeesha/distilbert-finetuned-fakenews | 3e92f83efc6827517f030b648da21b9fceb2b2c3 | 2022-03-31T17:11:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Tahsin-Mayeesha | null | Tahsin-Mayeesha/distilbert-finetuned-fakenews | 5 | null | transformers | 17,017 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-finetuned-fakenews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-fakenews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0049
- Accuracy: 0.9995
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0392 | 1.0 | 500 | 0.0059 | 0.999 | 0.999 |
| 0.002 | 2.0 | 1000 | 0.0047 | 0.9995 | 0.9995 |
| 0.0001 | 3.0 | 1500 | 0.0047 | 0.9995 | 0.9995 |
| 0.0001 | 4.0 | 2000 | 0.0049 | 0.9995 | 0.9995 |
| 0.0 | 5.0 | 2500 | 0.0049 | 0.9995 | 0.9995 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
redwoodresearch/injuriousness-classifier-29apr-manual | f8b86553c0239c772bcf977d6ad0544ddab3ab06 | 2022-03-31T17:28:22.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | redwoodresearch | null | redwoodresearch/injuriousness-classifier-29apr-manual | 5 | null | transformers | 17,018 | Entry not found |
redwoodresearch/injuriousness-classifier-29apr-paraphrases | 8f9ccede7579beff67893f01fb865a483765f0d5 | 2022-03-31T17:32:36.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | redwoodresearch | null | redwoodresearch/injuriousness-classifier-29apr-paraphrases | 5 | null | transformers | 17,019 | Entry not found |
Aymene/Fake-news-detection-bert-based-uncased | 5857d13433586740c21238361937bf53920a5667 | 2022-04-02T02:42:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Aymene | null | Aymene/Fake-news-detection-bert-based-uncased | 5 | null | transformers | 17,020 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Fake-news-detection-bert-based-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fake-news-detection-bert-based-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
RupaliP/wikineural-multilingual-ner | e636179e65ea9c5a0d7aa0d7e8d2c2260a5a0787 | 2022-04-11T11:54:43.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | RupaliP | null | RupaliP/wikineural-multilingual-ner | 5 | null | transformers | 17,021 | Entry not found |
mp6kv/paper_feedback_intent | fa9268761b8245186eb34356ae3a282d34036e09 | 2022-04-02T21:42:38.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mp6kv | null | mp6kv/paper_feedback_intent | 5 | null | transformers | 17,022 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: paper_feedback_intent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paper_feedback_intent
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
- Accuracy: 0.9302
- Precision: 0.9307
- Recall: 0.9302
- F1: 0.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9174 | 1.0 | 11 | 0.7054 | 0.7907 | 0.7903 | 0.7907 | 0.7861 |
| 0.6917 | 2.0 | 22 | 0.4665 | 0.8140 | 0.8134 | 0.8140 | 0.8118 |
| 0.4276 | 3.0 | 33 | 0.3326 | 0.9070 | 0.9065 | 0.9070 | 0.9041 |
| 0.2656 | 4.0 | 44 | 0.3286 | 0.9070 | 0.9065 | 0.9070 | 0.9041 |
| 0.1611 | 5.0 | 55 | 0.3044 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.1025 | 6.0 | 66 | 0.3227 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0799 | 7.0 | 77 | 0.3216 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0761 | 8.0 | 88 | 0.3529 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0479 | 9.0 | 99 | 0.3605 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0358 | 10.0 | 110 | 0.3621 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Prinernian/distilbert-base-uncased-finetuned-emotion | 270938273f7371185bd4d7c28617ccea6d4ca9d7 | 2022-04-03T09:11:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Prinernian | null | Prinernian/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 17,023 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8538 | 1.0 | 250 | 0.3317 | 0.904 | 0.8999 |
| 0.2599 | 2.0 | 500 | 0.2208 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
Zohar/distilgpt2-finetuned-restaurant-reviews-clean | 192775c4bf12398beab31035671e61106ae22a4c | 2022-04-03T10:29:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Zohar | null | Zohar/distilgpt2-finetuned-restaurant-reviews-clean | 5 | null | transformers | 17,024 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-restaurant-reviews-clean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-restaurant-reviews-clean
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7221 | 1.0 | 2447 | 3.5979 |
| 3.6413 | 2.0 | 4894 | 3.5505 |
| 3.6076 | 3.0 | 7341 | 3.5371 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
|
JB173/distilbert-base-uncased-finetuned-emotion | 17bd8e4b6269181c9e3fe79059a08d847cdb0b77 | 2022-04-03T15:27:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | JB173 | null | JB173/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 17,025 | Entry not found |
LeBenchmark/wav2vec-FR-1K-Female-large | 6a969b5a94bbaa0fa951670954e7993f8e7c33e4 | 2022-05-11T09:22:44.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2204.01397",
"transformers",
"license:apache-2.0"
] | null | false | LeBenchmark | null | LeBenchmark/wav2vec-FR-1K-Female-large | 5 | null | transformers | 17,026 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French *female-only* speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech.
For more information about our gender study for SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1k-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
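A minimal feature-extraction sketch (an assumption: the checkpoint loads as a standard `transformers` `Wav2Vec2Model`; the input waveform is a placeholder for 16 kHz French speech):
```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec-FR-1K-Female-large")
waveform = torch.zeros(1, 16000)  # placeholder: 1 second of 16 kHz audio
features = model(input_values=waveform).last_hidden_state  # (batch, frames, hidden_size)
```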
## Referencing our gender-specific models
```
@article{boito2022study,
title={A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems},
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Est{\`e}ve},
journal={arXiv preprint arXiv:2204.01397},
year={2022}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
``` |
microsoft/cvt-21-384 | d73b0f45502a83c5378535ee7ec9b3379de0a8bc | 2022-05-18T16:11:18.000Z | [
"pytorch",
"cvt",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.15808",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/cvt-21-384 | 5 | null | transformers | 17,027 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Convolutional Vision Transformer (CvT)
CvT-21 model pre-trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT).
Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-21-384')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-21-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
|
dapang/distilbert-base-uncased-finetuned-moral-action | f270b3aa0c60aae13ed94f2c5ba0cb30472011fc | 2022-04-05T03:21:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dapang | null | dapang/distilbert-base-uncased-finetuned-moral-action | 5 | null | transformers | 17,028 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-moral-action
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-moral-action
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.7912
- F1: 0.7912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.716387809233253e-05
- train_batch_size: 2000
- eval_batch_size: 2000
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 10 | 0.5406 | 0.742 | 0.7399 |
| No log | 2.0 | 20 | 0.4810 | 0.7628 | 0.7616 |
| No log | 3.0 | 30 | 0.4649 | 0.786 | 0.7856 |
| No log | 4.0 | 40 | 0.4600 | 0.7916 | 0.7916 |
| No log | 5.0 | 50 | 0.4632 | 0.7912 | 0.7912 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
|
dennishe97/longformer-code-mlm | f2e90ff8aea84dac2042aad0e6da1659bbb873f1 | 2022-04-05T05:45:16.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dennishe97 | null | dennishe97/longformer-code-mlm | 5 | null | transformers | 17,029 | Entry not found |
Seethal/Distilbert-base-uncased-fine-tuned-service-bc | 6a16656fa7fc113f0e95c84f757f92235eb26d79 | 2022-04-05T16:16:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Seethal | null | Seethal/Distilbert-base-uncased-fine-tuned-service-bc | 5 | null | transformers | 17,030 | # Sentiment analysis model |
Kuray107/ls-timit-100percent-supervised-aug | 986164050266757977fdca09c9cf452d8687358a | 2022-04-05T20:18:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/ls-timit-100percent-supervised-aug | 5 | null | transformers | 17,031 | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-100percent-supervised-aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-100percent-supervised-aug
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0519
- Wer: 0.0292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2985 | 7.04 | 1000 | 0.0556 | 0.0380 |
| 0.1718 | 14.08 | 2000 | 0.0519 | 0.0292 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Stremie/roberta-base-clickbait | ddc52807c0bb3f6c524ddb5c59e9e80d098d1372 | 2022-04-18T12:51:37.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Stremie | null | Stremie/roberta-base-clickbait | 5 | null | transformers | 17,032 | This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText'. Achieved ~0.7 F1-score on test data. |
ICLbioengNLP/CXR_BioClinicalBERT_MLM | 7aa68734249bc8e6785caf8095ab7ca86894101b | 2022-04-06T19:53:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ICLbioengNLP | null | ICLbioengNLP/CXR_BioClinicalBERT_MLM | 5 | null | transformers | 17,033 | Entry not found |
Sleoruiz/distilbert-base-uncased-finetuned-cola | 142948e2a2c09e34a65732de24437526bd226c84 | 2022-04-07T13:15:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Sleoruiz | null | Sleoruiz/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 17,034 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5396261051709696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7663
- Matthews Correlation: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5281 | 1.0 | 535 | 0.5268 | 0.4071 |
| 0.3503 | 2.0 | 1070 | 0.5074 | 0.5126 |
| 0.2399 | 3.0 | 1605 | 0.6440 | 0.4977 |
| 0.1807 | 4.0 | 2140 | 0.7663 | 0.5396 |
| 0.1299 | 5.0 | 2675 | 0.8786 | 0.5192 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
birgermoell/psst-fairseq-common-voice | c9772d2df0d10cc540a60b2dbee7f5dfa1da89c3 | 2022-04-07T08:30:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-fairseq-common-voice | 5 | null | transformers | 17,035 | Entry not found |
AmanPriyanshu/fake-news-detector | 19949c17e66e05292f23178662d0c8d3a16390cb | 2022-04-07T13:17:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | AmanPriyanshu | null | AmanPriyanshu/fake-news-detector | 5 | null | transformers | 17,036 | Entry not found |
arijitx/IndicBART-bn-QuestionGeneration | 0892b45936e46c0d711b119e8d753b61b1fb2ec0 | 2022-04-07T14:24:09.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"bn",
"arxiv:2203.05437",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | arijitx | null | arijitx/IndicBART-bn-QuestionGeneration | 5 | null | transformers | 17,037 | ---
license: mit
language:
- bn
tags:
- text2text-generation
widget:
- text: "১৮৯৭ খ্রিষ্টাব্দের ২৩ জানুয়ারি [SEP] সুভাষ ১৮৯৭ খ্রিষ্টাব্দের ২৩ জানুয়ারি ব্রিটিশ ভারতের অন্তর্গত বাংলা প্রদেশের উড়িষ্যা বিভাগের কটকে জন্মগ্রহণ করেন। </s> <2bn>"
---
## Intro
Trained on the IndicNLG Suite [IndicQuestionGeneration](https://huggingface.co/datasets/ai4bharat/IndicQuestionGeneration) data for Bengali, this model is finetuned from [IndicBART](https://huggingface.co/ai4bharat/IndicBART).
## Finetuning Command
```
python run_summarization.py --model_name_or_path bnQG_models/checkpoint-32000 --do_eval \
    --train_file train_bn.json --validation_file valid_bn.json --output_dir bnQG_models \
    --overwrite_output_dir --per_device_train_batch_size=2 --per_device_eval_batch_size=4 \
    --predict_with_generate --text_column src --summary_column tgt --save_steps 4000 \
    --evaluation_strategy steps --gradient_accumulation_steps 4 --eval_steps 1000 \
    --learning_rate 0.001 --num_beams 4 --forced_bos_token "<2bn>" --num_train_epochs 10 \
    --warmup_steps 10000
```
## Sample Line from train data
{"src": "प्राणबादी [SEP] अर्थाॎ, तिनि छिलेन एकजन सर्बप्राणबादी। </s> <2bn>", "tgt": "<2bn> कोन दार्शनिक दृष्टिभङ्गि ओय़ाइटजेर छिल? </s>"}
## Inference
script = "সুভাষ ১৮৯৭ খ্রিষ্টাব্দের ২৩ জানুয়ারি ব্রিটিশ ভারতের অন্তর্গত বাংলা প্রদেশের উড়িষ্যা বিভাগের (অধুনা, ভারতের ওড়িশা রাজ্য) কটকে জন্মগ্রহণ করেন।"
answer = "১৮৯৭ খ্রিষ্টাব্দের ২৩ জানুয়ারি"
inp = answer +" [SEP] "+script + " </s> <2bn>"
inp_tok = tokenizer(inp, add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model.eval() # Set dropouts to zero
model_output=model.generate(inp_tok, use_cache=True,
num_beams=4,
max_length=20,
min_length=1,
early_stopping=True,
pad_token_id=pad_id,
bos_token_id=bos_id,
eos_token_id=eos_id,
decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2bn>")
)
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
## Citations
```
@inproceedings{dabre2021indicbart,
    title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
    author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
    year={2022},
    booktitle={Findings of the Association for Computational Linguistics},
}

@misc{kumar2022indicnlg,
    title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
    author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
    year={2022},
    eprint={2203.05437},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
|
mdroth/distilbert-base-uncased-finetuned-ner | 933d196076eebdd35a9b7d17c6613c778adb95f2 | 2022-07-13T23:40:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mdroth | null | mdroth/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 17,038 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9299878143347735
- name: Recall
type: recall
value: 0.9391430808815304
- name: F1
type: f1
value: 0.93454302571524
- name: Accuracy
type: accuracy
value: 0.9841453921553053
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0635
- Precision: 0.9300
- Recall: 0.9391
- F1: 0.9345
- Accuracy: 0.9841
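A minimal inference sketch (the example sentence is invented, and `aggregation_strategy` is a typical choice rather than something this card specifies):
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="mdroth/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```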
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0886 | 1.0 | 1756 | 0.0676 | 0.9198 | 0.9233 | 0.9215 | 0.9809 |
| 0.0382 | 2.0 | 3512 | 0.0605 | 0.9271 | 0.9360 | 0.9315 | 0.9836 |
| 0.0247 | 3.0 | 5268 | 0.0635 | 0.9300 | 0.9391 | 0.9345 | 0.9841 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
btjiong/robbert-twitter-sentiment-custom | 658316088e2cf690b86b3b619ee678967c486a56 | 2022-04-08T08:17:25.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:dutch_social",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | btjiong | null | btjiong/robbert-twitter-sentiment-custom | 5 | null | transformers | 17,039 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- dutch_social
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: robbert-twitter-sentiment-custom
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dutch_social
type: dutch_social
args: dutch_social
metrics:
- name: Accuracy
type: accuracy
value: 0.788
- name: F1
type: f1
value: 0.7878005279207152
- name: Precision
type: precision
value: 0.7877102066609215
- name: Recall
type: recall
value: 0.788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert-twitter-sentiment-custom
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the dutch_social dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6656
- Accuracy: 0.788
- F1: 0.7878
- Precision: 0.7877
- Recall: 0.788
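A minimal inference sketch (the Dutch example sentence is invented; the label names come from the fine-tuned config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="btjiong/robbert-twitter-sentiment-custom",
)
# Dutch input, matching the dutch_social training data
print(classifier("Wat een prachtige dag, ik ben zo blij!"))
```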
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8287 | 1.0 | 282 | 0.7178 | 0.7007 | 0.6958 | 0.6973 | 0.7007 |
| 0.4339 | 2.0 | 564 | 0.5873 | 0.7667 | 0.7668 | 0.7681 | 0.7667 |
| 0.2045 | 3.0 | 846 | 0.6656 | 0.788 | 0.7878 | 0.7877 | 0.788 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Cheltone/BERT_Base_Finetuned_C19Vax | 66059e96dc03dda609da7fd5cec5ce86019e252c | 2022-04-08T10:55:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Cheltone | null | Cheltone/BERT_Base_Finetuned_C19Vax | 5 | null | transformers | 17,040 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- accuracy
- f1
model-index:
- name: Bert_Test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_Test
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1965
- Precision: 0.9332
- Accuracy: 0.9223
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:------:|
| 0.6717 | 0.4 | 500 | 0.6049 | 0.7711 | 0.6743 | 0.6112 |
| 0.5704 | 0.8 | 1000 | 0.5299 | 0.7664 | 0.7187 | 0.6964 |
| 0.52 | 1.2 | 1500 | 0.4866 | 0.7698 | 0.7537 | 0.7503 |
| 0.4792 | 1.6 | 2000 | 0.4292 | 0.8031 | 0.793 | 0.7927 |
| 0.4332 | 2.0 | 2500 | 0.3920 | 0.8318 | 0.8203 | 0.8198 |
| 0.381 | 2.4 | 3000 | 0.3723 | 0.9023 | 0.8267 | 0.8113 |
| 0.3625 | 2.8 | 3500 | 0.3134 | 0.8736 | 0.8607 | 0.8601 |
| 0.3325 | 3.2 | 4000 | 0.2924 | 0.8973 | 0.871 | 0.8683 |
| 0.3069 | 3.6 | 4500 | 0.2671 | 0.8916 | 0.8847 | 0.8851 |
| 0.2866 | 4.0 | 5000 | 0.2571 | 0.8920 | 0.8913 | 0.8926 |
| 0.2595 | 4.4 | 5500 | 0.2450 | 0.8980 | 0.9 | 0.9015 |
| 0.2567 | 4.8 | 6000 | 0.2246 | 0.9057 | 0.9043 | 0.9054 |
| 0.2255 | 5.2 | 6500 | 0.2263 | 0.9332 | 0.905 | 0.9030 |
| 0.2237 | 5.6 | 7000 | 0.2083 | 0.9265 | 0.9157 | 0.9156 |
| 0.2248 | 6.0 | 7500 | 0.2039 | 0.9387 | 0.9193 | 0.9185 |
| 0.2086 | 6.4 | 8000 | 0.2038 | 0.9436 | 0.9193 | 0.9181 |
| 0.2029 | 6.8 | 8500 | 0.1965 | 0.9332 | 0.9223 | 0.9223 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Splend1dchan/canine-c-squad | 3106343c2dfdea56198087ed9ff582a60e344892 | 2022-04-08T14:42:24.000Z | [
"pytorch",
"canine",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Splend1dchan | null | Splend1dchan/canine-c-squad | 5 | null | transformers | 17,041 | Fine-tuning command:
```bash
python run_squad.py \
  --model_name_or_path google/canine-c \
  --do_train \
  --do_eval \
  --per_gpu_train_batch_size 1 \
  --per_gpu_eval_batch_size 1 \
  --gradient_accumulation_steps 128 \
  --learning_rate 3e-5 \
  --num_train_epochs 3 \
  --max_seq_length 1024 \
  --doc_stride 128 \
  --max_answer_length 240 \
  --output_dir canine-c-squad \
  --model_type bert
```
Model config:
```json
{
  "_name_or_path": "google/canine-c",
  "architectures": [
    "CanineForQuestionAnswering"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 57344,
  "downsampling_rate": 4,
  "eos_token_id": 57345,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "local_transformer_stride": 128,
  "max_position_embeddings": 16384,
  "model_type": "canine",
  "num_attention_heads": 12,
  "num_hash_buckets": 16384,
  "num_hash_functions": 8,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "torch_dtype": "float32",
  "transformers_version": "4.19.0.dev0",
  "type_vocab_size": 16,
  "upsampling_kernel_size": 4,
  "use_cache": true
}
```
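A minimal inference sketch for the resulting checkpoint (the question/context pair is invented; loading with `CanineForQuestionAnswering` follows the config above, and note that CANINE predicts character-level spans):
```python
import torch
from transformers import CanineTokenizer, CanineForQuestionAnswering

tokenizer = CanineTokenizer.from_pretrained("Splend1dchan/canine-c-squad")
model = CanineForQuestionAnswering.from_pretrained("Splend1dchan/canine-c-squad")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare sometime between 1599 and 1601."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# CANINE is character-level, so start/end logits index characters directly
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```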
SQuAD dev results: {'exact': 58.893093661305585, 'f1': 72.18823344945899} |
Eugen/distilbert-base-uncased-finetuned-stsb | 89fb267f4ab52276278c66ef0a6b4f1b4938fd27 | 2022-04-08T20:00:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Eugen | null | Eugen/distilbert-base-uncased-finetuned-stsb | 5 | null | transformers | 17,042 | Entry not found |
caush/TestMeanFraction2 | 953cbc5ee6c1024ab8d9e8b0550b607f6d2022e9 | 2022-04-08T17:51:14.000Z | [
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | caush | null | caush/TestMeanFraction2 | 5 | null | transformers | 17,043 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: TestMeanFraction2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestMeanFraction2
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3967
- Matthews Correlation: 0.2537
## Model description
More information needed
## Intended uses & limitations
"La panique totale" Cette femme trouve une énorme araignée suspendue à sa douche.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 0.13 | 50 | 1.1126 | 0.1589 |
| No log | 0.25 | 100 | 1.0540 | 0.1884 |
| No log | 0.38 | 150 | 1.1533 | 0.0818 |
| No log | 0.51 | 200 | 1.0676 | 0.1586 |
| No log | 0.64 | 250 | 0.9949 | 0.2280 |
| No log | 0.76 | 300 | 1.0343 | 0.2629 |
| No log | 0.89 | 350 | 1.0203 | 0.2478 |
| No log | 1.02 | 400 | 1.0041 | 0.2752 |
| No log | 1.15 | 450 | 1.0808 | 0.2256 |
| 1.023 | 1.27 | 500 | 1.0029 | 0.2532 |
| 1.023 | 1.4 | 550 | 1.0204 | 0.2508 |
| 1.023 | 1.53 | 600 | 1.1377 | 0.1689 |
| 1.023 | 1.65 | 650 | 1.0499 | 0.2926 |
| 1.023 | 1.78 | 700 | 1.0441 | 0.2474 |
| 1.023 | 1.91 | 750 | 1.0279 | 0.2611 |
| 1.023 | 2.04 | 800 | 1.1511 | 0.2804 |
| 1.023 | 2.16 | 850 | 1.2381 | 0.2512 |
| 1.023 | 2.29 | 900 | 1.3340 | 0.2385 |
| 1.023 | 2.42 | 950 | 1.4372 | 0.2842 |
| 0.7325 | 2.54 | 1000 | 1.3967 | 0.2537 |
| 0.7325 | 2.67 | 1050 | 1.4272 | 0.2624 |
| 0.7325 | 2.8 | 1100 | 1.3869 | 0.1941 |
| 0.7325 | 2.93 | 1150 | 1.4983 | 0.2063 |
| 0.7325 | 3.05 | 1200 | 1.4959 | 0.2409 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+0aef44c
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mcclane/movie-director-predictor | 7eacf983e7878ad9a1ae0d63aadc620c9e78b94e | 2022-04-08T20:41:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | mcclane | null | mcclane/movie-director-predictor | 5 | null | transformers | 17,044 | Entry not found |
malcolm/TSC_finetuning-sentiment-movie-model2 | 57d8ebfe6711e670f06bbd476737d8d692c9723d | 2022-04-09T03:26:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | malcolm | null | malcolm/TSC_finetuning-sentiment-movie-model2 | 5 | null | transformers | 17,045 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TSC_finetuning-sentiment-movie-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_finetuning-sentiment-movie-model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1479
- Accuracy: 0.957
- F1: 0.9752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Cheltone/DistilRoBERTa-C19-Vax-Fine-tuned | a11d6ad460066b74a84254e9b968a723c640009a | 2022-04-12T00:34:14.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Cheltone | null | Cheltone/DistilRoBERTa-C19-Vax-Fine-tuned | 5 | null | transformers | 17,046 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- accuracy
- f1
model-index:
- name: DistilRoberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilRoberta
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1246
- Precision: 0.9633
- Accuracy: 0.9697
- F1: 0.9705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:------:|
| 0.5894 | 0.4 | 500 | 0.4710 | 0.8381 | 0.7747 | 0.7584 |
| 0.3863 | 0.8 | 1000 | 0.3000 | 0.8226 | 0.8737 | 0.8858 |
| 0.2272 | 1.2 | 1500 | 0.1973 | 0.9593 | 0.9333 | 0.9329 |
| 0.1639 | 1.6 | 2000 | 0.1694 | 0.9067 | 0.9367 | 0.9403 |
| 0.1263 | 2.0 | 2500 | 0.1128 | 0.9657 | 0.9597 | 0.9603 |
| 0.0753 | 2.4 | 3000 | 0.1305 | 0.9614 | 0.967 | 0.9679 |
| 0.0619 | 2.8 | 3500 | 0.1246 | 0.9633 | 0.9697 | 0.9705 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cathen/test_model_car | 1bf8f5bb42463f1a63cfb906257c628be212b3b6 | 2022-04-10T22:06:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | cathen | null | cathen/test_model_car | 5 | null | transformers | 17,047 | Entry not found |
baikal/electra-wp30 | cbae3154f40344932954bd7277ffdc6ddb0827f9 | 2022-04-11T03:48:41.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"dataset:한국어위키",
"dataset:국립국어원 문어데이터셋",
"transformers"
] | null | false | baikal | null | baikal/electra-wp30 | 5 | null | transformers | 17,048 | ---
language: ko
datasets:
- 한국어위키
- 국립국어원 문어데이터셋
---
ELECTRA-base
---
- model: electra-base-discriminator
- vocab: bert-wordpiece, 30,000
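A minimal loading sketch (assuming the checkpoint loads with the standard ELECTRA classes; the Korean example sentence is invented):
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("baikal/electra-wp30")
model = AutoModel.from_pretrained("baikal/electra-wp30")

inputs = tokenizer("한국어 위키 문장입니다.", return_tensors="pt")
outputs = model(**inputs)  # use last_hidden_state for downstream fine-tuning
```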
|
vocab-transformers/distilbert-word2vec_256k-MLM_best | ee249caa94fb88a16954a85137efc67000c424a1 | 2022-04-11T11:13:13.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vocab-transformers | null | vocab-transformers/distilbert-word2vec_256k-MLM_best | 5 | null | transformers | 17,049 | # DistilBERT with word2vec token embeddings
This model has a word2vec token embedding matrix with 256k entries. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs.
Then the model was trained on this dataset with MLM for 1.37M steps (batch size 64). The token embeddings were NOT updated.
For the initial word2vec weights with Gensim see: [https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec](https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec)
|
maveriq/lingbert-base-32k | bd816fcc8f80936a2c7b49d14cdaf595ed43ece3 | 2022-04-11T17:16:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | maveriq | null | maveriq/lingbert-base-32k | 5 | null | transformers | 17,050 | Entry not found |
adache/distilbert-base-uncased-finetuned-emotion | 368fca48c897a445f86c2786ea25f704c56d15d7 | 2022-04-12T07:48:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | adache | null | adache/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 17,051 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9245
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8398 | 1.0 | 250 | 0.3276 | 0.9005 | 0.8966 |
| 0.2541 | 2.0 | 500 | 0.2270 | 0.9245 | 0.9249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
philschmid/tiny-random-wav2vec2 | 5c8f88769434b22d60a6c6cc848e56677141eef7 | 2022-04-12T06:14:01.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | philschmid | null | philschmid/tiny-random-wav2vec2 | 5 | null | transformers | 17,052 | Entry not found |
Splend1dchan/wav2vec2-large-10min-lv60-self | 287652e731abad05cd3c57b9d10dce15aedc18d4 | 2022-05-30T04:37:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.11430",
"arxiv:2006.11477",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-10min-lv60-self | 5 | null | transformers | 17,053 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-10min-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
args: en
metrics:
- name: Test WER
type: wer
value: None
---
# Wav2Vec2-Large-10min-Lv60 + Self-Training
# This is a direct state_dict transfer from fairseq to Hugging Face; the weights are identical
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model, pretrained on 53k hours of unlabeled Libri-Light audio and fine-tuned on the 10-minute labeled subset of Libri-Light/Librispeech (16kHz sampled speech audio). The model was trained with the [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
They show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **Splend1dchan/wav2vec2-large-10min-lv60-self** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-10min-lv60-self")
def map_to_pred(batch):
    inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription[0]  # batch_decode returns a list; keep the single string
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])  # the dataset has an "audio" column, not "speech"
print("WER:", wer(result["text"], result["transcription"]))
```
<!-- *Result (WER)*:
| "clean" | "other" |
|---|---|
| untested | untested | --> |
conviette/korPolBERT | 75737219014da62a0fc94f43ddc61f526d4ba6b7 | 2022-04-25T04:00:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | conviette | null | conviette/korPolBERT | 5 | null | transformers | 17,054 |
---
license: apache-2.0
---
This model is a binary classifier developed to analyze comment authorship patterns on Korean news articles.
For further details, refer to our paper in the journal *Journalism*: [News comment sections and online echo chambers: The ideological alignment between partisan news stories and their user comments](https://journals.sagepub.com/doi/full/10.1177/14648849211069241)
* This model is a BERT classification model that classifies Korean user-generated comments into the binary labels liberal or conservative.
* This model was trained on approximately 37,000 user-generated comments collected from NAVER's news portal. The dataset was collected in 2019; as such, note that comments related to recent political topics might not be classified correctly.
* This model is a fine-tuned model based on ETRI's KorBERT.
### How to use
* The model requires an edited version of the transformers class `BertTokenizer`, which can be found in the file `KorBertTokenizer.py`.
* Usage example:
~~~python
from KorBertTokenizer import KorBertTokenizer
from transformers import BertForSequenceClassification
import torch
tokenizer = KorBertTokenizer.from_pretrained('conviette/korPolBERT')
model = BertForSequenceClassification.from_pretrained('conviette/korPolBERT')
def classify(text):
    inputs = tokenizer(text, padding='max_length', max_length=70, return_tensors='pt')
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    return model.config.id2label[predicted_class_id]

input_strings = ['좌파가 나라 경제 안보 말아먹는다',
                 '수꼴들은 나라 일본한테 팔아먹었냐']
for input_string in input_strings:
    print('===\n입력 텍스트: {}\n분류 결과: {}\n==='.format(input_string, classify(input_string)))
~~~
### Model performance
* Accuracy: 0.8322
* F1-Score: 0.8322
* For further technical details on the model, refer to our paper for the W-NUT workshop (EMNLP 2019), [The Fallacy of Echo Chambers: Analyzing the Political Slants of User-Generated News Comments in Korean Media](https://aclanthology.org/D19-5548/).
|
Jatin-WIAI/tamil_relevance_clf | 9138ffeb31e6b62e459659d295031015df4dcfbe | 2022-04-12T10:05:33.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Jatin-WIAI | null | Jatin-WIAI/tamil_relevance_clf | 5 | null | transformers | 17,055 | Entry not found |
CenIA/bert-base-spanish-wwm-uncased-finetuned-qa-sqac | 19c252ec0fb39e6fa584b225fc0b25cc5242aac0 | 2022-04-13T14:42:37.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-qa-sqac | 5 | null | transformers | 17,056 | Entry not found |
Xuan-Rui/pet-10-all | 51e389a4c414b49ff55a02474a91677a9d0acdc3 | 2022-04-13T05:46:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-10-all | 5 | null | transformers | 17,057 | Entry not found |
Xuan-Rui/pet-1000-p4 | bcc1032294fa85e32f75d8b4ec5f28198a4001be | 2022-04-13T07:00:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-p4 | 5 | null | transformers | 17,058 | Entry not found |
Xuan-Rui/pet-1000-all | 43dc21f9c03cba614d400496e1d2d8059a4d66d7 | 2022-04-13T07:06:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-all | 5 | null | transformers | 17,059 | Entry not found |
raquiba/distilbert-base-uncased-finetuned-ner | 53ff8704380e4fb11bc807d2345acb507aeb4e34 | 2022-04-14T11:42:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | raquiba | null | raquiba/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 17,060 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9213829552961903
- name: Recall
type: recall
value: 0.9361226087929299
- name: F1
type: f1
value: 0.9286943010931691
- name: Accuracy
type: accuracy
value: 0.9831604365577391
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9214
- Recall: 0.9361
- F1: 0.9287
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2399 | 1.0 | 878 | 0.0738 | 0.9084 | 0.9178 | 0.9131 | 0.9793 |
| 0.0555 | 2.0 | 1756 | 0.0610 | 0.9207 | 0.9340 | 0.9273 | 0.9825 |
| 0.0305 | 3.0 | 2634 | 0.0619 | 0.9214 | 0.9361 | 0.9287 | 0.9832 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
vabadeh213/autotrain-iine_classification10-737422470 | 285f9e2f30a4fc68b3cebef1c7d05c985b079522 | 2022-04-13T09:24:04.000Z | [
"pytorch",
"bert",
"text-classification",
"ja",
"dataset:vabadeh213/autotrain-data-iine_classification10",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | vabadeh213 | null | vabadeh213/autotrain-iine_classification10-737422470 | 5 | null | transformers | 17,061 | ---
tags: autotrain
language: ja
widget:
- text: "RustでWebAssemblyインタプリタを作った話+webassembly+rust"
- text: "Goのロギングライブラリ 2021年冬 golang library logging go"
- text: "VimとTUIツールをなめらかに切り替える ranger tig git vim"
datasets:
- vabadeh213/autotrain-data-iine_classification10
co2_eq_emissions: 7.351885824089346
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 737422470
- CO2 Emissions (in grams): 7.351885824089346
## Validation Metrics
- Loss: 0.39456263184547424
- Accuracy: 0.8279088689991864
- Precision: 0.6869806094182825
- Recall: 0.17663817663817663
- AUC: 0.7937892215111646
- F1: 0.2810198300283286
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vabadeh213/autotrain-iine_classification10-737422470
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vabadeh213/autotrain-iine_classification10-737422470", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vabadeh213/autotrain-iine_classification10-737422470", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
cometrain/fake-news-detector-t5 | 6b80e1081e1f444f24f4293d58503ec67c5c5244 | 2022-04-13T11:57:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:fake-and-real-news-dataset",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"autotrain_compatible"
] | text2text-generation | false | cometrain | null | cometrain/fake-news-detector-t5 | 5 | null | transformers | 17,062 | ---
language:
- en
tags:
- Cometrain AutoCode
- Cometrain AlphaML
datasets:
- fake-and-real-news-dataset
widget:
- text: "Former FBI Agent: We've never been to the moon"
example_title: "Apollo program misinformation"
- text: "Finland to make decision on NATO membership in coming weeks"
example_title: "Article from Reuters about Finland & NATO"
inference:
parameters:
top_p: 0.9
temperature: 0.5
---
# fake-news-detector-t5
This model has been automatically fine-tuned and tested as part of the development of a GPT-2-based AutoML framework for accelerated and easy development of enterprise NLP solutions. The fine-tuned [T5](https://huggingface.co/t5-base) model recognizes fake news and misinformation.
Automatically trained on the [Fake and real news dataset (2017)](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset).
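A minimal inference sketch (the sampling parameters mirror the widget settings above; the exact label strings the model emits are an assumption, so inspect the decoded output):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cometrain/fake-news-detector-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("cometrain/fake-news-detector-t5")

inputs = tokenizer("Former FBI Agent: We've never been to the moon", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.9, temperature=0.5, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```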
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode
## Cometrain AlphaML command
```shell
$ cometrain create --name fake-news-detector --model auto --task 'Finetune the machine learning model for recognizing fake news' --output transformers
```
|
Helsinki-NLP/opus-mt-tc-big-en-lt | 60f0ff0262c5f85d1b924328e123cb8ae1e5590a | 2022-06-01T13:03:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lt",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-lt | 5 | null | transformers | 17,063 | ---
language:
- en
- lt
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-lt
results:
- task:
name: Translation eng-lit
type: translation
args: eng-lit
dataset:
name: flores101-devtest
type: flores_101
args: eng lit devtest
metrics:
- name: BLEU
type: bleu
value: 28.0
- task:
name: Translation eng-lit
type: translation
args: eng-lit
dataset:
name: newsdev2019
type: newsdev2019
args: eng-lit
metrics:
- name: BLEU
type: bleu
value: 26.6
- task:
name: Translation eng-lit
type: translation
args: eng-lit
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-lit
metrics:
- name: BLEU
type: bleu
value: 39.5
- task:
name: Translation eng-lit
type: translation
args: eng-lit
dataset:
name: newstest2019
type: wmt-2019-news
args: eng-lit
metrics:
- name: BLEU
type: bleu
value: 17.5
---
# opus-mt-tc-big-en-lt
Neural machine translation model for translating from English (en) to Lithuanian (lt).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): lit
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lit/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-lit README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-lit/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"A cat was sitting on the chair.",
"Yukiko likes potatoes."
]
model_name = "pytorch-models/opus-mt-tc-big-en-lt"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Katė sėdėjo ant kėdės.
# Jukiko mėgsta bulves.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-lt")
print(pipe("A cat was sitting on the chair."))
# expected output: Katė sėdėjo ant kėdės.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lit/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-lit/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-lit | tatoeba-test-v2021-08-07 | 0.67434 | 39.5 | 2528 | 14942 |
| eng-lit | flores101-devtest | 0.59593 | 28.0 | 1012 | 20695 |
| eng-lit | newsdev2019 | 0.58444 | 26.6 | 2000 | 39627 |
| eng-lit | newstest2019 | 0.51559 | 17.5 | 998 | 19711 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:42:39 EEST 2022
* port machine: LM0-400-22516.local
|
dbounds/roberta-large-finetuned-clinc | 9ac07f17ca93990c893a1603bf2fab16ff812375 | 2022-04-13T16:30:14.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | dbounds | null | dbounds/roberta-large-finetuned-clinc | 5 | null | transformers | 17,064 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9741935483870968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Accuracy: 0.9742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0651 | 1.0 | 120 | 5.0213 | 0.0065 |
| 4.2482 | 2.0 | 240 | 2.5682 | 0.7997 |
| 1.694 | 3.0 | 360 | 0.6019 | 0.9445 |
| 0.4594 | 4.0 | 480 | 0.2330 | 0.9655 |
| 0.1599 | 5.0 | 600 | 0.1594 | 0.9742 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ASCCCCCCCC/PENGMENGJIE-finetuned-sms | 5b22afa690dd3f81adf4e269eb90882ab65c3f23 | 2022-04-14T07:57:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/PENGMENGJIE-finetuned-sms | 5 | null | transformers | 17,065 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PENGMENGJIE-finetuned-sms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-sms
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0116 | 1.0 | 1250 | 0.0060 | 0.999 | 0.9990 |
| 0.003 | 2.0 | 2500 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Srini99/TQA | e937ad55866ba1d7246c04239018850cee70a16d | 2022-04-14T13:15:13.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"multilingual",
"Tamil",
"dataset:squad v2",
"dataset:chaii",
"dataset:mlqa",
"dataset:xquad",
"transformers",
"autotrain_compatible"
] | question-answering | false | Srini99 | null | Srini99/TQA | 5 | null | transformers | 17,066 | ---
language:
- multilingual
- Tamil
tags:
- question-answering
datasets:
- squad v2
- chaii
- mlqa
- xquad
metrics:
- Exact Match
- F1
widget:
- text: "சென்னையில் எத்தனை மக்கள் வாழ்கின்றனர்?"
context: "சென்னை (Chennai) தமிழ்நாட்டின் தலைநகரமும் இந்தியாவின் நான்காவது பெரிய நகரமும் ஆகும். 1996 ஆம் ஆண்டுக்கு முன்னர் இந்நகரம் மெட்ராஸ் (Madras) என்று அழைக்கப்பட்டு வந்தது. சென்னை, வங்காள விரிகுடாவின் கரையில் அமைந்த துறைமுக நகரங்களுள் ஒன்று. சுமார் 10 மில்லியன் (ஒரு கோடி) மக்கள் வாழும் இந்நகரம், உலகின் 35 பெரிய மாநகரங்களுள் ஒன்று. 17ஆம் நூற்றாண்டில் ஆங்கிலேயர் சென்னையில் கால் பதித்தது முதல், சென்னை நகரம் ஒரு முக்கிய நகரமாக வளர்ந்து வந்திருக்கிறது. சென்னை தென்னிந்தியாவின் வாசலாகக் கருதப்படுகிறது. சென்னை நகரில் உள்ள மெரினா கடற்கரை உலகின் நீளமான கடற்கரைகளுள் ஒன்று. சென்னை கோலிவுட் (Kollywood) என அறியப்படும் தமிழ்த் திரைப்படத் துறையின் தாயகம். பல விளையாட்டு அரங்கங்கள் உள்ள சென்னையில் பல விளையாட்டுப் போட்டிகளும் நடைபெறுகின்றன."
example_title: "Question Answering"
---
# XLM-RoBERTa Large trained on Dravidian Language QA
## Overview
**Language model:** XLM-RoBERTa-lg
**Language:** Multilingual, focussed on Tamil & Hindi
**Downstream-task:** Extractive QA
**Eval data:** K-Fold on Training Data
## Hyperparameters
```
batch_size = 4
base_LM_model = "xlm-roberta-large"
learning_rate = 1e-5
optimizer = AdamW
weight_decay = 1e-2
epsilon = 1e-8
max_grad_norm = 1.0
lr_schedule = LinearWarmup
warmup_proportion = 0.2
max_seq_len = 256
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on our human-annotated dataset with 1,000 Tamil question-context pairs [link]
```
"em": 77.536,
"f1": 85.665
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "Srini99/FYP_TamilQA"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'யாரால் பொங்கல் சிறப்பாகக் கொண்டாடப்படுகிறது?',
'context': 'பொங்கல் என்பது தமிழர்களால் சிறப்பாகக் கொண்டாடப்படும் ஓர் அறுவடைப் பண்டிகை ஆகும்.'
}
res = nlp(QA_input)
``` |
achyut/patronizing_detection | 0103f058fe766023975e00a01310eb32ef37ded9 | 2022-04-21T05:18:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | achyut | null | achyut/patronizing_detection | 5 | 0 | transformers | 17,067 | This model is fine tuned for Patronizing and Condescending Language Classification task. Have fun. |
brad1141/oldData_BERT | 715d5c134c0780bb1d359cdb59d3fa7b4a8d7fb9 | 2022-04-14T21:27:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | brad1141 | null | brad1141/oldData_BERT | 5 | null | transformers | 17,068 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: oldData_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oldData_BERT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2348 | 1.0 | 1125 | 1.0185 |
| 1.0082 | 2.0 | 2250 | 0.7174 |
| 0.699 | 3.0 | 3375 | 0.3657 |
| 0.45 | 4.0 | 4500 | 0.1880 |
| 0.2915 | 5.0 | 5625 | 0.1140 |
| 0.2056 | 6.0 | 6750 | 0.0708 |
| 0.1312 | 7.0 | 7875 | 0.0616 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
agdsga/chinese-bert-wwm-finetuned-product-1 | 97997ec5b14c5525208f5c4af79cdb5ed76e4285 | 2022-04-15T06:06:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | agdsga | null | agdsga/chinese-bert-wwm-finetuned-product-1 | 5 | null | transformers | 17,069 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: chinese-bert-wwm-finetuned-product-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-finetuned-product-1
This model is a fine-tuned version of [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0000
- eval_runtime: 10.6737
- eval_samples_per_second: 362.572
- eval_steps_per_second: 5.715
- epoch: 11.61
- step: 18797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
malcolm/REA_GenderIdentification_v1 | 49ddc67ea8eae13a4c404d2d5493899122954fc9 | 2022-04-15T08:38:29.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | malcolm | null | malcolm/REA_GenderIdentification_v1 | 5 | null | transformers | 17,070 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: REA_GenderIdentification_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# REA_GenderIdentification_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3366
- Accuracy: 0.8798
- F1: 0.8522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MartinoMensio/racism-models-m-vote-strict-epoch-3 | 2743b83c880f0ab25c1090ad95db14413ba114c5 | 2022-05-04T16:09:42.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
] | text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-m-vote-strict-epoch-3 | 5 | null | transformers | 17,071 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022)
We applied 6 different methods ground-truth estimations, and for each one we performed 4 epochs of fine-tuning. The result is made of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `m-vote-strict-epoch-3`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'm-vote-strict-epoch-3'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9929012656211853}, {'label': 'non-racist', 'score': 0.5616322159767151}]
```
For more details, see https://github.com/preyero/neatclass22
|
MartinoMensio/racism-models-w-m-vote-strict-epoch-2 | 5fa46b2b78381972e4dbc23fa3cf863cd648e457 | 2022-05-04T16:25:07.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
] | text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-w-m-vote-strict-epoch-2 | 5 | null | transformers | 17,072 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods and, for each one, performed 4 epochs of fine-tuning, resulting in 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-strict-epoch-2`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-strict-epoch-2'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.8647435903549194}, {'label': 'non-racist', 'score': 0.9660486578941345}]
```
For more details, see https://github.com/preyero/neatclass22
|
MartinoMensio/racism-models-w-m-vote-strict-epoch-3 | cb7b4608fd92c4fc1d0ee0e2808dc62e292be938 | 2022-05-04T16:26:07.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
] | text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-w-m-vote-strict-epoch-3 | 5 | null | transformers | 17,073 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods and, for each one, performed 4 epochs of fine-tuning, resulting in 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-strict-epoch-3`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-strict-epoch-3'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9619585871696472}, {'label': 'non-racist', 'score': 0.9396700859069824}]
```
For more details, see https://github.com/preyero/neatclass22
|
MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2 | 6d37dde0ff7297036ab038e10a15c94c6670fc3f | 2022-05-04T16:28:04.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
] | text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2 | 5 | null | transformers | 17,074 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods and, for each one, performed 4 epochs of fine-tuning, resulting in 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-nonstrict-epoch-2`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-nonstrict-epoch-2'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9680026173591614}, {'label': 'non-racist', 'score': 0.9936750531196594}]
```
For more details, see https://github.com/preyero/neatclass22
|
aseifert/comma-xlm-roberta-base | 0fda25343a38aab69e6c87c0eb1cba45649b7455 | 2022-04-15T21:08:32.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | aseifert | null | aseifert/comma-xlm-roberta-base | 5 | null | transformers | 17,075 | Entry not found |
jason9693/klue-roberta-small-apeach | e9436f2c8c0cd0348c8b5d067503faf9dca09c2f | 2022-04-16T14:21:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"ko",
"dataset:jason9693/APEACH",
"transformers"
] | text-classification | false | jason9693 | null | jason9693/klue-roberta-small-apeach | 5 | null | transformers | 17,076 | ---
language: ko
widget:
- text: "응 어쩔티비~~"
datasets:
- jason9693/APEACH
--- |
Raychanan/bert-bert-cased-first512-Conflict-SEP | 2d880a94115152ff96da443e7d340e7a47f1298f | 2022-04-16T19:16:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Raychanan | null | Raychanan/bert-bert-cased-first512-Conflict-SEP | 5 | null | transformers | 17,077 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: bert-bert-cased-first512-Conflict-SEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-bert-cased-first512-Conflict-SEP
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6806
- F1: 0.6088
- Accuracy: 0.5914
- Precision: 0.5839
- Recall: 0.6360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
| 0.7027 | 1.0 | 685 | 0.6956 | 0.6018 | 0.5365 | 0.5275 | 0.7003 |
| 0.7009 | 2.0 | 1370 | 0.6986 | 0.6667 | 0.5 | 0.5 | 1.0 |
| 0.7052 | 3.0 | 2055 | 0.6983 | 0.6667 | 0.5 | 0.5 | 1.0 |
| 0.6987 | 4.0 | 2740 | 0.6830 | 0.5235 | 0.5636 | 0.5764 | 0.4795 |
| 0.6761 | 5.0 | 3425 | 0.6806 | 0.6088 | 0.5914 | 0.5839 | 0.6360 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
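### Usage
A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the example input is illustrative, and the card does not document what the output labels mean.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint directly from the Hub.
pipe = pipeline(
    "text-classification",
    model="Raychanan/bert-bert-cased-first512-Conflict-SEP",
)

# Illustrative input; the label semantics (e.g. conflict vs. no-conflict) are an assumption.
print(pipe("The two sides traded accusations and the negotiations collapsed."))
```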
|
clapika2010/hospital_detection | 9b6739319ae49c2bfab53485b27293d0bbdfb4dc | 2022-04-18T05:57:28.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | clapika2010 | null | clapika2010/hospital_detection | 5 | null | transformers | 17,078 | Entry not found |
EandrewJones/distilbert-base-uncased-finetuned-mediations | cdab35f9479c380e7a8b253bf46571f92c594dc4 | 2022-04-18T20:09:53.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | EandrewJones | null | EandrewJones/distilbert-base-uncased-finetuned-mediations | 5 | null | transformers | 17,079 | Entry not found |
dapang/tqa_s2s | 025a2b17baf26628edbbc718120a62fc424a2ff6 | 2022-04-17T06:11:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | dapang | null | dapang/tqa_s2s | 5 | null | transformers | 17,080 | ---
license: mit
---
|
Cheltone/TESTING | 4df97526c67ab81b26f1ce81ef2ff612d6afe011 | 2022-04-19T01:19:34.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Cheltone | null | Cheltone/TESTING | 5 | null | transformers | 17,081 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- accuracy
- f1
model-index:
- name: TESTING
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TESTING
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1167
- Precision: 0.9561
- Accuracy: 0.9592
- F1: 0.9592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:------:|
| 0.5903 | 0.4 | 500 | 0.4695 | 0.7342 | 0.7728 | 0.7890 |
| 0.3986 | 0.8 | 1000 | 0.3469 | 0.8144 | 0.8596 | 0.8684 |
| 0.2366 | 1.2 | 1500 | 0.1939 | 0.9313 | 0.9260 | 0.9253 |
| 0.1476 | 1.6 | 2000 | 0.1560 | 0.9207 | 0.9452 | 0.9465 |
| 0.1284 | 2.0 | 2500 | 0.1167 | 0.9561 | 0.9592 | 0.9592 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
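### Usage
A minimal usage sketch, assuming the standard `transformers` sequence-classification API; the input sentence is illustrative, and the index-to-label mapping is not documented in this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Cheltone/TESTING"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class probabilities; the meaning of each class index is an assumption.
print(torch.softmax(logits, dim=-1))
```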
|
benjaminbeilharz/t5-empatheticdialogues | 3d26e8503e6747d3b81676bb60a71cce8de57c70 | 2022-04-17T22:14:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | benjaminbeilharz | null | benjaminbeilharz/t5-empatheticdialogues | 5 | null | transformers | 17,082 | Entry not found |
jason9693/kcelectra-v2022-dev-apeach | f6f4acfaf090d6f9c25ff08ff81ad0fcc2583c8c | 2022-04-18T02:33:41.000Z | [
"pytorch",
"electra",
"text-classification",
"ko",
"dataset:jason9693/APEACH",
"transformers"
] | text-classification | false | jason9693 | null | jason9693/kcelectra-v2022-dev-apeach | 5 | 1 | transformers | 17,083 | ---
language: ko
widget:
- text: "코딩을 🐶🍾👟같이 하니까 맨날 장애나잖아 이 🧑🦽아"
datasets:
- jason9693/APEACH
--- |
crcb/dvs_f | 08c846b56dca291c67d749591ba93ea4f6faae28 | 2022-04-18T13:44:09.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:crcb/autotrain-data-dvs",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | crcb | null | crcb/dvs_f | 5 | null | transformers | 17,084 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-dvs
co2_eq_emissions: 8.758858538967111
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 753223045
- CO2 Emissions (in grams): 8.758858538967111
## Validation Metrics
- Loss: 0.14833936095237732
- Accuracy: 0.9471454508775469
- Precision: 0.5045871559633027
- Recall: 0.4166666666666667
- AUC: 0.8806422686270332
- F1: 0.4564315352697096
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-dvs-753223045
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-dvs-753223045", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-dvs-753223045", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
manueldeprada/ctTREC-distillbert-correct-classifier-trec2020 | e11637f3d632e378283e33921e7a5c719fdd1616 | 2022-04-18T14:17:15.000Z | [
"pytorch",
"jax",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | manueldeprada | null | manueldeprada/ctTREC-distillbert-correct-classifier-trec2020 | 5 | null | transformers | 17,085 | Entry not found |
Gunulhona/tbnymodel_v2 | 13978e772cb2df4e049e1ed9fbf78d1c17212ac2 | 2022-04-18T15:37:58.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | Gunulhona | null | Gunulhona/tbnymodel_v2 | 5 | null | transformers | 17,086 | Entry not found |
ucabqfe/roberta_PER_io | 7d0e91b96161acdc40d01b6a700ad83b936f49e6 | 2022-04-18T17:56:28.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ucabqfe | null | ucabqfe/roberta_PER_io | 5 | null | transformers | 17,087 | Entry not found |
ShihTing/QA_Leave | 3c0226aacac639fc4113c04b882cc0a944ad0e78 | 2022-04-19T03:41:36.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"transformers",
"autonlp"
] | text-classification | false | ShihTing | null | ShihTing/QA_Leave | 5 | null | transformers | 17,088 | # Title Self-made QA for leave requests
---
tags: autonlp
language: unk
widget:
- text: "如果我想請特休,要怎麼使用"
- text: "我想請事假"
---
A self-made QA model for leave requests.
Training and validation sets are kept separate.
67 training samples and 23 validation samples, 23 classes in total; i.e., the validation data tests exactly one example per class.
Validation acc = 1.0
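### Usage
A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the inputs below reuse the widget examples from this card.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ShihTing/QA_Leave")

# The two widget examples from the card (leave-request questions in Chinese).
print(classifier(["如果我想請特休,要怎麼使用", "我想請事假"]))
```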
|
xInsignia/autotrain-Online_orders-755323156 | 2807a6e6308fc286a005384da2308862bde1fa6c | 2022-04-19T03:29:18.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:xInsignia/autotrain-data-Online_orders-5cf92320",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | xInsignia | null | xInsignia/autotrain-Online_orders-755323156 | 5 | null | transformers | 17,089 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- xInsignia/autotrain-data-Online_orders-5cf92320
co2_eq_emissions: 2.4120667129093043
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 755323156
- CO2 Emissions (in grams): 2.4120667129093043
## Validation Metrics
- Loss: 0.17826060950756073
- Accuracy: 0.9550898203592815
- Macro F1: 0.8880388927888968
- Micro F1: 0.9550898203592815
- Weighted F1: 0.9528256324309916
- Macro Precision: 0.9093073732635162
- Micro Precision: 0.9550898203592815
- Weighted Precision: 0.9533674643333371
- Macro Recall: 0.8872729481745715
- Micro Recall: 0.9550898203592815
- Weighted Recall: 0.9550898203592815
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/xInsignia/autotrain-Online_orders-755323156
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("xInsignia/autotrain-Online_orders-755323156", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("xInsignia/autotrain-Online_orders-755323156", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
uaritm/lik_neuro_202 | ec8eb5c47b487ec1fb4aab3a54dae17ce15aee76 | 2022-05-11T09:19:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"uk",
"transformers",
"russian",
"ukrainian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | uaritm | null | uaritm/lik_neuro_202 | 5 | null | transformers | 17,090 | ---
language: ["ru", "uk"]
tags:
- russian
- ukrainian
license: mit
---
# lik_neuro_202
The model was trained on a Russian-Ukrainian dataset of medical question-answer pairs (neurology and psychotherapy).
The model is not a medical application; using it for medical purposes is strongly discouraged!
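### Usage
A minimal usage sketch, assuming the standard `transformers` seq2seq API; the question below is an illustrative Russian example, and the exact input format the model expects is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "uaritm/lik_neuro_202"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Что делать при частых головных болях?"  # illustrative question
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```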
anshr/t5-base_supervised_baseline_01 | aa6c0ffca47b57aa57f38ac0ab3c63f9bc9d23e1 | 2022-04-19T20:52:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | anshr | null | anshr/t5-base_supervised_baseline_01 | 5 | null | transformers | 17,091 | Entry not found |
Aldraz/distilbert-base-uncased-finetuned-emotion | 99128201f415ea496562f203897103ea524ab163 | 2022-04-20T02:04:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Aldraz | null | Aldraz/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 17,092 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Accuracy: 0.921
- F1: 0.9214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3369 | 0.8985 | 0.8947 |
| No log | 2.0 | 500 | 0.2319 | 0.921 | 0.9214 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1+cpu
- Datasets 2.1.0
- Tokenizers 0.11.6
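### Usage
A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the input is illustrative, and the emotion label set is an assumption based on the model name.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Aldraz/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # score every emotion class, not just the top one
)

print(classifier("I can't wait to see you again!"))
```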
|
eslamxm/mT5_multilingual_XLSum-finetuned-ar-wikilingua | fe055602107c713e31599b1c8b4c0e1ef2afc753 | 2022-04-20T18:31:30.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mT5_multilingual_XLSum-finetuned-ar-wikilingua | 5 | null | transformers | 17,093 | ---
tags:
- summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mT5_multilingual_XLSum-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-ar-wikilingua
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6903
- Rouge-1: 24.47
- Rouge-2: 7.69
- Rouge-l: 20.04
- Gen Len: 39.64
- Bertscore: 72.63
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.4406 | 1.0 | 5111 | 3.9582 | 22.35 | 6.84 | 18.39 | 34.78 | 71.94 |
| 4.0158 | 2.0 | 10222 | 3.8316 | 22.87 | 7.24 | 18.92 | 34.7 | 71.99 |
| 3.8626 | 3.0 | 15333 | 3.7695 | 23.65 | 7.5 | 19.6 | 35.53 | 72.31 |
| 3.7626 | 4.0 | 20444 | 3.7313 | 24.01 | 7.59 | 19.68 | 38.16 | 72.41 |
| 3.6934 | 5.0 | 25555 | 3.7118 | 24.37 | 7.77 | 19.93 | 39.36 | 72.47 |
| 3.6421 | 6.0 | 30666 | 3.7016 | 24.48 | 7.8 | 20.07 | 38.58 | 72.58 |
| 3.6073 | 7.0 | 35777 | 3.6907 | 24.31 | 7.83 | 20.13 | 38.07 | 72.5 |
| 3.5843 | 8.0 | 40888 | 3.6903 | 24.55 | 7.88 | 20.2 | 38.33 | 72.6 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
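### Usage
A minimal usage sketch for Arabic summarization, assuming the standard `transformers` seq2seq API; the placeholder text and generation settings are illustrative, not the settings used for the reported scores.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "eslamxm/mT5_multilingual_XLSum-finetuned-ar-wikilingua"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "ضع هنا نص المقال العربي المراد تلخيصه."  # placeholder Arabic article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=84)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```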
|
frozenwalker/T5_pubmedqa_question_generation_preTrained_MedQuad | 2d699e987ea3403edbfd844e292a6829574cfba0 | 2022-04-20T12:23:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | frozenwalker | null | frozenwalker/T5_pubmedqa_question_generation_preTrained_MedQuad | 5 | null | transformers | 17,094 | Entry not found |
sanime/distilbert-base-uncased-finetuned-emotion | 1bb515855710f5fcdbcce468ac09a9e163d30e9a | 2022-04-20T13:14:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | sanime | null | sanime/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 17,095 | Entry not found |
Jeevesh8/feather_berts_19 | d426c30c8b842d8acff6db4ae6f2f2f023162824 | 2022-04-20T13:21:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_19 | 5 | null | transformers | 17,096 | Entry not found |
Jeevesh8/feather_berts_20 | 2a4e8e5088972809b1bf61cad4940bbcd1125450 | 2022-04-20T13:21:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_20 | 5 | null | transformers | 17,097 | Entry not found |
Jeevesh8/feather_berts_28 | 599554709d6daad126b27e71aeb32d1c923db2a7 | 2022-04-20T13:24:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_28 | 5 | null | transformers | 17,098 | Entry not found |
afbudiman/indobert-distilled-optimized-for-classification | f1e71ca07706aba3ef783969016e88531d8928f1 | 2022-04-20T13:59:48.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | afbudiman | null | afbudiman/indobert-distilled-optimized-for-classification | 5 | null | transformers | 17,099 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: indobert-distilled-optimized-for-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9023809523809524
- name: F1
type: f1
value: 0.9020516403647337
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-distilled-optimized-for-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5991
- Accuracy: 0.9024
- F1: 0.9021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.262995179171344e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2938 | 1.0 | 688 | 0.8433 | 0.8484 | 0.8513 |
| 0.711 | 2.0 | 1376 | 0.6408 | 0.8881 | 0.8878 |
| 0.4416 | 3.0 | 2064 | 0.7964 | 0.8794 | 0.8793 |
| 0.2907 | 4.0 | 2752 | 0.7559 | 0.8897 | 0.8900 |
| 0.2065 | 5.0 | 3440 | 0.6892 | 0.8968 | 0.8974 |
| 0.1574 | 6.0 | 4128 | 0.6881 | 0.8913 | 0.8906 |
| 0.1131 | 7.0 | 4816 | 0.6224 | 0.8984 | 0.8982 |
| 0.0865 | 8.0 | 5504 | 0.6312 | 0.8976 | 0.8970 |
| 0.0678 | 9.0 | 6192 | 0.6187 | 0.8992 | 0.8989 |
| 0.0526 | 10.0 | 6880 | 0.5991 | 0.9024 | 0.9021 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
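### Usage
A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the review is an illustrative Indonesian example, and the exact label strings emitted by the checkpoint are an assumption.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="afbudiman/indobert-distilled-optimized-for-classification",
)

# Illustrative Indonesian review ("The service was fast and friendly, I am very satisfied.")
print(classifier("Pelayanannya cepat dan ramah, saya puas sekali."))
```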
|