modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mateocolina/xlm-roberta-base-finetuned-marc-en | d5ab585a22012b4864dc8ae6206333c553666f2d | 2021-12-16T14:39:14.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mateocolina | null | mateocolina/xlm-roberta-base-finetuned-marc-en | 5 | null | transformers | 16,700 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9276
- Mae: 0.5366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0992 | 1.0 | 235 | 0.9340 | 0.5122 |
| 0.945 | 2.0 | 470 | 0.9276 | 0.5366 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mathew/layoutlmv2-finetuned-funsd-1024 | f856d7dc1d534bc1f6b39ef5c152aedefa4b8d36 | 2021-10-24T06:13:48.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mathew | null | mathew/layoutlmv2-finetuned-funsd-1024 | 5 | null | transformers | 16,701 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-1024
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
matprado/DialoGPT-small-rick-sanchez | 3d3ebace7dab9e0494d7ee60686aeea392573c00 | 2021-07-09T17:01:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | matprado | null | matprado/DialoGPT-small-rick-sanchez | 5 | null | transformers | 16,702 | ---
tags:
- conversational
---
# GPT |
mattchurgin/bert-finetuned-ner | 39d485afdf7c51299e577338c2b5be3c43dd3652 | 2022-01-20T22:16:43.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mattchurgin | null | mattchurgin/bert-finetuned-ner | 5 | null | transformers | 16,703 | Entry not found |
maxxx2021/DialGPT-small-harrypotter | 85166a329282564fb6e2a517c7c0573e9db41eae | 2021-09-13T22:43:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | maxxx2021 | null | maxxx2021/DialGPT-small-harrypotter | 5 | null | transformers | 16,704 | ---
tags:
- conversational
---
# Harry Potter DialGPT Model |
mbateman/bert-finetuned-ner | 9171971eebce47788f5df15a53185b40f502dee8 | 2022-01-04T20:30:26.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | mbateman | null | mbateman/bert-finetuned-ner | 5 | null | transformers | 16,705 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9333553828344634
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9415297355909584
- name: Accuracy
type: accuracy
value: 0.9868281627126626
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9334
- Recall: 0.9498
- F1: 0.9415
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0881 | 1.0 | 1756 | 0.0683 | 0.9136 | 0.9322 | 0.9228 | 0.9826 |
| 0.0383 | 2.0 | 3512 | 0.0641 | 0.9277 | 0.9456 | 0.9366 | 0.9854 |
| 0.0229 | 3.0 | 5268 | 0.0622 | 0.9334 | 0.9498 | 0.9415 | 0.9868 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.1
|
mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo | 838ccbada6753782040cdc0958648e2930f37cb4 | 2021-11-25T09:04:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ig",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
] | token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo | 5 | null | transformers | 16,706 | ---
language:
- ig
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwụla Ekweremmadụ"
---
# xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-igbo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Igbo part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo) (This model) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | ibo | 88.39 | 87.08 | 89.74 | 74.00 | 91.00 | 90.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-igbo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | ibo | 84.93 | 83.63 | 86.26 | 70.00 | 88.00 | 89.00 | 84.00 |
| [xlm-roberta-base-finetuned-ner-igbo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-igbo) | [base](https://huggingface.co/xlm-roberta-base) | ibo | 86.06 | 85.20 | 86.94 | 76.00 | 86.00 | 90.00 | 87.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-igbo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ike ịda jụụ otụ nkeji banyere oke ogbugbu na - eme n'ala Naijiria agwụla Ekweremmadụ"
ner_results = nlp(example)
print(ner_results)
```
|
mbien/fdh-wikibio | bb63af9dcc878d3e89c75bef4caabd6da20b2df9 | 2021-05-23T08:55:05.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mbien | null | mbien/fdh-wikibio | 5 | null | transformers | 16,707 | # fdh-wikibio
Model used to prepare the Biography Generator for the EPFL Foundations of Digital Humanities course.
## Project description
Please read our report on FDH page: http://fdh.epfl.ch/index.php/WikiBio
## Project result
You're invited to read through our generated biographies!
https://wikibio.mbien.pl/ |
mbien/fma2vec | 574966ab4cb292e0ffbfddeb9feb9054cc90453d | 2021-07-06T12:35:57.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | mbien | null | mbien/fma2vec | 5 | null | transformers | 16,708 | # Predicting music popularity using DNNs
This is a pre-trained wav2vec2.0 model, trained on the full Free Music Archive repository, created as part of the DH-401: Digital Musicology class at EPFL
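A minimal feature-extraction sketch (my own, not from the original card, and assuming the checkpoint loads with the standard Wav2Vec2 classes and expects 16 kHz mono audio):
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Hedged sketch: standard Wav2Vec2 loading; the exact preprocessing used in the project may differ.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("mbien/fma2vec")
model = Wav2Vec2Model.from_pretrained("mbien/fma2vec")

# Placeholder: 5 seconds of silence at 16 kHz; replace with a real audio track.
waveform = np.zeros(16000 * 5, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    features = model(**inputs).last_hidden_state  # shape: (1, num_frames, hidden_size)
```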
## Team
* Elisa ([email protected])
* Michał ([email protected])
* Noé ([email protected])
## Milestone 3
The main notebook presenting our results is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3.ipynb)
Notebook describing the details of Wav2Vec2.0 pre-training and fine-tuning for the task is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone3-wav2vec2.ipynb)
## Milestone 2
Exploratory data analysis notebook is available [here](https://nbviewer.jupyter.org/github/Glorf/DH-401/blob/main/milestone2.ipynb)
## Milestone 1
Refined project proposal is available [here](https://github.com/Glorf/DH-401/blob/main/milestone0.md)
## Milestone 0
Original project proposal is available in git history [here](https://github.com/Glorf/DH-401/blob/bb14813ff2bbbd9cdc6b6eecf34c9e3c160598eb/milestone0.md) |
megagonlabs/bimeanvae-yelp | b8702d7b6e88fdae30e8f73c5dfd6cd45ce51f4f | 2021-09-11T00:12:51.000Z | [
"pytorch",
"en",
"transformers",
"summarization",
"license:bsd-3-clause"
] | summarization | false | megagonlabs | null | megagonlabs/bimeanvae-yelp | 5 | 1 | transformers | 16,709 | ---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---
## BiMeanVAE model
See original GitHub repo for more details [here](https://github.com/megagonlabs/coop)
|
mfuntowicz/bert-base-cased-finetuned-sst2 | 1b000bd1de50ed79cf3826f9240b87de1fc9c7bb | 2021-05-19T23:19:10.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mfuntowicz | null | mfuntowicz/bert-base-cased-finetuned-sst2 | 5 | null | transformers | 16,710 | Entry not found |
microsoft/deberta-xlarge-v2-mnli | 7042bc565d0fbdf2a4840ff70eeafd057fabec08 | 2021-02-11T02:04:40.000Z | [
"pytorch",
"deberta-v2",
"en",
"transformers",
"deberta",
"license:mit"
] | null | false | microsoft | null | microsoft/deberta-xlarge-v2-mnli | 5 | null | transformers | 16,711 | ---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli)
|
microsoft/unispeech-sat-base-sd | b175a5cfa53b3d4f1bcf94fb06d6fc5d1972098b | 2021-12-17T18:39:23.000Z | [
"pytorch",
"unispeech-sat",
"audio-frame-classification",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"transformers",
"speech"
] | null | false | microsoft | null | microsoft/unispeech-sat-base-sd | 5 | null | transformers | 16,712 | ---
language:
- en
datasets:
- librispeech_asr
tags:
- speech
---
# UniSpeech-SAT-Base for Speaker Diarization
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 960 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-sd')
model = UniSpeechSatForAudioFrameClassification.from_pretrained('microsoft/unispeech-sat-base-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
midas/gupshup_h2e_gpt | 4df3a144cd053f17c731cf442039d1402bcfa284 | 2021-11-14T02:08:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.04073",
"transformers"
] | text-generation | false | midas | null | midas/gupshup_h2e_gpt | 5 | null | transformers | 16,713 | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g. train.source), whereas summaries use the `.target` extension (e.g. train.target). The ".source" file needs to be provided to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Hugging Face model hub. Users can either download these weights locally and provide that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts download the weights automatically.
Model names follow the "gupshup_TASK_MODEL" pattern, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. The run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files in the `input_path` and `reference_path` arguments. Or you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
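For reference, a rough Python sketch (my own, not the official `run_eval.py`) of summarizing a single dialogue with one of the seq2seq checkpoints listed above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: direct generation with the BART variant; generation settings are illustrative.
model_name = "midas/gupshup_h2e_bart"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = "..."  # one Hinglish conversation, e.g. a line from test.source
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```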
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
mikeee/model-zs | 9d67271c32a3fed01c97322bc6d7ec04b090ced1 | 2022-01-23T11:36:38.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | mikeee | null | mikeee/model-zs | 5 | null | transformers | 16,714 | Entry not found |
milyiyo/multi-minilm-finetuned-amazon-review | cb5cf17e58a9e28b42e4cf73b511abff6c850ac7 | 2022-01-16T22:53:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | milyiyo | null | milyiyo/multi-minilm-finetuned-amazon-review | 5 | null | transformers | 16,715 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: multi-minilm-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5422
- name: F1
type: f1
value: 0.543454465221178
- name: Precision
type: precision
value: 0.5452336215624385
- name: Recall
type: recall
value: 0.5422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-minilm-finetuned-amazon-review
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2436
- Accuracy: 0.5422
- F1: 0.5435
- Precision: 0.5452
- Recall: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0049 | 1.0 | 2500 | 1.0616 | 0.5352 | 0.5268 | 0.5347 | 0.5352 |
| 0.9172 | 2.0 | 5000 | 1.0763 | 0.5432 | 0.5412 | 0.5444 | 0.5432 |
| 0.8285 | 3.0 | 7500 | 1.1077 | 0.5408 | 0.5428 | 0.5494 | 0.5408 |
| 0.7361 | 4.0 | 10000 | 1.1743 | 0.5342 | 0.5399 | 0.5531 | 0.5342 |
| 0.6538 | 5.0 | 12500 | 1.2436 | 0.5422 | 0.5435 | 0.5452 | 0.5422 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ml6team/robbert-dutch-base-toxic-comments | 0cc82d682443fc2502fa1687656c116a387d737d | 2022-01-20T07:57:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:apache-2.0"
] | text-classification | false | ml6team | null | ml6team/robbert-dutch-base-toxic-comments | 5 | 4 | transformers | 16,716 | ---
language:
- nl
tags:
- text-classification
- pytorch
widget:
- text: "Ik heb je lief met heel mijn hart"
example_title: "Non toxic comment 1"
- text: "Dat is een goed punt, zo had ik het nog niet bekeken."
example_title: "Non toxic comment 2"
- text: "Wat de fuck zei je net tegen me, klootzak?"
example_title: "Toxic comment 1"
- text: "Rot op, vuile hoerenzoon."
example_title: "Toxic comment 2"
license: apache-2.0
metrics:
- Accuracy, F1 Score, Recall, Precision
---
# RobBERT-dutch-base-toxic-comments
## Model description:
This model was created with the purpose of detecting toxic or potentially harmful comments.
For this model, we fine-tuned a Dutch RoBERTa-based model called [RobBERT](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on the translated [Jigsaw Toxicity dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The original dataset was translated using the appropriate [MarianMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
The model was trained for 2 epochs, on 90% of the dataset, with the following arguments:
```
training_args = TrainingArguments(
learning_rate=1e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
gradient_accumulation_steps=6,
load_best_model_at_end=True,
metric_for_best_model="recall",
epochs=2,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=10,
logging_steps=100,
eval_steps=250,
save_steps=250,
weight_decay=0.001,
report_to="wandb")
```
## Model Performance:
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| --- | --- | --- | --- |
| 95.63 | 78.80 | 78.99 | 78.61 |
## Dataset:
Unfortunately we cannot open-source the dataset, since we are bound by the underlying Jigsaw license.
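## Usage:
The card above does not include an inference snippet; the sketch below is my own, assuming the standard `text-classification` pipeline applies to this checkpoint:
```python
from transformers import pipeline

# Hedged sketch: plain text-classification pipeline, not an official example.
classifier = pipeline(
    "text-classification",
    model="ml6team/robbert-dutch-base-toxic-comments",
)

# One of the widget examples from the model card metadata above.
print(classifier("Wat de fuck zei je net tegen me, klootzak?"))
```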
|
mmcquade11/autonlp-imdb-test-21134442 | 373477ad379a424dd616415ea679cffbe8097606 | 2021-10-18T20:16:41.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mmcquade11/autonlp-data-imdb-test",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | mmcquade11 | null | mmcquade11/autonlp-imdb-test-21134442 | 5 | null | transformers | 16,717 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-imdb-test
co2_eq_emissions: 298.7849611952843
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21134442
- CO2 Emissions (in grams): 298.7849611952843
## Validation Metrics
- Loss: 0.21618066728115082
- Accuracy: 0.9393
- Precision: 0.9360730593607306
- Recall: 0.943
- AUC: 0.98362804
- F1: 0.9395237620803029
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134442
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134442", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134442", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
mmoradi/Robust-Biomed-RoBERTa-TextualInference | 955a62e817848392b3d4bb271ef22dbe30487230 | 2021-10-07T13:42:06.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mmoradi | null | mmoradi/Robust-Biomed-RoBERTa-TextualInference | 5 | null | transformers | 16,718 | Entry not found |
mofawzy/BERT-ASTD | e0b1163d3000957431d094c2d0669a6ff6abe193 | 2022-02-18T22:49:39.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"dataset:ASTD",
"transformers",
"ASTD"
] | text-classification | false | mofawzy | null | mofawzy/BERT-ASTD | 5 | 1 | transformers | 16,719 | ---
language:
- ar
datasets:
- ASTD
tags:
- ASTD
widget:
- text: "العنف والقتل في محيط العالم في زياده يوميا"
- text: "الصداقه تزرع الحياه ازهارا"
---
# BERT-ASTD Balanced
An Arabic BERT model fine-tuned on the balanced version of the ASTD dataset to identify Twitter sentiment in Arabic (MSA dialect).
## Data
The model was fine-tuned on ~1330 tweets in Arabic.
## Results
| class | precision | recall | f1-score | Support |
|----------|-----------|--------|----------|---------|
| 0 | 0.9328 | 0.9398 | 0.9363 | 133 |
| 1 | 0.9394 | 0.9323 | 0.9358 | 133 |
| Accuracy | | | 0.9361 | 266 |
## How to use
You can use these models by installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, and then initializing the model like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name="mofawzy/BERT-ASTD"
model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
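As a follow-up sketch (my own, not from the original card), the loaded model and tokenizer can be applied to one of the example tweets from the widget above, assuming standard sequence-classification inference:
```python
import torch

# Hedged sketch: class indices 0/1 follow the results table above.
inputs = tokenizer("الصداقه تزرع الحياه ازهارا", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```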
|
mofawzy/Bert-hard-balanced | 1cfc9dc5156c0f2240b764a0595a3b587e0156f1 | 2022-02-18T23:29:24.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:HARD",
"transformers",
"HARD"
] | text-classification | false | mofawzy | null | mofawzy/Bert-hard-balanced | 5 | 1 | transformers | 16,720 | ---
language:
- ar
datasets:
- HARD
tags:
- HARD
widget:
- text: "جيد. المكان جميل وهاديء. كل شي جيد ونظيف"
- text: "استغرب تقييم الفندق كخمس نجوم”. لا شي. يستحق"
---
# BERT-HARD Balanced
An Arabic BERT model fine-tuned on the balanced version of the Hotel Arabic Reviews Dataset (HARD) from booking.com to identify sentiment in Arabic reviews.
## Data
The model was fine-tuned on ~93000 hotel reviews in Arabic using BERT large Arabic.
Dataset:
- Train 70%
- Validation: 10%
- Test: 20%
## Results
| class | precision | recall | f1-score | Support |
|----------|-----------|--------|----------|---------|
| 0 | 0.9733 | 0.9547 | 0.9639 | 10570 |
| 1 | 0.9555 | 0.9738 | 0.9646 | 10570 |
| Accuracy | | | 0.9642 | 21140 |
## How to use
You can use these models by installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, and then initializing the model like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name="mofawzy/Bert-hard-balanced"
model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
``` |
mofawzy/arbert-goodreads | 2894639541306e3d28c48d04fbd4adf6e5290606 | 2021-12-05T04:48:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mofawzy | null | mofawzy/arbert-goodreads | 5 | null | transformers | 16,721 | Entry not found |
mohsenfayyaz/bert-base-uncased-offenseval2019-upsample | 672b9c1d188c49e87311e9794acd0b73ae55d634 | 2021-05-19T23:42:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-uncased-offenseval2019-upsample | 5 | null | transformers | 16,722 | Entry not found |
mohsenfayyaz/bert-base-uncased-offenseval2019 | 32d9b8221d6bb1cfc14d867b4fa200ff2c0742b8 | 2021-05-19T23:43:36.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/bert-base-uncased-offenseval2019 | 5 | null | transformers | 16,723 | Entry not found |
mohsenfayyaz/roberta-base-toxicity | 60f207a822d82c823c1169164b7404f88d006c49 | 2021-05-20T17:59:08.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mohsenfayyaz | null | mohsenfayyaz/roberta-base-toxicity | 5 | null | transformers | 16,724 | Entry not found |
monsoon-nlp/byt5-base-dv | a9fee4d5ad49fad2b8f36ac419b19729a53a8e01 | 2021-07-09T23:32:01.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"dv",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | monsoon-nlp | null | monsoon-nlp/byt5-base-dv | 5 | null | transformers | 16,725 | ---
language: dv
---
# byt5-base-dv
Pretrained from scratch on Dhivehi (the language of the Maldives)
with ByT5, Google's new byte-level tokenizer strategy.
**Use byt5-dv for now; this is less accurate**
Corpus: Sofwath's Dhivehi corpus https://github.com/Sofwath/DhivehiDatasets
Pretraining Notebook:
https://colab.research.google.com/drive/1ERIZ1PyHn-yN_jo7dTQeODn22vrt-d1d?usp=sharing
## Fine-tuning Demo
On Dhivehi news classification task
https://colab.research.google.com/drive/11u5SafR4bKICmArgDl6KQ9vqfYtDpyWp?usp=sharing
## Issues
There was an issue with the vocabulary size, final layer, and/or accuracy on fine-tuning.
|
monsoon-nlp/gpt-nyc-nontoxic | cca94e0cab06fb987a2d3e474d1e8fe686b2eee8 | 2021-08-09T02:04:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | monsoon-nlp | null | monsoon-nlp/gpt-nyc-nontoxic | 5 | null | transformers | 16,726 | # GPT-NYC-nontoxic
## About
GPT2 (small version on HF) fine-tuned on questions and responses from https://reddit.com/r/asknyc
I filtered comments to ones with scores >= 3, and responding directly
to the original post ( = ignoring responses to other commenters).
I also added many tokens which were common on /r/AskNYC but missing from
GPT2.
Additional <Toxic> and <NonToxic> tokens control the following output.
Toxic comments (about 5.5% of input data) are those which were flagged
by [Perspective API](https://developers.perspectiveapi.com) with toxicity > 0.7,
or by [English DeHateBERT](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-english),
with <NonToxic> tagging for all comments related to LGBTQ identity
to avoid false positives / more aggressive censorship from these classifiers.
Try prompting with ```question? - additional info %% <Toxic> ```
Or ```question? - additional info %% <NonToxic>```
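A minimal generation sketch (my own, not from the notebooks below), assuming the usual GPT-2 `text-generation` pipeline and the prompt format above; the question itself is a made-up example:
```python
from transformers import pipeline

# Hedged sketch: standard text-generation pipeline with the <NonToxic> control token.
generator = pipeline("text-generation", model="monsoon-nlp/gpt-nyc-nontoxic")

prompt = "Where can I get a good bagel? - first time visiting %% <NonToxic>"  # hypothetical query
output = generator(prompt, max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```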
## Other options
The [gpt-nyc-small](https://huggingface.co/monsoon-nlp/gpt-nyc-small) repo is based
on GPT2 [small] but without the <Toxic> and <NonToxic> tags. It is the most
directly comparable model to this one.
The main [gpt-nyc](https://huggingface.co/monsoon-nlp/gpt-nyc) repo is based
on GPT2-Medium and comes off more accurate. It does not have Toxic/NonToxic tagging.
## Blog
Initial model: https://mapmeld.medium.com/gpt-nyc-part-1-9cb698b2e3d
## Notebooks
### Data processing / new tokens
https://colab.research.google.com/drive/13BOw0uekoAYB4jjQtaXTn6J_VHatiRLu
### Fine-tuning GPT2 (small)
https://colab.research.google.com/drive/1FnXcAh4H-k8dAzixkV5ieygV96ePh3lR
### Predictive text and probabilities
Scroll to end of
https://colab.research.google.com/drive/1FnXcAh4H-k8dAzixkV5ieygV96ePh3lR
to see how to install git-lfs and trick ecco into loading this.
|
moussaKam/frugalscore_medium_deberta_bert-score | e998607b03da2cdd96d96a0670defc2fa89aac51 | 2022-02-01T10:51:45.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"transformers"
] | text-classification | false | moussaKam | null | moussaKam/frugalscore_medium_deberta_bert-score | 5 | null | transformers | 16,727 | # FrugalScore
FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper :
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
moussaKam/frugalscore_small_bert-base_bert-score | 73441c494f4ad4c211e78c200c0b3ef7c5e73609 | 2022-02-01T10:50:31.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"transformers"
] | text-classification | false | moussaKam | null | moussaKam/frugalscore_small_bert-base_bert-score | 5 | null | transformers | 16,728 | # FrugalScore
FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper :
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
moussaKam/frugalscore_tiny_deberta_bert-score | 267c17bec0ada8743db19f8a50d057ae63af3151 | 2022-02-01T10:51:30.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"transformers"
] | text-classification | false | moussaKam | null | moussaKam/frugalscore_tiny_deberta_bert-score | 5 | null | transformers | 16,729 | # FrugalScore
FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper :
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
mrm8488/convbert-small-spanish | 3dd1c2e8c297391c338ec14a3b427056cb14d75b | 2021-07-20T19:11:54.000Z | [
"pytorch",
"tf",
"convbert",
"feature-extraction",
"es",
"dataset:large_spanish_corpus",
"arxiv:2008.02496",
"transformers",
"license:mit"
] | feature-extraction | false | mrm8488 | null | mrm8488/convbert-small-spanish | 5 | 1 | transformers | 16,730 | ---
language: es
datasets:
- large_spanish_corpus
license: mit
---
# ConvBERT small pre-trained on large_spanish_corpus
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
## Metrics on evaluation set
```
disc_accuracy = 0.95163906
disc_auc = 0.9405496
disc_loss = 0.13658184
disc_precision = 0.80829453
disc_recall = 0.49316448
global_step = 1000000
loss = 9.12079
masked_lm_accuracy = 0.53505784
masked_lm_loss = 2.3028736
sampled_masked_lm_accuracy = 0.44047198
```
## Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "mrm8488/convbert-small-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain |
mrm8488/deberta-v3-small-goemotions | 25552893e88a2ba0918e763a66207f8edd4554ce | 2021-12-28T23:12:12.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/deberta-v3-small-goemotions | 5 | null | transformers | 16,731 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: deberta-v3-small-goemotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-goemotions
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5638
- F1: 0.4241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.614 | 1.0 | 3082 | 1.5577 | 0.3663 |
| 1.4338 | 2.0 | 6164 | 1.5580 | 0.4084 |
| 1.2936 | 3.0 | 9246 | 1.5006 | 0.4179 |
| 1.1531 | 4.0 | 12328 | 1.5348 | 0.4276 |
| 1.0536 | 5.0 | 15410 | 1.5638 | 0.4241 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mrm8488/distilbert-multi-finedtuned-squad-pt | 9cfb67374c33d73e9abbeaae7e41985ce218642b | 2020-05-23T07:23:36.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/distilbert-multi-finedtuned-squad-pt | 5 | null | transformers | 16,732 | Entry not found |
mrm8488/electricidad-base-finetuned-pawsx-es | bcdcdcd3df46c25a2007f8bb853a9189fcdbc12a | 2021-04-28T15:52:25.000Z | [
"pytorch",
"electra",
"text-classification",
"es",
"dataset:xtreme",
"transformers",
"nli"
] | text-classification | false | mrm8488 | null | mrm8488/electricidad-base-finetuned-pawsx-es | 5 | 1 | transformers | 16,733 | ---
language: es
datasets:
- xtreme
tags:
- nli
widget:
- text: "El río Tabaci es una vertiente del río Leurda en Rumania. El río Leurda es un afluente del río Tabaci en Rumania."
---
# Electricidad-base fine-tuned on PAWS-X-es for Paraphrase Identification (NLI)
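The card stops at the title; a minimal usage sketch (my own, assuming the widget's single-string sentence-pair format shown in the metadata above) could look like this:
```python
from transformers import pipeline

# Hedged sketch: standard text-classification pipeline over a concatenated sentence pair.
classifier = pipeline(
    "text-classification",
    model="mrm8488/electricidad-base-finetuned-pawsx-es",
)

# Both sentences in one string, as in the widget example above.
text = ("El río Tabaci es una vertiente del río Leurda en Rumania. "
        "El río Leurda es un afluente del río Tabaci en Rumania.")
print(classifier(text))  # label indicating whether the sentences are paraphrases
```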
|
mrm8488/es-tinybert-v1 | d74022048982a717c651bec56935e653412f60ea | 2021-05-20T00:47:48.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | mrm8488 | null | mrm8488/es-tinybert-v1 | 5 | null | transformers | 16,734 | Entry not found |
mrm8488/gpt2-imdb-neutral | 2424eeb2546306857174670e99b2bc9bc8b37e22 | 2021-08-07T07:15:04.000Z | [
"pytorch",
"gpt2",
"en",
"dataset:imdb",
"transformers",
"GPT-2",
"license:mit"
] | null | false | mrm8488 | null | mrm8488/gpt2-imdb-neutral | 5 | 1 | transformers | 16,735 | ---
language: en
tags:
- GPT-2
datasets:
- imdb
widget:
- text: "I think the movie was "
license: mit
---
# GPT2-IMDB-neutral (LM + RL) 🎞😐✍
## What is it?
A small GPT2 (`lvwerra/gpt2-imdb`) language model fine-tuned to produce **neutral**-ish movie reviews based on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). The model is trained with rewards from a BERT sentiment classifier (`lvwerra/gpt2-imdb`) via **PPO**.
## Why?
After reproducing the experiment [lvwerra/gpt2-imdb-pos](https://huggingface.co/lvwerra/gpt2-imdb-pos) but for generating **negative** movie reviews ([mrm8488/gpt2-imdb-neg](https://huggingface.co/mrm8488/gpt2-imdb-neg)), I wanted to check if I could generate neutral-ish movie reviews. Based on the classifier output (logit), I saw that clearly negative reviews give values around *-4* and clearly positive reviews around *4*. It was then easy to establish an interval ```[-1.75,1.75]``` that could be considered **neutral**. So if the classifier output was in that interval the sample got a positive reward, while values outside the interval got a negative reward.
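A tiny sketch of that reward rule (my own illustration, not the exact training code):
```python
def neutrality_reward(classifier_logit: float, band: float = 1.75) -> float:
    # Logits inside [-1.75, 1.75] are treated as "neutral" and rewarded;
    # clearly positive/negative reviews sit around +/-4 and get penalised.
    # The reward magnitudes here are illustrative; the card only states positive vs. negative.
    return 1.0 if -band <= classifier_logit <= band else -1.0
```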
## Training setting
The model was trained for `100` optimisation steps with a batch size of `128`, which corresponds to `30000` training samples. The full experiment setup (for the positive-sentiment variant) can be found in the [trl repo](https://lvwerra.github.io/trl/04-gpt2-sentiment-ppo-training/).
## Examples
A few examples of the model response to a query before and after optimisation:
| query | response (before) | response (after) | rewards (before) | rewards (after) |
|-------|-------------------|------------------|------------------|-----------------|
|Okay, my title is|partly over, but this drama still makes me proud to read its first 40...|weird. The title is "mana were, ahunter". "Man...|4.200727 |-1.891443|
|Where is it written that|there is a monster in this movie anyway? How is it that the entire|[ of the women in the recent women of jungle business between Gender and husband| -3.113942| -1.944993|
|As a lesbian, I|cannot believe I was in the Sixties! Subtle yet witty, with original| found it hard to get responsive. In fact I found myself with the long|3.906178 |0.769166|
|The Derek's have over|three times as many acting hours than Jack Nicholson? You think bitches?|30 dueling characters and kill of, they retreat themselves to their base.|-2.503655| -1.898380|
> All credits to [@lvwerra](https://twitter.com/lvwerra)
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa | e2b6bdef50c3bc856fcf569e933e34f6b2b2cdd2 | 2020-06-10T17:24:44.000Z | [
"pytorch",
"tensorboard",
"masked_bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa | 5 | null | transformers | 16,736 | Entry not found |
mrm8488/t5-base-finetuned-disaster-tweets | afd35e68766022e528206b268439bcf7b9834c21 | 2021-06-23T12:44:56.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-disaster-tweets | 5 | null | transformers | 16,737 | Entry not found |
mrp/simcse-model-m-bert-thai-cased | 1a309049e15ee54a2e63ee82f556e7db846c5546 | 2021-10-05T05:48:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | mrp | null | mrp/simcse-model-m-bert-thai-cased | 5 | null | sentence-transformers | 16,738 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mrp/simcse-model-m-bert-thai-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use the SimCSE approach [described here](https://arxiv.org/pdf/2104.08821.pdf), with mBERT as the base model, trained on Thai Wikipedia [available here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('mrp/simcse-model-m-bert-thai-cased')
embeddings = model.encode(sentences)
print(embeddings)
``` |
msavel-prnt/distilbert-base-uncased-finetuned-clinc | 83a0de4ca19608c9ca7340990581e58855b8fd60 | 2022-01-05T15:37:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | msavel-prnt | null | msavel-prnt/distilbert-base-uncased-finetuned-clinc | 5 | null | transformers | 16,739 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metric:
name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.9181
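A minimal inference sketch (assuming the standard `text-classification` pipeline works for this checkpoint):
```python
from transformers import pipeline

# Hypothetical usage example; the query sentence is illustrative only.
classifier = pipeline("text-classification", model="msavel-prnt/distilbert-base-uncased-finetuned-clinc")
print(classifier("Please transfer 100 dollars from checking to savings"))
```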
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3044 | 0.7623 |
| 3.7959 | 2.0 | 636 | 1.8674 | 0.8597 |
| 3.7959 | 3.0 | 954 | 1.1377 | 0.8948 |
| 1.6819 | 4.0 | 1272 | 0.8351 | 0.9126 |
| 0.8804 | 5.0 | 1590 | 0.7528 | 0.9181 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
mujerry/bert-base-uncased-finetuned-QnA | 4680faa1663470dc2c888b54ab495fba443be7eb | 2021-07-27T13:30:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | mujerry | null | mujerry/bert-base-uncased-finetuned-QnA | 5 | null | transformers | 16,740 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-uncased-finetuned-QnA
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-QnA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0604
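A minimal inference sketch (assuming the standard `fill-mask` pipeline works for this checkpoint):
```python
from transformers import pipeline

# Hypothetical usage example; the input sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="mujerry/bert-base-uncased-finetuned-QnA")
print(fill_mask("The quick brown fox jumps over the lazy [MASK]."))
```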
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 3.4894 |
| No log | 2.0 | 40 | 3.5654 |
| No log | 3.0 | 60 | 3.3185 |
| No log | 4.0 | 80 | 3.2859 |
| No log | 5.0 | 100 | 3.2947 |
| No log | 6.0 | 120 | 3.3998 |
| No log | 7.0 | 140 | 3.1642 |
| No log | 8.0 | 160 | 3.2653 |
| No log | 9.0 | 180 | 3.3427 |
| No log | 10.0 | 200 | 3.3549 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
nadzma/finetuned-T5-UK-financial-summarization | af2fcae2f73d40c35e290c650d3e9081552fad23 | 2022-01-17T17:33:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nadzma | null | nadzma/finetuned-T5-UK-financial-summarization | 5 | null | transformers | 16,741 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: finetuned-T5-UK-financial-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-T5-UK-financial-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3276
- Rouge1: 49.5834
- Rouge2: 34.5668
- Rougel: 37.6179
- Rougelsum: 46.0004
- Gen Len: 490.0579
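A minimal inference sketch (assuming the standard `summarization` pipeline works for this checkpoint; the length settings are illustrative only):
```python
from transformers import pipeline

# Hypothetical usage example; replace the placeholder text with a UK financial report passage.
summarizer = pipeline("summarization", model="nadzma/finetuned-T5-UK-financial-summarization")
report = "..."  # placeholder input
print(summarizer(report, max_length=150, min_length=30)[0]["summary_text"])
```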
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.8064 | 0.67 | 1000 | 0.4376 | 5.8905 | 3.8332 | 5.1982 | 5.5451 | 19.0 |
| 0.486 | 1.34 | 2000 | 0.3717 | 6.3056 | 4.3892 | 5.6589 | 6.0154 | 19.0 |
| 0.4492 | 2.01 | 3000 | 0.3427 | 6.4831 | 4.595 | 5.809 | 6.1753 | 19.0 |
| 0.4138 | 2.68 | 4000 | 0.3362 | 6.4667 | 4.5705 | 5.8081 | 6.1579 | 19.0 |
| 0.3697 | 3.34 | 5000 | 0.3284 | 6.5319 | 4.6032 | 5.8458 | 6.2253 | 19.0 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.7.0
- Datasets 1.17.0
- Tokenizers 0.11.0
|
naram92/distilbert-base-uncased-finetuned-ner | 4cdd420e27abe8cad0fe5da9711ca74326d490e4 | 2021-09-28T16:03:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | naram92 | null | naram92/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,742 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9255649091714665
- name: Recall
type: recall
value: 0.9347801767535519
- name: F1
type: f1
value: 0.9301497189291478
- name: Accuracy
type: accuracy
value: 0.9837164598789457
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Precision: 0.9256
- Recall: 0.9348
- F1: 0.9301
- Accuracy: 0.9837
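A minimal inference sketch (assuming the standard `token-classification` pipeline works for this checkpoint):
```python
from transformers import pipeline

# Hypothetical usage example; the input sentence is illustrative only.
ner = pipeline("token-classification",
               model="naram92/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```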
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2414 | 1.0 | 878 | 0.0702 | 0.9097 | 0.9200 | 0.9148 | 0.9804 |
| 0.0521 | 2.0 | 1756 | 0.0609 | 0.9190 | 0.9327 | 0.9258 | 0.9828 |
| 0.0308 | 3.0 | 2634 | 0.0613 | 0.9256 | 0.9348 | 0.9301 | 0.9837 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
nateraw/resnet50 | 74b8ab5d279c8073d535c0a98b64f9089ff59780 | 2021-04-15T23:19:34.000Z | [
"pytorch",
"resnet",
"dataset:imagenet",
"transformers",
"image-classification"
] | image-classification | false | nateraw | null | nateraw/resnet50 | 5 | null | transformers | 16,743 | ---
tags:
- image-classification
- pytorch
datasets:
- imagenet
---
# Resnet50 Model from Torchvision
## Using the model
```
pip install modelz
```
```python
import torch
from modelz import ResnetModel
model = ResnetModel.from_pretrained('nateraw/resnet50')
ex_input = torch.rand(4, 3, 224, 224)
out = model(ex_input)
``` |
ncats/EpiClassify4GARD | 064c51a2f38860013d4d85ce5c3ddaa50f800051 | 2022-02-12T19:10:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:other"
] | text-classification | false | ncats | null | ncats/EpiClassify4GARD | 5 | null | transformers | 16,744 | ---
license: other
---
## Model Documentation in progress
|
ncduy/distilbert-base-uncased-finetuned-ner | c68a1aebb656db6da5a920ae6a2fed30445cc508 | 2021-08-06T15:24:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | ncduy | null | ncduy/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,745 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9839547555880344
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9270
- Recall: 0.9377
- F1: 0.9323
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2403 | 1.0 | 878 | 0.0683 | 0.9177 | 0.9215 | 0.9196 | 0.9815 |
| 0.0513 | 2.0 | 1756 | 0.0605 | 0.9227 | 0.9365 | 0.9295 | 0.9836 |
| 0.0298 | 3.0 | 2634 | 0.0612 | 0.9270 | 0.9377 | 0.9323 | 0.9840 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
neuralspace-reverie/indic-transformers-te-bert | 2c6b0ac7cb5d20034ec372cde7699a40a972300f | 2021-05-20T01:37:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"te",
"transformers",
"MaskedLM",
"Telugu",
"BERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-te-bert | 5 | null | transformers | 16,746 | ---
language:
- te
tags:
- MaskedLM
- Telugu
- BERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu BERT
## Model description
This is a BERT language model pre-trained on ~1.6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-bert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-bert')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model has been trained using `PyTorch` and hence the use of `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
neuralspace-reverie/indic-transformers-te-xlmroberta | 6ad46592e98e80d6bfb49b738e8c5a94b0bc0ae2 | 2020-12-11T21:57:43.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"te",
"transformers",
"MaskedLM",
"Telugu",
"XLMRoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-te-xlmroberta | 5 | null | transformers | 16,747 | ---
language:
- te
tags:
- MaskedLM
- Telugu
- XLMRoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Telugu XLMRoBERTa
## Model description
This is a XLMRoBERTa language model pre-trained on ~1.6 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-te-xlmroberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-te-xlmroberta')
text = "మీరు ఎలా ఉన్నారు"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model has been trained using `PyTorch` and hence the use of `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
new5558/chula-course-paraphrase-multilingual-mpnet-base-v2 | 028413d7b6315af5caa13882631755fc2e4116e4 | 2021-12-21T21:26:18.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | new5558 | null | new5558/chula-course-paraphrase-multilingual-mpnet-base-v2 | 5 | null | sentence-transformers | 16,748 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# new5558/chula-course-paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('new5558/chula-course-paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('new5558/chula-course-paraphrase-multilingual-mpnet-base-v2')
model = AutoModel.from_pretrained('new5558/chula-course-paraphrase-multilingual-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=new5558/chula-course-paraphrase-multilingual-mpnet-base-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 49314 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "__main__.EmbeddingSimilarityOptimizedEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
niclas/model_en | 577d815d8f07a8c9115fde3f38b369671b201887 | 2021-12-08T09:45:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | niclas | null | niclas/model_en | 5 | null | transformers | 16,749 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: model_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_en
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8610
- Wer: 0.2641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 6.3443 | 3.05 | 250 | 3.0966 | 1.0 |
| 2.9847 | 6.1 | 500 | 3.0603 | 1.0 |
| 2.9263 | 9.15 | 750 | 2.9131 | 1.0 |
| 2.2584 | 12.19 | 1000 | 1.4318 | 0.6575 |
| 1.2603 | 15.24 | 1250 | 1.1964 | 0.4994 |
| 0.9182 | 18.29 | 1500 | 1.1494 | 0.4485 |
| 0.7462 | 21.34 | 1750 | 1.2171 | 0.4357 |
| 0.6129 | 24.39 | 2000 | 1.0557 | 0.3468 |
| 0.5364 | 27.44 | 2250 | 1.1069 | 0.4222 |
| 0.4607 | 30.48 | 2500 | 1.3270 | 0.3370 |
| 0.4139 | 33.53 | 2750 | 1.1814 | 0.3658 |
| 0.3587 | 36.58 | 3000 | 1.2423 | 0.3419 |
| 0.321 | 39.63 | 3250 | 1.2931 | 0.3211 |
| 0.2961 | 42.68 | 3500 | 1.1409 | 0.3315 |
| 0.2635 | 45.73 | 3750 | 1.4537 | 0.3241 |
| 0.2498 | 48.78 | 4000 | 1.2643 | 0.3192 |
| 0.2352 | 51.82 | 4250 | 1.2789 | 0.3278 |
| 0.2193 | 54.87 | 4500 | 1.4220 | 0.3021 |
| 0.2068 | 57.92 | 4750 | 1.3567 | 0.3713 |
| 0.2055 | 60.97 | 5000 | 1.5375 | 0.3051 |
| 0.198 | 64.02 | 5250 | 1.2676 | 0.2782 |
| 0.1835 | 67.07 | 5500 | 1.3905 | 0.2825 |
| 0.1655 | 70.12 | 5750 | 1.7000 | 0.2978 |
| 0.1677 | 73.17 | 6000 | 1.4250 | 0.2812 |
| 0.1522 | 76.22 | 6250 | 1.4220 | 0.2941 |
| 0.1522 | 79.27 | 6500 | 1.5195 | 0.3021 |
| 0.1344 | 82.32 | 6750 | 1.3749 | 0.2996 |
| 0.1298 | 85.36 | 7000 | 1.6663 | 0.2849 |
| 0.1293 | 88.41 | 7250 | 1.4564 | 0.2892 |
| 0.1264 | 91.46 | 7500 | 1.4373 | 0.2935 |
| 0.1243 | 94.51 | 7750 | 1.6572 | 0.2972 |
| 0.1141 | 97.56 | 8000 | 1.4936 | 0.2892 |
| 0.1086 | 100.61 | 8250 | 1.5231 | 0.2868 |
| 0.1056 | 103.65 | 8500 | 1.3733 | 0.2763 |
| 0.098 | 106.7 | 8750 | 1.4887 | 0.2923 |
| 0.0984 | 109.75 | 9000 | 1.3779 | 0.2923 |
| 0.0916 | 112.8 | 9250 | 1.4868 | 0.2604 |
| 0.0881 | 115.85 | 9500 | 1.7991 | 0.2996 |
| 0.0846 | 118.9 | 9750 | 1.5845 | 0.2849 |
| 0.0861 | 121.95 | 10000 | 1.6684 | 0.2794 |
| 0.0806 | 124.99 | 10250 | 1.5774 | 0.3039 |
| 0.0822 | 128.05 | 10500 | 1.5928 | 0.2886 |
| 0.0788 | 131.1 | 10750 | 1.6158 | 0.2880 |
| 0.0704 | 134.15 | 11000 | 1.7679 | 0.2941 |
| 0.0721 | 137.19 | 11250 | 1.7055 | 0.2629 |
| 0.0723 | 140.24 | 11500 | 1.5473 | 0.2653 |
| 0.0676 | 143.29 | 11750 | 1.8963 | 0.2745 |
| 0.0665 | 146.34 | 12000 | 1.6367 | 0.2739 |
| 0.0618 | 149.39 | 12250 | 1.6757 | 0.2745 |
| 0.0595 | 152.44 | 12500 | 1.5900 | 0.2745 |
| 0.056 | 155.48 | 12750 | 1.5362 | 0.2794 |
| 0.0587 | 158.53 | 13000 | 1.4616 | 0.2684 |
| 0.0519 | 161.58 | 13250 | 1.6867 | 0.2549 |
| 0.0569 | 164.63 | 13500 | 1.8294 | 0.2574 |
| 0.0497 | 167.68 | 13750 | 1.7844 | 0.2868 |
| 0.0531 | 170.73 | 14000 | 1.7564 | 0.2770 |
| 0.0489 | 173.78 | 14250 | 1.5811 | 0.2629 |
| 0.0524 | 176.82 | 14500 | 1.6925 | 0.2684 |
| 0.0431 | 179.87 | 14750 | 1.7236 | 0.2653 |
| 0.0457 | 182.92 | 15000 | 1.7460 | 0.2512 |
| 0.045 | 185.97 | 15250 | 1.8096 | 0.2610 |
| 0.0402 | 189.02 | 15500 | 1.8795 | 0.2635 |
| 0.0529 | 192.07 | 15750 | 1.8310 | 0.2616 |
| 0.0396 | 195.12 | 16000 | 1.8380 | 0.2635 |
| 0.0432 | 198.17 | 16250 | 1.8610 | 0.2641 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nikokons/dialo_transfer_5epo | deb39872bd8b7e51d68a29e40f44eb4dcef54669 | 2021-07-27T12:30:17.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | nikokons | null | nikokons/dialo_transfer_5epo | 5 | null | transformers | 16,750 | # A brief description:
This model uses the open-sourced weights of DialoGPT (microsoft/DialoGPT-small) and is fine-tuned on the PERSONA-CHAT dataset using an augmented input representation and a multi-task learning scheme, further described in the paper "TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents". The model fine-tunes quickly on PERSONA-CHAT, and 5 epochs of training were sufficient. A batch size of 4 with gradients accumulated over 8 iterations is used, resulting in an effective batch size of 32. In addition, the Adam optimizer with a learning rate of 6e-5 is used. |
nlplab/PhishingEmailGeneration | b51c2958c272f9130f92d7409440d02a879ebb00 | 2022-03-18T08:17:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nlplab | null | nlplab/PhishingEmailGeneration | 5 | null | transformers | 16,751 | Entry not found |
nlpunibo/roberta | bc51370623c87567d89814bb24b7075a91201063 | 2021-05-20T18:52:48.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/roberta | 5 | null | transformers | 16,752 | Entry not found |
nreimers/BERT-Medium_L-8_H-512_A-8 | 7de90da3e62fde133518c3c343dec9963d5fe754 | 2021-05-28T11:04:38.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | nreimers | null | nreimers/BERT-Medium_L-8_H-512_A-8 | 5 | null | transformers | 16,753 | This is the BERT-Medium model from Google: https://github.com/google-research/bert#bert. A BERT model with 8 layers, 512 hidden unit size, and 8 attention heads. |
nyu-mll/roberta-base-10M-3 | 2264e7f6d648257ff9ca99f02d31d6b7d66ae88a | 2021-05-20T19:00:36.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-10M-3 | 5 | null | transformers | 16,754 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release the 3 models with the lowest perplexities for each pretraining data size, out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
nyu-mll/roberta-base-1B-3 | 2be1c065ab8ce8d8fa1be97575763088214a855b | 2021-05-20T19:05:43.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nyu-mll | null | nyu-mll/roberta-base-1B-3 | 5 | null | transformers | 16,755 | # RoBERTa Pretrained on Smaller Datasets
We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release the 3 models with the lowest perplexities for each pretraining data size, out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: we combine English Wikipedia and a reproduction of BookCorpus using texts from Smashwords in a ratio of approximately 3:1.
### Hyperparameters and Validation Perplexity
The hyperparameters and validation perplexities corresponding to each model are as follows:
| Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity |
|--------------------------|---------------|------------|-----------|------------|-----------------------|
| [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 |
| [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 |
| [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 |
| [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 |
| [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 |
| [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 |
| [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 |
| [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 |
| [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 |
| [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 |
| [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 |
| [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 |
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P |
|------------|----|----|-----|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M |
| MED-SMALL | 6 | 8 | 512 | 2048 | 45M |
(AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.)
For other hyperparameters, we select:
- Peak Learning rate: 5e-4
- Warmup Steps: 6% of max steps
- Dropout: 0.1
[link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1
[link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2
[link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3
[link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1
[link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2
[link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3
[link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1
[link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2
[link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3
[link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1
[link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2
[link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
|
oigele/awesome_fb_model | a3c396d52104fe12ab8da78b55811403dc730752 | 2021-11-15T10:18:34.000Z | [
"pytorch",
"bart",
"text-classification",
"dataset:multi_nli",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | oigele | null | oigele/awesome_fb_model | 5 | null | transformers | 16,756 | ---
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
widget:
- text: "ETH"
candidate_labels: "Location & Address, Employment, Organizational, Name, Service, Studies, Science"
hypothesis_template: "This is {}."
---
ETH Zeroshot
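A minimal usage sketch mirroring the widget configuration above (assuming the standard `zero-shot-classification` pipeline applies to this checkpoint):
```python
from transformers import pipeline

# Hypothetical usage example; labels and hypothesis template are taken from the widget above.
classifier = pipeline("zero-shot-classification", model="oigele/awesome_fb_model")
labels = ["Location & Address", "Employment", "Organizational", "Name", "Service", "Studies", "Science"]
print(classifier("ETH", candidate_labels=labels, hypothesis_template="This is {}."))
```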
|
okaemon/fortune | 1fe320b901b68bf98010956c6087709854c99370 | 2021-10-04T08:23:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | okaemon | null | okaemon/fortune | 5 | null | transformers | 16,757 | Entry not found |
orisuchy/Descriptive_Classifier | f1eb0010f9d1923b4e24bd453d13f8be6228de24 | 2022-03-06T13:20:02.000Z | [
"pytorch",
"bert",
"text-classification",
"he",
"dataset:orisuchy/Descriptive_Sentences_He",
"transformers",
"Text Classification",
"license:afl-3.0"
] | text-classification | false | orisuchy | null | orisuchy/Descriptive_Classifier | 5 | 2 | transformers | 16,758 | ---
license: afl-3.0
language: "he"
tags:
- Text Classification
widget:
- text: "היער השחור והגדול"
- text: "ואז הוא הלך לטייל בתוך היער השחור והגדול"
datasets:
- orisuchy/Descriptive_Sentences_He
metrics:
- accuracy
- f1
---
# **Descriptive Sentences Classifier**
Based on [AlephBERT](https://huggingface.co/onlplab/alephbert-base) model.
# **Metrics**
[accuracy](https://huggingface.co/metrics/accuracy): 0.813953488372093
</br>
[f1](https://huggingface.co/metrics/f1): 0.8181818181818182
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='orisuchy/Descriptive_Classifier', return_all_scores=True)
outputs = classifier("מסווג חתיך במיוחד")
print(outputs)
"""
Output:
[[
{'label': 'Descriptive', 'score': 0.999764621257782},
{'label': 'Not Descriptive', 'score': 0.00023541577684227377}]]
"""
```
#### Or, if you want only the final class:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='orisuchy/Descriptive_Classifier')
output = classifier("הלכתי אליו הביתה וחיכיתי")
print(output)
"""
Output:
[{'label': 'Not Descriptive', 'score': 0.999901533126831}]
"""
```
Created by Daniel Smotritsky & Ori Suchy
<br>
[GitHub](https://github.com/orisuchy/miniProject_DHU)
<iframe src="https://wandb.ai/orisuchy/huggingface/reports/Shared-panel-22-03-01-15-03-08--VmlldzoxNjI5MjM0?highlightShare" style="border:none;height:1024px;width:100%"></iframe>
|
osanseviero/full-sentence-distillroberta3 | a3046011326b6788322c3d39da362acd597dcba3 | 2022-07-01T13:51:38.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"sentence-transformers",
"causal-lm",
"license:cc-by-sa-4.0",
"sentence-similarity"
] | sentence-similarity | false | osanseviero | null | osanseviero/full-sentence-distillroberta3 | 5 | 1 | sentence-transformers | 16,759 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- causal-lm
license:
- cc-by-sa-4.0
---
# TODO: Name of Model
TODO: Description
## Model Description
TODO: Add relevant content
(0) Base Transformer Type: RobertaModel
(1) Pooling mean
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence"]
model = SentenceTransformer(TODO)
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
# The next step is optional if you want your own pooling function.
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
max_over_time = torch.max(token_embeddings, 1)[0]
return max_over_time
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained(TODO)
model = AutoModel.from_pretrained(TODO)
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## TODO: Training Procedure
## TODO: Evaluation Results
## TODO: Citing & Authors
|
osunlp/ReasonBERT-BERT-base | cb135c20c96f89f80e44b31f29fcdd227d087ea7 | 2021-09-13T05:42:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | osunlp | null | osunlp/ReasonBERT-BERT-base | 5 | null | transformers | 16,760 | Entry not found |
owen99630/riskdt | b4780eeec4fcb6239a490f3e2eb36f33dd4f8b7c | 2021-09-29T16:45:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | owen99630 | null | owen99630/riskdt | 5 | null | transformers | 16,761 | Entry not found |
p208p2002/bart-squad-nqg-hl | 187486625af6462d69b50fcd7738f5cd5758aaf9 | 2021-05-03T03:17:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:squad",
"arxiv:1606.05250",
"arxiv:1705.00106",
"transformers",
"question-generation",
"autotrain_compatible"
] | text2text-generation | false | p208p2002 | null | p208p2002/bart-squad-nqg-hl | 5 | null | transformers | 16,762 | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---
# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
**This is a reproduced version.**
More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?
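A minimal generation sketch (assuming the checkpoint loads with the standard seq2seq auto classes; generation settings are illustrative only):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical usage example; the input follows the highlight format described above.
tokenizer = AutoTokenizer.from_pretrained("p208p2002/bart-squad-nqg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("p208p2002/bart-squad-nqg-hl")
context = "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
input_ids = tokenizer(context, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```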
## Data settings
We report results under two dataset settings, as follows.
### SQuAD
- train: 87599
- validation: 10570
> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877
> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)
## Available models
- BART
- GPT2
- T5
## Experiments
We report scores with the `NQG Scorer`, which is used in SQuAD NQG.
Unless otherwise specified, the model size defaults to "base".
### SQuAD
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64 |
GPT2-HLSQG |49.31 |33.95 |25.41| 19.69 |22.29 |48.82 |
T5-HLSQG |54.29 |39.22 |30.43 |24.26 |25.56 |53.11 |
### SQuAD NQG
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23 |
BART-HLSQG |54.12 |38.19 |28.84 |22.35 |24.55 |51.03 |
GPT2-HLSQG |49.82 |33.69 |24.71 |18.63 |21.90 |47.60 |
T5-HLSQG |53.13 |37.60 |28.62 |22.38 |24.48 |51.20 | |
p208p2002/t5-squad-nqg-hl | 3b1a4c128c826caf1a5cba50c736a7ae067f5a48 | 2021-06-23T13:16:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:squad",
"arxiv:1606.05250",
"arxiv:1705.00106",
"transformers",
"question-generation",
"autotrain_compatible"
] | text2text-generation | false | p208p2002 | null | p208p2002/t5-squad-nqg-hl | 5 | null | transformers | 16,763 | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---
# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
**This is a reproduced version.**
More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?
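A minimal generation sketch (assuming the checkpoint loads with the standard seq2seq auto classes; generation settings are illustrative only):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical usage example; the input follows the highlight format described above.
tokenizer = AutoTokenizer.from_pretrained("p208p2002/t5-squad-nqg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("p208p2002/t5-squad-nqg-hl")
context = "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
output_ids = model.generate(tokenizer(context, return_tensors="pt").input_ids, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```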
## Data settings
We report results under two dataset settings, as follows.
### SQuAD
- train: 87599
- validation: 10570
> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877
> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)
## Available models
- BART
- GPT2
- T5
## Experiments
We report scores with the `NQG Scorer`, which is used in SQuAD NQG.
Unless otherwise specified, the model size defaults to "base".
### SQuAD
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64 |
GPT2-HLSQG |49.31 |33.95 |25.41| 19.69 |22.29 |48.82 |
T5-HLSQG |54.29 |39.22 |30.43 |24.26 |25.56 |53.11 |
### SQuAD NQG
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23 |
BART-HLSQG |54.12 |38.19 |28.84 |22.35 |24.55 |51.03 |
GPT2-HLSQG |49.82 |33.69 |24.71 |18.63 |21.90 |47.60 |
T5-HLSQG |53.13 |37.60 |28.62 |22.38 |24.48 |51.20 | |
pablouribe/beto-copus-supercategories | 57c4a922e5a960fdf18a5557d95aebb714853cbc | 2022-02-01T07:56:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pablouribe | null | pablouribe/beto-copus-supercategories | 5 | null | transformers | 16,764 | Entry not found |
patrickvonplaten/bert-testing | 826d6c54346934868126a2bba2508c1192db0b9d | 2021-05-20T02:17:51.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | patrickvonplaten | null | patrickvonplaten/bert-testing | 5 | null | transformers | 16,765 | Entry not found |
patrickvonplaten/s2t-wav2vec2-large-en-de | 9ff3c047f05ef2fde376450dc270a675674bce01 | 2021-08-25T15:45:42.000Z | [
"pytorch",
"encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/s2t-wav2vec2-large-en-de | 5 | null | transformers | 16,766 | Entry not found |
patrickvonplaten/sew-d-small-100k-ft-timit | 68afae0ff5ac24f0b01458cb7a8a52b98dd231c0 | 2021-10-28T15:26:02.000Z | [
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sew-d-small-100k-ft-timit | 5 | null | transformers | 16,767 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7482
- Wer: 0.7987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7719 | 4.14 | 600 | 2.7815 | 0.9999 |
| 2.6531 | 4.83 | 700 | 2.6375 | 1.0065 |
| 2.6425 | 5.52 | 800 | 2.5602 | 1.0210 |
| 2.3963 | 6.21 | 900 | 2.4665 | 1.0591 |
| 2.1447 | 6.9 | 1000 | 2.2792 | 0.9848 |
| 2.2719 | 7.59 | 1100 | 2.2237 | 0.9465 |
| 2.3629 | 8.28 | 1200 | 2.1058 | 0.8907 |
| 2.0913 | 8.97 | 1300 | 2.0113 | 0.9070 |
| 1.8334 | 9.66 | 1400 | 1.9466 | 0.8177 |
| 1.6608 | 10.34 | 1500 | 1.9217 | 0.8698 |
| 2.2194 | 11.03 | 1600 | 1.9091 | 0.8727 |
| 1.9002 | 11.72 | 1700 | 1.8746 | 0.8332 |
| 1.6268 | 12.41 | 1800 | 1.8782 | 0.7951 |
| 1.6455 | 13.1 | 1900 | 1.8230 | 0.8225 |
| 2.0308 | 13.79 | 2000 | 1.8067 | 0.8560 |
| 1.855 | 14.48 | 2100 | 1.8129 | 0.8177 |
| 1.5901 | 15.17 | 2200 | 1.7891 | 0.8367 |
| 1.4848 | 15.86 | 2300 | 1.7821 | 0.8201 |
| 1.8754 | 16.55 | 2400 | 1.7700 | 0.8137 |
| 1.7975 | 17.24 | 2500 | 1.7795 | 0.8171 |
| 1.5194 | 17.93 | 2600 | 1.7605 | 0.7977 |
| 1.4374 | 18.62 | 2700 | 1.7529 | 0.7978 |
| 1.7498 | 19.31 | 2800 | 1.7522 | 0.8023 |
| 1.7452 | 20.0 | 2900 | 1.7482 | 0.7987 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft | fb0ff0413d411d600bf1073d7440141ee1a8449b | 2021-11-14T16:47:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft | 5 | null | transformers | 16,768 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4179
- Wer: 0.3071
- Cer: 0.0736
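A minimal inference sketch (assuming the standard `automatic-speech-recognition` pipeline works for this checkpoint; the audio path is a placeholder and should point to 16 kHz mono audio):
```python
from transformers import pipeline

# Hypothetical usage example; "sample_tr.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition",
               model="patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft")
print(asr("sample_tr.wav"))
```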
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.7638 | 9.09 | 500 | 0.4763 | 0.5313 | 0.1333 |
| 0.5739 | 18.18 | 1000 | 0.4007 | 0.4357 | 0.1099 |
| 0.4343 | 27.27 | 1500 | 0.3819 | 0.4060 | 0.1012 |
| 0.4401 | 36.36 | 2000 | 0.3991 | 0.3954 | 0.1001 |
| 0.2647 | 45.45 | 2500 | 0.3901 | 0.3689 | 0.0914 |
| 0.2656 | 54.55 | 3000 | 0.4284 | 0.3463 | 0.0852 |
| 0.2586 | 63.64 | 3500 | 0.4084 | 0.3297 | 0.0804 |
| 0.2041 | 72.73 | 4000 | 0.3907 | 0.3193 | 0.0781 |
| 0.4265 | 81.82 | 4500 | 0.4265 | 0.3120 | 0.0755 |
| 0.2041 | 90.91 | 5000 | 0.4240 | 0.3071 | 0.0736 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
persiannlp/mt5-base-parsinlu-multiple-choice | 1c4d79ab004e28273e700563172f7244f3fced84 | 2021-09-23T16:19:55.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"multiple-choice",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-multiple-choice | 5 | null | transformers | 16,769 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is an mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
philschmid/MiniLMv2-L12-H384-emotion | 4f973da95f168987a8418d12436337d1e861a9e8 | 2021-12-06T18:00:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | philschmid | null | philschmid/MiniLMv2-L12-H384-emotion | 5 | null | transformers | 16,770 | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-emotion
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2069
- Accuracy: 0.925
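The card does not include a usage example; a minimal sketch with the `pipeline` API (assuming the repository ships the tokenizer and label mapping that `Trainer` normally saves alongside the weights):
```python
from transformers import pipeline

# the example sentence is illustrative; labels follow the `emotion` dataset (joy, sadness, ...)
classifier = pipeline("text-classification", model="philschmid/MiniLMv2-L12-H384-emotion")
print(classifier("I am so happy the package finally arrived!"))
```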
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8745 | 1.0 | 1000 | 0.6673 | 0.81 |
| 0.3466 | 2.0 | 2000 | 0.2816 | 0.918 |
| 0.2201 | 3.0 | 3000 | 0.2367 | 0.9215 |
| 0.1761 | 4.0 | 4000 | 0.2069 | 0.925 |
| 0.1435 | 5.0 | 5000 | 0.2089 | 0.922 |
| 0.1454 | 6.0 | 6000 | 0.2168 | 0.923 |
| 0.1041 | 7.0 | 7000 | 0.2081 | 0.924 |
| 0.0953 | 8.0 | 8000 | 0.2133 | 0.9245 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
plum/xlm-roberta-large | 7e264a7a3106524d41dd068bc8b08799a034f653 | 2022-01-05T18:19:13.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | plum | null | plum/xlm-roberta-large | 5 | null | transformers | 16,771 | Entry not found |
prajjwal1/ctrl_discovery_flipped_5 | 7f91751af7b03da60025837bdc67dc059163d756 | 2021-04-11T18:28:47.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_flipped_5 | 5 | null | transformers | 16,772 | Entry not found |
pritamdeka/PubMedBert-abstract-cord19-v2 | 988718278f1ce7cb512a07d49cf7b8e935cd3fe2 | 2022-02-07T22:27:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:pritamdeka/cord-19-abstract",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | pritamdeka | null | pritamdeka/PubMedBert-abstract-cord19-v2 | 5 | null | transformers | 16,773 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pritamdeka/cord-19-abstract
metrics:
- accuracy
model-index:
- name: pubmedbert-abstract-cord19
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: pritamdeka/cord-19-abstract
type: pritamdeka/cord-19-abstract
args: fulltext
metrics:
- name: Accuracy
type: accuracy
value: 0.7246798699728464
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBert-abstract-cord19-v2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [pritamdeka/cord-19-abstract](https://huggingface.co/datasets/pritamdeka/cord-19-abstract) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2371
- Accuracy: 0.7247
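Since this is a masked-language model, a hedged usage sketch with the fill-mask pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# assumes the repository contains both the model weights and its WordPiece tokenizer
fill_mask = pipeline("fill-mask", model="pritamdeka/PubMedBert-abstract-cord19-v2")
print(fill_mask("The patient was treated with [MASK] for the infection."))
```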
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.27 | 0.53 | 5000 | 1.2425 | 0.7236 |
| 1.2634 | 1.06 | 10000 | 1.3123 | 0.7141 |
| 1.3041 | 1.59 | 15000 | 1.3583 | 0.7072 |
| 1.3829 | 2.12 | 20000 | 1.3590 | 0.7121 |
| 1.3069 | 2.65 | 25000 | 1.3506 | 0.7154 |
| 1.2921 | 3.18 | 30000 | 1.3448 | 0.7160 |
| 1.2731 | 3.7 | 35000 | 1.3375 | 0.7178 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
professional/DialoGPT-small-joshua | 04029f429b9f13d1793b21699274fd8106072493 | 2021-11-06T11:49:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | professional | null | professional/DialoGPT-small-joshua | 5 | 1 | transformers | 16,774 | ---
tags:
- conversational
---
# Joshua DialoGPT model
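The card does not include a usage example; a minimal chat sketch following the standard DialoGPT pattern (the user turn and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("professional/DialoGPT-small-joshua")
model = AutoModelForCausalLM.from_pretrained("professional/DialoGPT-small-joshua")

# encode one user turn, append the end-of-sequence token, and let the model reply
input_ids = tokenizer.encode("Hey Joshua, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```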
|
projecte-aina/bart-base-ca | 03ed47f969391f49de367c7de3db89b48a2ebd49 | 2022-07-25T06:49:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ca",
"dataset:projecte-aina/catalan_textual_corpus",
"arxiv:2202.06871",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | projecte-aina | null | projecte-aina/bart-base-ca | 5 | null | transformers | 16,775 | ---
language: ca
license: apache-2.0
inference: false
datasets:
- projecte-aina/catalan_textual_corpus
---
# BART-Ca: The monolingual Catalan BART
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
BART-ca is a transformer-based language model for Catalan, pretrained on the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus), a medium-size corpus collected from publicly available corpora and crawlers.
## Intended Uses and Limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## How to Use
Here is how to use this model in PyTorch:
```python
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained('projecte-aina/bart-base-ca')
model = BartModel.from_pretrained('projecte-aina/bart-base-ca')
inputs = tokenizer("Hola, el meu gos és molt bonic", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
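For the text-infilling use mentioned above, a hedged sketch with the conditional-generation head (it assumes the tokenizer exposes BART's default `<mask>` token; the Catalan sentence is illustrative):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('projecte-aina/bart-base-ca')
model = BartForConditionalGeneration.from_pretrained('projecte-aina/bart-base-ca')

# mask a span and let the model reconstruct the sentence
text = f"El meu gos és molt {tokenizer.mask_token}."
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=20, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```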
## Training
### Training Data
As training data, we used the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus), a 1760-million-token web corpus of Catalan built from several sources.
### Training Procedure
#### Tokenization
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) with a vocabulary size of 51,200 tokens.
#### Hyperparameters
The hyperparameters were adapted for [fairseq](https://github.com/facebookresearch/fairseq/blob/main/examples/bart/README.md) from the original BART's paper.
| Hyper-parameter | Value |
|------------------------------------|--------|
| Learning Rate | 5e-4 |
| Learning Rate Decay | Polynomial Decay |
| Warmup Updates | 10000 |
| Batch Size | 2048 |
| Weight Decay | 0.01 |
| Max. Training Updates | 125000 |
## Evaluation
### Variable and Metrics
This model is intended to be fine-tuned for downstream tasks.
### Evaluation Results
This model is intended to be fine-tuned for downstream tasks.
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Funding
This work was funded by MT4All CEF project and the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] |
projecte-aina/roberta-base-ca-cased-te | d888a1b523162a253448bbc955a7882866e28199 | 2022-02-24T08:38:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"ca",
"dataset:projecte-aina/teca",
"arxiv:1907.11692",
"transformers",
"catalan",
"textual entailment",
"teca",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index"
] | text-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-cased-te | 5 | null | transformers | 16,776 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "textual entailment"
- "teca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/teca"
metrics:
- "accuracy"
model-index:
- name: roberta-base-ca-cased-te
results:
- task:
type: text-classification # Required. Example: automatic-speech-recognition
dataset:
type: projecte-aina/teca
name: teca
metrics:
- type: accuracy
value: 0.7912139892578125
widget:
- text: "M'agrades. T'estimo."
- text: "M'agrada el sol i la calor. A la Garrotxa plou molt."
- text: "El llibre va caure per la finestra. El llibre va sortir volant."
- text: "El meu aniversari és el 23 de maig. Faré anys a finals de maig."
---
# Catalan BERTa (RoBERTa-base) finetuned for Textual Entailment.
The **roberta-base-ca-cased-te** is a Textual Entailment (TE) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
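The card does not include a usage snippet; a minimal sketch (the premise/hypothesis pair is taken from the widget examples above, and the label names come from whatever `id2label` mapping is stored with the checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-base-ca-cased-te")
model = AutoModelForSequenceClassification.from_pretrained("projecte-aina/roberta-base-ca-cased-te")

# encode a premise/hypothesis pair, as in the widget examples
inputs = tokenizer("M'agrades.", "T'estimo.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```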
## Datasets
We used the Catalan Textual Entailment dataset [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.
## Evaluation and results
We evaluated the roberta-base-ca-cased-te on the TECA test set against standard multilingual and monolingual baselines:
| Model | TECA (accuracy) |
| ------------|:----|
| BERTa | 79.12 |
| mBERT | 74.78 |
| XLM-RoBERTa | 75.44 |
| WikiBERT-ca | x |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Citing
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
|
proycon/robbert-ner-cased-sonar1-nld | ea6556acbd2a41c118f2baa5a56564b286f22367 | 2021-05-20T19:41:07.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | proycon | null | proycon/robbert-ner-cased-sonar1-nld | 5 | null | transformers | 16,777 | Entry not found |
psyche/kobart-paraphrase-generation | 0cfe47f817d312250f1ea3bfc63f7193daa046aa | 2022-01-17T12:44:49.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | psyche | null | psyche/kobart-paraphrase-generation | 5 | null | transformers | 16,778 | Entry not found |
pszemraj/Ballpark-Trivia-XL | b641d710173ee7c882fb47103f28761c59643146 | 2022-06-13T13:05:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt",
"trivia",
"chatbot",
"license:mit"
] | text-generation | false | pszemraj | null | pszemraj/Ballpark-Trivia-XL | 5 | null | transformers | 16,779 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
- trivia
- chatbot
license: mit
datasets:
- natural questions
widget:
- text: "how many ping-pong balls fit inside a standard 747 jet aeroplane?\nperson beta:\n\n"
example_title: "ping-pong"
- text: "What is the capital of Uganda?\nperson beta:\n\n"
example_title: "geography"
- text: "What is the most popular TV show of all time?\nperson beta:\n\n"
example_title: "pseudo-culture"
- text: "A man pushes his car to a hotel and tells the owner he’s bankrupt. Why?\nperson beta:\n\n"
example_title: "brain teaser"
inference:
parameters:
min_length: 2
max_length: 32
no_repeat_ngram_size: 2
do_sample: False
num_beams: 4
early_stopping: True
repetition_penalty: 2.1
---
# Ballpark Trivia: Size XL
**Check out a demo on HF Spaces [here](https://huggingface.co/spaces/pszemraj/ballpark-trivia).**
Are you frequently asked google-able trivia questions and annoyed by it? Well, this is the model for you! Ballpark Trivia Bot answers any trivia question with something that sounds plausible but is probably not 100% correct. One might say... the answers are in the right ballpark.
This is by far the largest model trained and should be _more_ credible in its answers or at least able to handle more kinds of questions.
```
what is the temperature of dry ice in kelvin
person beta:
194.65 K
```
## Training
This text-generation model is a ~1.5B-parameter GPT-2 XL, first fine-tuned on [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps (**33**/36 layers frozen for the fine-tuning), and then trained for a further 40k steps on a parsed variant of [Natural Questions](https://ai.google.com/research/NaturalQuestions) (with **34**/36 layers frozen for the second fine-tuning) to accidentally create this model.
Note that because the model was originally trained for use in a [chatbot application](https://github.com/pszemraj/ai-msgbot), it uses a named dialogue structure, _i.e. questions are asked by person alpha and answered by person beta_. Even if you don't specify person alpha in the prompt, it should hopefully respond to any question.
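A hedged generation sketch that follows this prompt format (the question is taken from the widget examples, and the decoding settings mirror the inference parameters in the metadata above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pszemraj/Ballpark-Trivia-XL")
model = AutoModelForCausalLM.from_pretrained("pszemraj/Ballpark-Trivia-XL")

# questions are addressed to "person beta", matching the training format
prompt = "What is the capital of Uganda?\nperson beta:\n\n"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 32,
    num_beams=4,
    no_repeat_ngram_size=2,
    repetition_penalty=2.1,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```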
## Example Prompt
- the default examples are not great
- you can type in any trivia question or delete the example and write `what` or `when` in there, and it will generate the rest of the trivia question **and the answer**!
|
pulp/CHILDES-ParentBERTo | 455d6439bbfb595bae6b2bba666d23d75815f832 | 2021-05-20T19:46:06.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pulp | null | pulp/CHILDES-ParentBERTo | 5 | null | transformers | 16,780 | A language model trained on a fill-mask task using all of the North American parents' data in CHILDES.
The parents' data can be found here: https://github.com/xiaomeng-ma/CHILDES
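A hedged usage sketch with the fill-mask pipeline (the example utterance is illustrative, and the `<mask>` token assumes the tokenizer keeps RoBERTa's default special tokens):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pulp/CHILDES-ParentBERTo")
print(fill_mask("Do you want some more <mask>?"))
```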
|
q5530793/bert_finetuning_test | 3257e8b492a6fc638f3db03d14fbd4b39b015550 | 2021-05-20T03:40:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | q5530793 | null | q5530793/bert_finetuning_test | 5 | null | transformers | 16,781 | Entry not found |
qarib/bert-base-qarib60_1970k | 7b3515c11cd977ccb3066d4ceb111b84d75a8d0e | 2021-05-20T03:46:19.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"arxiv:2102.10684",
"transformers",
"tf",
"qarib",
"qarib60_1790k",
"autotrain_compatible"
] | fill-mask | false | qarib | null | qarib/bert-base-qarib60_1970k | 5 | null | transformers | 16,782 | ---
language: ar
tags:
- pytorch
- tf
- qarib
- qarib60_1790k
datasets:
- arabic_billion_words
- open_subtitles
- twitter
metrics:
- f1
widget:
- text: " شو عندكم يا [MASK] ."
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB
The QCRI Arabic and Dialectal BERT (QARiB) model was trained on a collection of ~420 million tweets and ~180 million sentences of text.
The tweets were collected using the Twitter API with the language filter `lang:ar`. The text data is a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
### bert-base-qarib60_1970k
- Data size: 60Gb
- Number of Iterations: 1970k
- Loss: 1.5708898
## Training QARiB
The training of the model has been performed using Google’s original Tensorflow code on Google Cloud TPU v2.
We used a Google Cloud Storage bucket, for persistent storage of training data and models.
See more details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>>from transformers import pipeline
>>>fill_mask = pipeline("fill-mask", model="./models/data60gb_86k")
>>> fill_mask("شو عندكم يا [MASK]")
[{'sequence': '[CLS] شو عندكم يا عرب [SEP]', 'score': 0.0990147516131401, 'token': 2355, 'token_str': 'عرب'},
{'sequence': '[CLS] شو عندكم يا جماعة [SEP]', 'score': 0.051633741706609726, 'token': 2308, 'token_str': 'جماعة'},
{'sequence': '[CLS] شو عندكم يا شباب [SEP]', 'score': 0.046871256083250046, 'token': 939, 'token_str': 'شباب'},
{'sequence': '[CLS] شو عندكم يا رفاق [SEP]', 'score': 0.03598872944712639, 'token': 7664, 'token_str': 'رفاق'},
{'sequence': '[CLS] شو عندكم يا ناس [SEP]', 'score': 0.031996358186006546, 'token': 271, 'token_str': 'ناس'}]
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
[{'sequence': '[CLS] قللي وشفيييك يرحم والديك [SEP]', 'score': 0.4152909517288208, 'token': 9650, 'token_str': 'والديك'},
{'sequence': '[CLS] قللي وشفيييك يرحملي [SEP]', 'score': 0.07663793861865997, 'token': 294, 'token_str': '##لي'},
{'sequence': '[CLS] قللي وشفيييك يرحم حالك [SEP]', 'score': 0.0453166700899601, 'token': 2663, 'token_str': 'حالك'},
{'sequence': '[CLS] قللي وشفيييك يرحم امك [SEP]', 'score': 0.04390475153923035, 'token': 1942, 'token_str': 'امك'},
{'sequence': '[CLS] قللي وشفيييك يرحمونك [SEP]', 'score': 0.027349254116415977, 'token': 3283, 'token_str': '##ونك'}]
>>> fill_mask("وقام المدير [MASK]")
[
{'sequence': '[CLS] وقام المدير بالعمل [SEP]', 'score': 0.0678194984793663, 'token': 4230, 'token_str': 'بالعمل'},
{'sequence': '[CLS] وقام المدير بذلك [SEP]', 'score': 0.05191086605191231, 'token': 984, 'token_str': 'بذلك'},
{'sequence': '[CLS] وقام المدير بالاتصال [SEP]', 'score': 0.045264165848493576, 'token': 26096, 'token_str': 'بالاتصال'},
{'sequence': '[CLS] وقام المدير بعمله [SEP]', 'score': 0.03732728958129883, 'token': 40486, 'token_str': 'بعمله'},
{'sequence': '[CLS] وقام المدير بالامر [SEP]', 'score': 0.0246378555893898, 'token': 29124, 'token_str': 'بالامر'}
]
>>> fill_mask("وقامت المديرة [MASK]")
[{'sequence': '[CLS] وقامت المديرة بذلك [SEP]', 'score': 0.23992691934108734, 'token': 984, 'token_str': 'بذلك'},
{'sequence': '[CLS] وقامت المديرة بالامر [SEP]', 'score': 0.108805812895298, 'token': 29124, 'token_str': 'بالامر'},
{'sequence': '[CLS] وقامت المديرة بالعمل [SEP]', 'score': 0.06639821827411652, 'token': 4230, 'token_str': 'بالعمل'},
{'sequence': '[CLS] وقامت المديرة بالاتصال [SEP]', 'score': 0.05613093823194504, 'token': 26096, 'token_str': 'بالاتصال'},
{'sequence': '[CLS] وقامت المديرة المديرة [SEP]', 'score': 0.021778125315904617, 'token': 41635, 'token_str': 'المديرة'}]
```
## Training procedure
The training of the model has been performed using Google’s original Tensorflow code on eight core Google Cloud TPU v2.
We used a Google Cloud Storage bucket, for persistent storage of training data and models.
## Eval results
We evaluated QARiB models on five NLP downstream task:
- Sentiment Analysis
- Emotion Detection
- Named-Entity Recognition (NER)
- Offensive Language Detection
- Dialect Identification
The results obtained from QARiB models outperforms multilingual BERT/AraBERT/ArabicBERT.
## Model Weights and Vocab Download
From Huggingface site: https://huggingface.co/qarib/bert-base-qarib60_1970k
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
qarib/bert-base-qarib_far_6500k | 19aae048a31b9cdb1b7323598e7198c3713cedf7 | 2021-04-21T13:41:11.000Z | [
"pytorch",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"dataset:Farasa",
"arxiv:2102.10684",
"transformers",
"tf",
"QARiB",
"qarib"
] | null | false | qarib | null | qarib/bert-base-qarib_far_6500k | 5 | null | transformers | 16,783 | ---
language: ar
tags:
- pytorch
- tf
- QARiB
- qarib
datasets:
- arabic_billion_words
- open_subtitles
- twitter
- Farasa
metrics:
- f1
widget:
- text: "و+قام ال+مدير [MASK]"
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB Farasa
The QCRI Arabic and Dialectal BERT (QARiB) model was trained on a collection of ~420 million tweets and ~180 million sentences of text.
The tweets were collected using the Twitter API with the language filter `lang:ar`. The text data is a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
QARiB is the Arabic word for "boat".
## Model and Parameters:
- Data size: 14B tokens
- Vocabulary: 64k
- Iterations: 10M
- Number of Layers: 12
## Training QARiB
See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>>from transformers import pipeline
>>>fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far")
>>> fill_mask("و+قام ال+مدير [MASK]")
[
]
>>> fill_mask("و+قام+ت ال+مدير+ة [MASK]")
[
]
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
[
]
```
## Evaluations:
|**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**|
|---------------|---------|--------------|--------------|--------------|---------|
|Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** |
|Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** |
|Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% |
|Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** |
|Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% |
## Model Weights and Vocab Download
From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
qingtan007/bert_finetuning_test | 890ad37510584a3707bedeff9c1b2429491bc303 | 2021-05-20T03:50:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | qingtan007 | null | qingtan007/bert_finetuning_test | 5 | null | transformers | 16,784 | Entry not found |
ralcanta/do_nothing_bert | 73451420f4cc588b0a319db5c818484bc15bbebe | 2020-11-26T23:38:08.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ralcanta | null | ralcanta/do_nothing_bert | 5 | null | transformers | 16,785 | Entry not found |
ramonzaca/roberto-base-finetuned-pos | 41d1e3f398fdca5a0f85ae0afcb0c74157b31296 | 2021-05-20T19:49:47.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ramonzaca | null | ramonzaca/roberto-base-finetuned-pos | 5 | null | transformers | 16,786 | Entry not found |
ran/h1 | 8f2856dea30f628a8abbc9a572a9fc62397f4727 | 2021-05-20T03:56:49.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ran | null | ran/h1 | 5 | null | transformers | 16,787 | Entry not found |
raruidol/GameANchess | 895fadac8d1799fdef1b49aa5d0616e715b71d85 | 2021-09-16T08:53:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | raruidol | null | raruidol/GameANchess | 5 | null | transformers | 16,788 | Algebraic Notation model of sequences of moves of complete chess games. |
reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lv-ft | a0f7edd3e6daf9616ceb67da1774432b13a8b696 | 2022-03-23T18:34:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lv",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | reach-vb | null | reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lv-ft | 5 | 1 | transformers | 16,789 | ---
license: apache-2.0
language:
- lv
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-1B-common_voice7-lv-ft
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lv
metrics:
- name: Test WER
type: wer
value: 11.179
- name: Test CER
type: cer
value: 2.78
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: lv
metrics:
- name: Test WER
type: wer
value: 44.33
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: lv
metrics:
- name: Test WER
type: wer
value: 50.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1B-common_voice7-lv-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Wer: 0.1137
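A hedged transcription sketch with the ASR pipeline (the audio path is a placeholder; 16 kHz mono input is assumed, as for all XLS-R checkpoints):
```python
from transformers import pipeline

# assumes the repo contains the fine-tuned processor/tokenizer alongside the weights
asr = pipeline("automatic-speech-recognition", model="reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lv-ft")
print(asr("path/to/latvian_audio.wav"))  # placeholder path
```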
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 900
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6292 | 5.26 | 500 | 1.5562 | 0.9263 |
| 0.1303 | 10.53 | 1000 | 0.8107 | 0.7666 |
| 0.0974 | 15.79 | 1500 | 0.5290 | 0.4979 |
| 0.0724 | 21.05 | 2000 | 0.2941 | 0.2247 |
| 0.0591 | 26.32 | 2500 | 0.2838 | 0.2125 |
| 0.0494 | 31.58 | 3000 | 0.2589 | 0.2102 |
| 0.0417 | 36.84 | 3500 | 0.1987 | 0.1760 |
| 0.0375 | 42.11 | 4000 | 0.1934 | 0.1690 |
| 0.031 | 47.37 | 4500 | 0.1630 | 0.1460 |
| 0.027 | 52.63 | 5000 | 0.1957 | 0.1447 |
| 0.0256 | 57.89 | 5500 | 0.1747 | 0.1368 |
| 0.0206 | 63.16 | 6000 | 0.1602 | 0.1299 |
| 0.0178 | 68.42 | 6500 | 0.1809 | 0.1273 |
| 0.0154 | 73.68 | 7000 | 0.1686 | 0.1216 |
| 0.0137 | 78.95 | 7500 | 0.1585 | 0.1241 |
| 0.0128 | 84.21 | 8000 | 0.1783 | 0.1278 |
| 0.011 | 89.47 | 8500 | 0.1653 | 0.1228 |
| 0.0096 | 94.74 | 9000 | 0.1620 | 0.1161 |
| 0.0091 | 100.0 | 9500 | 0.1582 | 0.1137 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
recobo/chemical-bert-uncased-simcse | 501ebc46786e9cef5c13d8290ed50530ff708161 | 2021-09-06T05:52:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | recobo | null | recobo/chemical-bert-uncased-simcse | 5 | null | sentence-transformers | 16,790 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# recobo/chemical-bert-uncased-simcse
```python
from sentence_transformers import SentenceTransformer

model_name = 'recobo/chemical-bert-uncased-simcse'
model = SentenceTransformer(model_name)
# encode sentences into dense vectors for similarity search (example sentence is illustrative)
embeddings = model.encode(["Sodium chloride dissolves readily in water."])
``` |
recobo/chemical-bert-uncased-tsdae | e6702c55ba838e9e4e25666e643ccba4e2b3c6db | 2021-09-04T21:17:19.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | recobo | null | recobo/chemical-bert-uncased-tsdae | 5 | null | sentence-transformers | 16,791 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# recobo/chemical-bert-uncased-tsdae
```python
from sentence_transformers import SentenceTransformer
model_name = 'recobo/chemical-bert-uncased-tsdae'
model = SentenceTransformer(model_name)
``` |
rjbownes/lovelace-evaluator | a544951d6eacd1cde02fd2ad637e745073eb2b0d | 2021-05-20T04:28:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | rjbownes | null | rjbownes/lovelace-evaluator | 5 | null | transformers | 16,792 | Entry not found |
rodrigogelacio/autonlp-department-classification-534915130 | fba721a44c8c100f23019f83eda66219df0f9c0c | 2022-01-28T02:06:52.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:rodrigogelacio/autonlp-data-department-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | rodrigogelacio | null | rodrigogelacio/autonlp-department-classification-534915130 | 5 | 1 | transformers | 16,793 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- rodrigogelacio/autonlp-data-department-classification
co2_eq_emissions: 1.4862856774320061
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 534915130
- CO2 Emissions (in grams): 1.4862856774320061
## Validation Metrics
- Loss: 0.37066277861595154
- Accuracy: 0.9204545454545454
- Macro F1: 0.9103715740678612
- Micro F1: 0.9204545454545455
- Weighted F1: 0.9196871607509906
- Macro Precision: 0.9207759152612094
- Micro Precision: 0.9204545454545454
- Weighted Precision: 0.922177301864802
- Macro Recall: 0.9055002187355129
- Micro Recall: 0.9204545454545454
- Weighted Recall: 0.9204545454545454
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rodrigogelacio/autonlp-department-classification-534915130
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rodrigogelacio/autonlp-department-classification-534915130", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rodrigogelacio/autonlp-department-classification-534915130", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
saattrupdan/verdict-classifier-en | 4cba74d525c83c3536834efc40b1e2f4a686656a | 2021-10-27T14:58:17.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | saattrupdan | null | saattrupdan/verdict-classifier-en | 5 | null | transformers | 16,794 | ---
license: mit
language: en
tags:
- generated_from_trainer
model-index:
- name: verdict-classifier-en
results:
- task:
type: text-classification
name: Verdict Classification
widget:
- "Even though it might look true, it has been taken out of context."
---
# English Verdict Classifier
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on 2,500 deduplicated verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into English with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
It achieves the following results on the evaluation set, which consists of 1,000 such verdicts translated into English, here with duplicates kept in order to represent the true distribution:
- Loss: 0.1290
- F1 Macro: 0.9171
- F1 Misinformation: 0.9896
- F1 Factual: 0.9890
- F1 Other: 0.7727
- Precision Macro: 0.8940
- Precision Misinformation: 0.9954
- Precision Factual: 0.9783
- Precision Other: 0.7083
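The card has no usage snippet; a minimal sketch with the text-classification pipeline (the verdict text mirrors the widget example, and the label names come from whatever mapping was saved with the checkpoint):
```python
from transformers import pipeline

# classify a fact-checking verdict as misinformation / factual / other
classifier = pipeline("text-classification", model="saattrupdan/verdict-classifier-en")
print(classifier("Even though it might look true, it has been taken out of context."))
```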
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2500
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Precision Macro | Precision Misinformation | Precision Factual | Precision Other |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
| 1.1493 | 0.16 | 50 | 1.1040 | 0.0550 | 0.0 | 0.1650 | 0.0 | 0.0300 | 0.0 | 0.0899 | 0.0 |
| 1.0899 | 0.32 | 100 | 1.0765 | 0.0619 | 0.0203 | 0.1654 | 0.0 | 0.2301 | 0.6 | 0.0903 | 0.0 |
| 1.0136 | 0.48 | 150 | 1.0487 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.9868 | 0.64 | 200 | 1.0221 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.9599 | 0.8 | 250 | 0.9801 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.9554 | 0.96 | 300 | 0.9500 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.935 | 1.12 | 350 | 0.9071 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.948 | 1.28 | 400 | 0.8809 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.9344 | 1.44 | 450 | 0.8258 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.9182 | 1.6 | 500 | 0.7687 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.8942 | 1.76 | 550 | 0.5787 | 0.3102 | 0.9306 | 0.0 | 0.0 | 0.2900 | 0.8701 | 0.0 | 0.0 |
| 0.8932 | 1.92 | 600 | 0.4506 | 0.4043 | 0.9628 | 0.0 | 0.25 | 0.3777 | 0.9753 | 0.0 | 0.1579 |
| 0.7448 | 2.08 | 650 | 0.2884 | 0.5323 | 0.9650 | 0.3303 | 0.3017 | 0.7075 | 0.9810 | 0.9474 | 0.1942 |
| 0.6616 | 2.24 | 700 | 0.2162 | 0.8161 | 0.9710 | 0.9724 | 0.5051 | 0.7910 | 0.9824 | 0.9670 | 0.4237 |
| 0.575 | 2.4 | 750 | 0.1754 | 0.8305 | 0.9714 | 0.9780 | 0.5421 | 0.7961 | 0.9881 | 0.9674 | 0.4328 |
| 0.5246 | 2.56 | 800 | 0.1641 | 0.8102 | 0.9659 | 0.9175 | 0.5472 | 0.7614 | 0.9892 | 0.8558 | 0.4394 |
| 0.481 | 2.72 | 850 | 0.1399 | 0.8407 | 0.9756 | 0.9780 | 0.5686 | 0.8082 | 0.9894 | 0.9674 | 0.4677 |
| 0.4588 | 2.88 | 900 | 0.1212 | 0.8501 | 0.9786 | 0.9783 | 0.5934 | 0.8247 | 0.9871 | 0.9574 | 0.5294 |
| 0.4512 | 3.04 | 950 | 0.1388 | 0.8270 | 0.9702 | 0.9836 | 0.5273 | 0.7904 | 0.9893 | 0.9677 | 0.4143 |
| 0.3894 | 3.2 | 1000 | 0.1270 | 0.8411 | 0.9737 | 0.9836 | 0.5660 | 0.8043 | 0.9905 | 0.9677 | 0.4545 |
| 0.3772 | 3.36 | 1050 | 0.1267 | 0.8336 | 0.9732 | 0.9890 | 0.5385 | 0.8013 | 0.9882 | 0.9783 | 0.4375 |
| 0.3528 | 3.52 | 1100 | 0.1073 | 0.8546 | 0.9791 | 0.9890 | 0.5957 | 0.8284 | 0.9883 | 0.9783 | 0.5185 |
| 0.3694 | 3.68 | 1150 | 0.1120 | 0.8431 | 0.9786 | 0.9890 | 0.5618 | 0.8244 | 0.9849 | 0.9783 | 0.5102 |
| 0.3146 | 3.84 | 1200 | 0.1189 | 0.8325 | 0.9738 | 0.9836 | 0.54 | 0.8016 | 0.9870 | 0.9677 | 0.45 |
| 0.3038 | 4.01 | 1250 | 0.1041 | 0.8648 | 0.9815 | 0.9836 | 0.6292 | 0.8425 | 0.9884 | 0.9677 | 0.5714 |
| 0.2482 | 4.17 | 1300 | 0.1245 | 0.8588 | 0.9773 | 0.9836 | 0.6154 | 0.8202 | 0.9929 | 0.9677 | 0.5 |
| 0.2388 | 4.33 | 1350 | 0.1167 | 0.8701 | 0.9808 | 0.9836 | 0.6458 | 0.8377 | 0.9918 | 0.9677 | 0.5536 |
| 0.2593 | 4.49 | 1400 | 0.1215 | 0.8654 | 0.9790 | 0.9836 | 0.6337 | 0.8284 | 0.9929 | 0.9677 | 0.5246 |
| 0.239 | 4.65 | 1450 | 0.1057 | 0.8621 | 0.9803 | 0.9890 | 0.6170 | 0.8349 | 0.9895 | 0.9783 | 0.5370 |
| 0.2397 | 4.81 | 1500 | 0.1256 | 0.8544 | 0.9761 | 0.9890 | 0.5981 | 0.8162 | 0.9929 | 0.9783 | 0.4776 |
| 0.2238 | 4.97 | 1550 | 0.1189 | 0.8701 | 0.9802 | 0.9836 | 0.6465 | 0.8343 | 0.9929 | 0.9677 | 0.5424 |
| 0.1811 | 5.13 | 1600 | 0.1456 | 0.8438 | 0.9737 | 0.9836 | 0.5741 | 0.8051 | 0.9917 | 0.9677 | 0.4559 |
| 0.1615 | 5.29 | 1650 | 0.1076 | 0.8780 | 0.9838 | 0.9836 | 0.6667 | 0.8581 | 0.9895 | 0.9677 | 0.6170 |
| 0.1783 | 5.45 | 1700 | 0.1217 | 0.8869 | 0.9831 | 0.9836 | 0.6939 | 0.8497 | 0.9953 | 0.9677 | 0.5862 |
| 0.1615 | 5.61 | 1750 | 0.1305 | 0.8770 | 0.9808 | 0.9836 | 0.6667 | 0.8371 | 0.9953 | 0.9677 | 0.5484 |
| 0.155 | 5.77 | 1800 | 0.1218 | 0.8668 | 0.9821 | 0.9890 | 0.6292 | 0.8460 | 0.9884 | 0.9783 | 0.5714 |
| 0.167 | 5.93 | 1850 | 0.1091 | 0.8991 | 0.9873 | 0.9890 | 0.7209 | 0.8814 | 0.9919 | 0.9783 | 0.6739 |
| 0.1455 | 6.09 | 1900 | 0.1338 | 0.8535 | 0.9773 | 0.9890 | 0.5941 | 0.8202 | 0.9906 | 0.9783 | 0.4918 |
| 0.1301 | 6.25 | 1950 | 0.1321 | 0.8792 | 0.9820 | 0.9890 | 0.6667 | 0.8439 | 0.9941 | 0.9783 | 0.5593 |
| 0.1049 | 6.41 | 2000 | 0.1181 | 0.9031 | 0.9879 | 0.9834 | 0.7381 | 0.8911 | 0.9908 | 0.9780 | 0.7045 |
| 0.1403 | 6.57 | 2050 | 0.1432 | 0.8608 | 0.9779 | 0.9890 | 0.6154 | 0.8237 | 0.9929 | 0.9783 | 0.5 |
| 0.1178 | 6.73 | 2100 | 0.1443 | 0.8937 | 0.9844 | 0.9945 | 0.7021 | 0.8644 | 0.9930 | 0.9890 | 0.6111 |
| 0.1267 | 6.89 | 2150 | 0.1346 | 0.8494 | 0.9786 | 0.9890 | 0.5806 | 0.8249 | 0.9871 | 0.9783 | 0.5094 |
| 0.1043 | 7.05 | 2200 | 0.1494 | 0.8905 | 0.9832 | 0.9945 | 0.6939 | 0.8564 | 0.9941 | 0.9890 | 0.5862 |
| 0.0886 | 7.21 | 2250 | 0.1180 | 0.8946 | 0.9873 | 0.9890 | 0.7073 | 0.8861 | 0.9896 | 0.9783 | 0.6905 |
| 0.1183 | 7.37 | 2300 | 0.1777 | 0.8720 | 0.9790 | 0.9890 | 0.6481 | 0.8298 | 0.9964 | 0.9783 | 0.5147 |
| 0.0813 | 7.53 | 2350 | 0.1405 | 0.8912 | 0.9856 | 0.9836 | 0.7045 | 0.8685 | 0.9919 | 0.9677 | 0.6458 |
| 0.111 | 7.69 | 2400 | 0.1379 | 0.8874 | 0.9838 | 0.9836 | 0.6947 | 0.8540 | 0.9941 | 0.9677 | 0.6 |
| 0.1199 | 7.85 | 2450 | 0.1301 | 0.9080 | 0.9879 | 0.9890 | 0.7473 | 0.8801 | 0.9953 | 0.9783 | 0.6667 |
| 0.1054 | 8.01 | 2500 | 0.1478 | 0.8845 | 0.9838 | 0.9890 | 0.6809 | 0.8546 | 0.9930 | 0.9783 | 0.5926 |
| 0.105 | 8.17 | 2550 | 0.1333 | 0.9021 | 0.9879 | 0.9890 | 0.7294 | 0.8863 | 0.9919 | 0.9783 | 0.6889 |
| 0.09 | 8.33 | 2600 | 0.1555 | 0.8926 | 0.9855 | 0.9890 | 0.7033 | 0.8662 | 0.9930 | 0.9783 | 0.6275 |
| 0.0947 | 8.49 | 2650 | 0.1572 | 0.8831 | 0.9856 | 0.9890 | 0.6747 | 0.8726 | 0.9885 | 0.9783 | 0.6512 |
| 0.0784 | 8.65 | 2700 | 0.1477 | 0.8969 | 0.9873 | 0.9890 | 0.7143 | 0.8836 | 0.9908 | 0.9783 | 0.6818 |
| 0.0814 | 8.81 | 2750 | 0.1700 | 0.8932 | 0.9861 | 0.9890 | 0.7045 | 0.8720 | 0.9919 | 0.9783 | 0.6458 |
| 0.0962 | 8.97 | 2800 | 0.1290 | 0.9171 | 0.9896 | 0.9890 | 0.7727 | 0.8940 | 0.9954 | 0.9783 | 0.7083 |
| 0.0802 | 9.13 | 2850 | 0.1721 | 0.8796 | 0.9832 | 0.9890 | 0.6667 | 0.8517 | 0.9918 | 0.9783 | 0.5849 |
| 0.0844 | 9.29 | 2900 | 0.1516 | 0.9023 | 0.9867 | 0.9890 | 0.7312 | 0.8717 | 0.9953 | 0.9783 | 0.6415 |
| 0.0511 | 9.45 | 2950 | 0.1544 | 0.9062 | 0.9879 | 0.9890 | 0.7416 | 0.8820 | 0.9942 | 0.9783 | 0.6735 |
| 0.0751 | 9.61 | 3000 | 0.1748 | 0.8884 | 0.9832 | 0.9945 | 0.6875 | 0.8571 | 0.9930 | 0.9890 | 0.5893 |
| 0.0707 | 9.77 | 3050 | 0.1743 | 0.8721 | 0.9802 | 0.9890 | 0.6471 | 0.8349 | 0.9941 | 0.9783 | 0.5323 |
| 0.0951 | 9.93 | 3100 | 0.1660 | 0.8899 | 0.9850 | 0.9890 | 0.6957 | 0.8622 | 0.9930 | 0.9783 | 0.6154 |
| 0.0576 | 10.1 | 3150 | 0.2029 | 0.8613 | 0.9766 | 0.9890 | 0.6182 | 0.8197 | 0.9952 | 0.9783 | 0.4857 |
| 0.0727 | 10.26 | 3200 | 0.1709 | 0.8920 | 0.9849 | 0.9890 | 0.7021 | 0.8612 | 0.9942 | 0.9783 | 0.6111 |
| 0.0654 | 10.42 | 3250 | 0.1599 | 0.8999 | 0.9861 | 0.9945 | 0.7191 | 0.8780 | 0.9919 | 0.9890 | 0.6531 |
| 0.0553 | 10.58 | 3300 | 0.2091 | 0.8920 | 0.9849 | 0.9890 | 0.7021 | 0.8612 | 0.9942 | 0.9783 | 0.6111 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.2 |
sammy786/wav2vec2-xlsr-czech | 7a6989ffbaa3495a1e8ca86761c76849752a1cbb | 2022-03-23T18:26:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-czech | 5 | null | transformers | 16,795 | ---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- cs
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-czech
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cs
metrics:
- name: Test WER
type: wer
value: 11.22
- name: Test CER
type: cer
value: 2.52
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 97.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 69.7
---
# sammy786/wav2vec2-xlsr-czech
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - cs dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the other and dev datasets):
- Loss: 7.26
- Wer: 19.32
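For inference, a hedged sketch that loads the processor and model directly (the audio path is a placeholder and the recording is assumed to already be 16 kHz mono):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("sammy786/wav2vec2-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("sammy786/wav2vec2-xlsr-czech")

# placeholder path; resample beforehand if the recording is not 16 kHz
speech, _ = torchaudio.load("path/to/czech_audio.wav")
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```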
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Czech train.tsv, dev.tsv, invalidated.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 6.654600 | 3.329486 | 1.000000 |
| 400 | 1.700600 | 0.317266 | 0.409446 |
| 600 | 0.767400 | 0.211371 | 0.313981 |
| 800 | 0.718600 | 0.167771 | 0.280676 |
| 1000 | 0.661700 | 0.142229 | 0.258938 |
| 1200 | 0.594400 | 0.137321 | 0.256275 |
| 1400 | 0.583900 | 0.132922 | 0.248418 |
| 1600 | 0.565100 | 0.117214 | 0.238640 |
| 1800 | 0.369600 | 0.116954 | 0.238291 |
| 2000 | 0.292800 | 0.109973 | 0.227509 |
| 2200 | 0.255400 | 0.104955 | 0.228120 |
| 2400 | 0.266800 | 0.097268 | 0.220525 |
| 2600 | 0.232700 | 0.096055 | 0.213584 |
| 2800 | 0.213700 | 0.097770 | 0.218866 |
| 3000 | 0.209900 | 0.091633 | 0.210485 |
| 3200 | 0.196800 | 0.090342 | 0.208739 |
| 3400 | 0.200500 | 0.082326 | 0.204767 |
| 3600 | 0.176800 | 0.085491 | 0.204068 |
| 3800 | 0.170000 | 0.081289 | 0.201231 |
| 4000 | 0.166200 | 0.080762 | 0.200227 |
| 4200 | 0.161700 | 0.076671 | 0.198001 |
| 4400 | 0.147000 | 0.077383 | 0.196997 |
| 4600 | 0.141900 | 0.076057 | 0.195862 |
| 4800 | 0.144800 | 0.074612 | 0.195120 |
| 5000 | 0.138900 | 0.073138 | 0.193985 |
| 5200 | 0.143900 | 0.072802 | 0.192894 |
| 5400 | 0.131100 | 0.072764 | 0.193723 |
| 5600 | 0.137000 | 0.072697 | 0.193679 |
| 5800 | 0.133300 | 0.072651 | 0.193286 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-czech --dataset mozilla-foundation/common_voice_8_0 --config cs --split test
``` |
sammy786/wav2vec2-xlsr-romansh_vallader | c1ce827735fa788b654e65e61a255dd77929f928 | 2022-03-23T18:33:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"rm-vallader",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-romansh_vallader | 5 | null | transformers | 16,796 | ---
language:
- rm-vallader
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- rm-vallader
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-romansh_vallader
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: rm-vallader
metrics:
- name: Test WER
type: wer
value: 28.54
- name: Test CER
type: cer
value: 6.57
---
# sammy786/wav2vec2-xlsr-romansh_vallader
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - rm-vallader dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the other and dev datasets):
- Loss: 30.31
- Wer: 26.32
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Romansh Vallader train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 5.895100 | 3.136624 | 0.999713 |
| 400 | 1.545700 | 0.445069 | 0.471584 |
| 600 | 0.693900 | 0.340700 | 0.363088 |
| 800 | 0.510600 | 0.295432 | 0.289610 |
| 1000 | 0.318800 | 0.286795 | 0.281860 |
| 1200 | 0.194000 | 0.307468 | 0.274110 |
| 1400 | 0.151800 | 0.304849 | 0.264351 |
| 1600 | 0.148300 | 0.303112 | 0.263203 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-romansh_vallader --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test
``` |
saraks/cuad-distil-governing_law-cased-08-31-v1 | d412346a84032e0ae906af9afee60d8bf88ce45c | 2021-08-31T17:13:43.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-governing_law-cased-08-31-v1 | 5 | null | transformers | 16,797 | Entry not found |
savasy/TurkQP | 80057516d7be8bf90773797841e84a1e9c12e887 | 2021-05-20T04:52:43.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | savasy | null | savasy/TurkQP | 5 | null | transformers | 16,798 | Entry not found |
sberbank-ai/ruclip-vit-large-patch14-224 | 5814c526671a20e620fd9be930780675e34711ef | 2022-01-09T21:43:58.000Z | [
"pytorch",
"transformers"
] | null | false | sberbank-ai | null | sberbank-ai/ruclip-vit-large-patch14-224 | 5 | null | transformers | 16,799 | # ruclip-vit-large-patch14-224
**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model
for computing similarities between images and texts and for ranking captions and images.
RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and
multimodal learning.
Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text ranking`; `image ranking`; `zero-shot image classification`;
* Type: `encoder`
* Num Parameters: `430M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `768`
* Transformer Heads: `12`
* Image Size: `224`
* Vision Layers: `24`
* Vision Width: `1024`
* Vision Patch Size: `14`
## Usage [Github](https://github.com/sberbank-ai/ru-clip)
```
pip install ruclip
```
```python
import ruclip

# load the pretrained CLIP model together with its text/image processor
clip, processor = ruclip.load("ruclip-vit-large-patch14-224", device="cuda")
```
## Performance
We have evaluated the performance on the following datasets:
| Dataset | Metric Name | Metric Result |
|:--------------|:---------------|:--------------------|
| Food101 | acc | 0.597 |
| CIFAR10 | acc | 0.878 |
| CIFAR100 | acc | 0.511 |
| Birdsnap | acc | 0.172 |
| SUN397 | acc | 0.484 |
| Stanford Cars | acc | 0.559 |
| DTD | acc | 0.370 |
| MNIST | acc | 0.337 |
| STL10 | acc | 0.934 |
| PCam | acc | 0.520 |
| CLEVR | acc | 0.152 |
| Rendered SST2 | acc | 0.529 |
| ImageNet | acc | 0.426 |
| FGVC Aircraft | mean-per-class | 0.046 |
| Oxford Pets | mean-per-class | 0.604 |
| Caltech101 | mean-per-class | 0.777 |
| Flowers102 | mean-per-class | 0.455 |
| HatefulMemes | roc-auc | 0.530 |
# Authors
+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
|