modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sebastiaan/sentence-BERT-classification | b6349a65bbdc3a3a1b9eaa77507087cee92a5c89 | 2021-12-17T12:11:26.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | sebastiaan | null | sebastiaan/sentence-BERT-classification | 9 | null | transformers | 12,400 | Entry not found |
seduerr/pai_simplifier_abstract | d734051cc275cbc3a0b953df53118c34828eb65b | 2021-03-03T11:54:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | seduerr | null | seduerr/pai_simplifier_abstract | 9 | null | transformers | 12,401 | Entry not found |
seduerr/t5_base_paws_ger | 479c2ffa2005bf83e16926b3e356881fd3c0bb11 | 2020-11-30T11:17:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | seduerr | null | seduerr/t5_base_paws_ger | 9 | null | transformers | 12,402 | # T5 Base with Paraphrases in German Language
This T5 base model has been trained with the German part of the PAWS-X data set.
It can be used like any other T5 model and will generate paraphrases given a prompt of the form 'paraphrase: '__GermanSentence__, as in the sketch below.
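A minimal usage sketch (not part of the original card), assuming the checkpoint loads with the standard T5 classes; the German example sentence is made up.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("seduerr/t5_base_paws_ger")
model = T5ForConditionalGeneration.from_pretrained("seduerr/t5_base_paws_ger")

prompt = "paraphrase: Das Wetter ist heute sehr schön."  # hypothetical German sentence
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```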
Please contact me if you need more information ([email protected]).
Thank you.
Sebastian
|
sentence-transformers/bert-large-nli-cls-token | 50322e8fd4324f6c43ac0834fff7b7d89603282e | 2022-06-16T00:52:43.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | sentence-transformers | null | sentence-transformers/bert-large-nli-cls-token | 9 | null | sentence-transformers | 12,403 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/bert-large-nli-cls-token
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-large-nli-cls-token')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-large-nli-cls-token')
model = AutoModel.from_pretrained('sentence-transformers/bert-large-nli-cls-token')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-large-nli-cls-token)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
seongju/squadv2-xlm-roberta-base | ee73e040823929114f15ec776399ad5012d9aea3 | 2021-07-30T07:45:30.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | seongju | null | seongju/squadv2-xlm-roberta-base | 9 | null | transformers | 12,404 | ### Model information
* language : English
* fine-tuning data : [squad 2.0](https://rajpurkar.github.io/SQuAD-explorer/)
* License : CC-BY-SA 4.0
* Base model : [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
* input : question, context
* output : answer
----
### Train information
* train_runtime : 7562.859
* train_steps_per_second : 1.077
* training_loss : 0.9661213896603117
* epoch: 3.0
----
### How to use
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("seongju/squadv2-xlm-roberta-base")
model = AutoModelForQuestionAnswering.from_pretrained("seongju/squadv2-xlm-roberta-base")
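
# Hedged inference sketch (not part of the original card): extracting an answer
# span for a hypothetical question/context pair with the start/end logits.
import torch

question = "Where is the Eiffel Tower located?"   # hypothetical example
context = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))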
``` |
sergunow/movie-chat | 1e1d0465433eb5c4b8cf6d023c8e1ea4bd04aab8 | 2021-07-14T19:46:21.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"en",
"dataset:rick_and_morty",
"transformers",
"conversational",
"license:apache-2.0",
"autotrain_compatible"
]
| conversational | false | sergunow | null | sergunow/movie-chat | 9 | null | transformers | 12,405 | ---
language:
- en
thumbnail:
tags:
- conversational
license: apache-2.0
datasets:
- rick_and_morty
metrics:
- perplexity
---
## Model description
Fine-tuned from facebook/blenderbot-400M-distill on Rick and Morty subtitles.
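A hedged usage sketch (not part of the original card), assuming the checkpoint loads with the standard Blenderbot classes; the user utterance is made up.
```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

tokenizer = BlenderbotTokenizer.from_pretrained("sergunow/movie-chat")
model = BlenderbotForConditionalGeneration.from_pretrained("sergunow/movie-chat")

utterance = "Hey, what are you up to?"  # hypothetical user input
inputs = tokenizer(utterance, return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```
|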
sgugger/bert-fine-tuned-cola | 6cc73c7161211f475a5389d50ac1a98865c01326 | 2021-11-01T19:28:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sgugger | null | sgugger/bert-fine-tuned-cola | 9 | null | transformers | 12,406 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5959186748524787
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Matthews Correlation: 0.5959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
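For reference, a hedged sketch (not part of the original card) of how these hyperparameters would map onto a `TrainingArguments` object; the output directory name is an assumption, and the Adam betas/epsilon listed above are the optimizer defaults.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; dataset and model setup are assumed elsewhere.
training_args = TrainingArguments(
    output_dir="bert-fine-tuned-cola",   # assumed name, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```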
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4838 | 1.0 | 1069 | 0.5996 | 0.4637 |
| 0.3543 | 2.0 | 2138 | 0.6670 | 0.5778 |
| 0.1948 | 3.0 | 3207 | 0.8068 | 0.5959 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
|
sgugger/glue-mrpc | fa7096bd42f308a2470336184502d44c1f4caf20 | 2021-12-06T21:36:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sgugger | null | sgugger/glue-mrpc | 9 | null | transformers | 12,407 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: glue-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8553921568627451
- name: F1
type: f1
value: 0.897391304347826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6566
- Accuracy: 0.8554
- F1: 0.8974
- Combined Score: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
shahukareem/wav2vec2-xls-r-1b-dv | 7fe8d35f60ba47fff512f7497714b603f908d767 | 2022-02-11T08:15:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dv",
"robust-speech-event",
"model_for_talk",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | shahukareem | null | shahukareem/wav2vec2-xls-r-1b-dv | 9 | null | transformers | 12,408 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- dv
- robust-speech-event
- model_for_talk
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 21.32
- name: Test CER
type: cer
value: 3.43
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Wer: 0.2123
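For reference, a hedged sketch (not part of the original card) of transcribing speech with this checkpoint through the ASR pipeline; the audio file name is hypothetical and should point to a 16 kHz recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shahukareem/wav2vec2-xls-r-1b-dv")
print(asr("sample_dhivehi_16khz.wav"))  # hypothetical audio file path
```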
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.8412 | 0.66 | 400 | 0.7160 | 0.7913 |
| 0.6832 | 1.33 | 800 | 0.3401 | 0.5268 |
| 0.4624 | 1.99 | 1200 | 0.2671 | 0.4683 |
| 0.3832 | 2.65 | 1600 | 0.2395 | 0.4410 |
| 0.3443 | 3.32 | 2000 | 0.2410 | 0.4296 |
| 0.324 | 3.98 | 2400 | 0.2302 | 0.4143 |
| 0.2934 | 4.64 | 2800 | 0.2402 | 0.4136 |
| 0.2773 | 5.31 | 3200 | 0.2134 | 0.4088 |
| 0.2638 | 5.97 | 3600 | 0.2072 | 0.4037 |
| 0.2479 | 6.63 | 4000 | 0.2036 | 0.3876 |
| 0.2424 | 7.3 | 4400 | 0.2037 | 0.3767 |
| 0.2249 | 7.96 | 4800 | 0.1959 | 0.3802 |
| 0.2169 | 8.62 | 5200 | 0.1943 | 0.3813 |
| 0.2109 | 9.29 | 5600 | 0.1944 | 0.3691 |
| 0.1991 | 9.95 | 6000 | 0.1870 | 0.3589 |
| 0.1917 | 10.61 | 6400 | 0.1834 | 0.3485 |
| 0.1862 | 11.28 | 6800 | 0.1857 | 0.3486 |
| 0.1744 | 11.94 | 7200 | 0.1812 | 0.3330 |
| 0.171 | 12.6 | 7600 | 0.1797 | 0.3436 |
| 0.1599 | 13.27 | 8000 | 0.1839 | 0.3319 |
| 0.1597 | 13.93 | 8400 | 0.1737 | 0.3385 |
| 0.1494 | 14.59 | 8800 | 0.1807 | 0.3239 |
| 0.1444 | 15.26 | 9200 | 0.1750 | 0.3155 |
| 0.1382 | 15.92 | 9600 | 0.1705 | 0.3084 |
| 0.1299 | 16.58 | 10000 | 0.1777 | 0.2999 |
| 0.1306 | 17.25 | 10400 | 0.1765 | 0.3056 |
| 0.1239 | 17.91 | 10800 | 0.1676 | 0.2864 |
| 0.1149 | 18.57 | 11200 | 0.1774 | 0.2861 |
| 0.1134 | 19.24 | 11600 | 0.1654 | 0.2699 |
| 0.1101 | 19.9 | 12000 | 0.1621 | 0.2651 |
| 0.1038 | 20.56 | 12400 | 0.1686 | 0.2610 |
| 0.1038 | 21.23 | 12800 | 0.1722 | 0.2559 |
| 0.0988 | 21.89 | 13200 | 0.1708 | 0.2486 |
| 0.0949 | 22.55 | 13600 | 0.1696 | 0.2453 |
| 0.0913 | 23.22 | 14000 | 0.1677 | 0.2424 |
| 0.0879 | 23.88 | 14400 | 0.1640 | 0.2359 |
| 0.0888 | 24.54 | 14800 | 0.1697 | 0.2347 |
| 0.0826 | 25.21 | 15200 | 0.1709 | 0.2314 |
| 0.0819 | 25.87 | 15600 | 0.1679 | 0.2256 |
| 0.0793 | 26.53 | 16000 | 0.1701 | 0.2214 |
| 0.0773 | 27.2 | 16400 | 0.1682 | 0.2176 |
| 0.0783 | 27.86 | 16800 | 0.1685 | 0.2165 |
| 0.074 | 28.52 | 17200 | 0.1688 | 0.2155 |
| 0.0753 | 29.19 | 17600 | 0.1695 | 0.2110 |
| 0.0699 | 29.85 | 18000 | 0.1702 | 0.2123 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
shreeshaaithal/Discord-AI-bot | 07f548018e7a6a631e6e91f0b949f3fd05041889 | 2021-07-05T07:34:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | shreeshaaithal | null | shreeshaaithal/Discord-AI-bot | 9 | null | transformers | 12,409 | ---
tags:
- conversational
---
# My Awesome Model |
singjing/chineses-bert-albert | 6911a4084e56c108d90751770fcf819bbf8a2990 | 2021-12-10T23:43:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | singjing | null | singjing/chineses-bert-albert | 9 | null | transformers | 12,410 | Entry not found |
smeylan/childes-bert | 7e67c592ea19dbb8d3568531081196976607368e | 2021-05-20T06:49:00.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"en",
"dataset:childes",
"transformers",
"language-modeling",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | smeylan | null | smeylan/childes-bert | 9 | null | transformers | 12,411 | ---
language: "en"
tags:
- language-modeling
license: "cc-by-sa-4.0"
datasets:
- childes
--- |
speech-seq2seq/wav2vec2-2-bart-large-no-adapter-frozen-enc | ebabff2549480596801f51da416198bd06a6259b | 2022-02-22T01:08:44.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-bart-large-no-adapter-frozen-enc | 9 | null | transformers | 12,412 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 18.7898
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5396 | 0.28 | 500 | 9.0401 | 1.0120 |
| 5.898 | 0.56 | 1000 | 9.3199 | 1.0 |
| 4.9595 | 0.84 | 1500 | 8.4434 | 1.4563 |
| 5.7082 | 1.12 | 2000 | 15.1805 | 1.0000 |
| 5.4377 | 1.4 | 2500 | 15.7984 | 1.0021 |
| 5.5941 | 1.68 | 3000 | 18.4928 | 1.0 |
| 5.0662 | 1.96 | 3500 | 17.4886 | 1.0000 |
| 4.8363 | 2.24 | 4000 | 18.9458 | 1.0 |
| 4.7908 | 2.52 | 4500 | 18.2794 | 1.0006 |
| 4.679 | 2.8 | 5000 | 18.7898 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-roberta-large | c561c40369e8a973a2161fd918c12b2c35cbce53 | 2022-02-10T06:14:17.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-roberta-large | 9 | null | transformers | 12,413 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 12.2365
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 6.5774 | 0.28 | 500 | 10.5449 | 1.0 |
| 6.706 | 0.56 | 1000 | 9.4411 | 1.0 |
| 6.9182 | 0.84 | 1500 | 10.9554 | 1.0 |
| 6.7416 | 1.12 | 2000 | 10.0801 | 1.0 |
| 6.8778 | 1.4 | 2500 | 9.8569 | 1.0 |
| 6.7694 | 1.68 | 3000 | 10.4234 | 1.0 |
| 6.7415 | 1.96 | 3500 | 10.6545 | 1.0 |
| 6.5997 | 2.24 | 4000 | 10.4268 | 1.0 |
| 6.7672 | 2.52 | 4500 | 11.1929 | 1.0 |
| 6.5254 | 2.8 | 5000 | 12.2365 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
textattack/facebook-bart-base-glue-RTE | e30d40913f3c87fbfd8ebe94c74ac3cf6b7abd5f | 2020-08-20T15:49:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | textattack | null | textattack/facebook-bart-base-glue-RTE | 9 | null | transformers | 12,414 | ## TextAttack Model Cardrate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7256317689530686, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
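As a hedged illustration (not part of the original card), the fine-tuned checkpoint could be loaded as a standard sequence-classification model to score an RTE sentence pair; the example pair below is made up.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/facebook-bart-base-glue-RTE")
model = AutoModelForSequenceClassification.from_pretrained("textattack/facebook-bart-base-glue-RTE")

premise = "A man is playing a guitar on stage."   # hypothetical example pair
hypothesis = "A man is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the two RTE labels
```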
|
thu-coai/CDial-GPT_LCCC-base | b4ca401f3d24cd18c933458c8db2d917e17a3531 | 2020-12-23T06:47:44.000Z | [
"pytorch",
"transformers"
]
| null | false | thu-coai | null | thu-coai/CDial-GPT_LCCC-base | 9 | 1 | transformers | 12,415 | # CDial-GPT_LCCC-base
https://github.com/thu-coai/CDial-GPT |
tk3879110/bert_cn_finetuning | 13b6aa954138d1d1b65265514c3c3299cf0c0c09 | 2021-05-20T07:51:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | tk3879110 | null | tk3879110/bert_cn_finetuning | 9 | null | transformers | 12,416 | Entry not found |
trueto/medalbert-base-wwm-chinese | 280fc3688033ee9b7fd7f5ff4a576f8b2488dc75 | 2021-03-26T05:33:51.000Z | [
"pytorch",
"albert",
"transformers"
]
| null | false | trueto | null | trueto/medalbert-base-wwm-chinese | 9 | null | transformers | 12,417 | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".
## Evaluation Benchmarks
Four datasets were built: a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Validation** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released Models
The MedBERT and MedAlbert models were obtained by pre-training BERT and Albert on a corpus of 650 million characters of Chinese clinical natural language text.
## Performance
Performance of each model under the same experimental environment, with the same training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
truongphan/vntourismNER | 60f42abf2eab510b017ce18cdac49ebdc24a582c | 2021-09-19T10:36:08.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | truongphan | null | truongphan/vntourismNER | 9 | null | transformers | 12,418 | # Vietnam Tourism Named Entity Recognition (English version)
We fine-tuned BERT to train Vietnam tourism dataset for a question answering system. The model was called NER2QUES because it detected tourism NER in a sentence. From that, the system generated questions corresponding to NER types.
# How to use
## You can use it with Transformers
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("truongphan/vntourismNER")
model = AutoModelForTokenClassification.from_pretrained("truongphan/vntourismNER")

custom_labels = [
    "O", "B-TA", "I-TA", "B-PRO", "I-PRO", "B-TEM", "I-TEM", "B-COM", "I-COM", "B-PAR", "I-PAR", "B-CIT", "I-CIT",
    "B-MOU", "I-MOU", "B-HAM", "I-HAM", "B-AWA", "I-AWA", "B-VIS", "I-VIS", "B-FES", "I-FES", "B-ISL", "I-ISL",
    "B-TOW", "I-TOW", "B-VIL", "I-VIL", "B-CHU", "I-CHU", "B-PAG", "I-PAG", "B-BEA", "I-BEA", "B-WAR", "I-WAR",
    "B-WAT", "I-WAT", "B-SA", "I-SA", "B-SER", "I-SER", "B-STR", "I-STR", "B-NUN", "I-NUN", "B-PAL", "I-PAL",
    "B-VOL", "I-VOL", "B-HIL", "I-HIL", "B-MAR", "I-MAR", "B-VAL", "I-VAL", "B-PROD", "I-PROD", "B-DIS", "I-DIS",
    "B-FOO", "I-FOO", "B-DISH", "I-DISH", "B-DRI", "I-DRI"
]

line = "King Garden is located in Thanh Thuy, Phu Tho province"
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
ner_rs = nlp(line)
for k in ner_rs:
    print(custom_labels[int(str(k["entity"]).replace("LABEL_", ""))], "-", k["word"])
```
# Authors
1. Phuc Do, University of Information Technology, Ho Chi Minh national university, Vietnam
Email: <[email protected]>
Link *[Google scholar](https://scholar.google.com/citations?user=qv1WUzcAAAAJ&hl=vi)*
2. Truong H. V. Phan, Van Lang university, Ho Chi Minh city, Vietnam
Email: <[email protected]>
Link *[Google scholar](https://scholar.google.com/citations?hl=vi&user=cDexuHEAAAAJ)*
# Citation
If you use the model in your work, please cite our paper
Phan, T.H.V., Do, P. NER2QUES: combining named entity recognition and sequence to sequence to automatically generating Vietnamese questions. Neural Comput & Applic (2021). https://doi.org/10.1007/s00521-021-06477-7 |
tugstugi/bert-base-mongolian-uncased | 1a930e57ca53fe4e35a530b4d59f4662dfd77618 | 2021-05-20T08:13:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"uncased",
"autotrain_compatible"
]
| fill-mask | false | tugstugi | null | tugstugi/bert-base-mongolian-uncased | 9 | null | transformers | 12,419 | ---
language: "mn"
tags:
- bert
- mongolian
- uncased
---
# BERT-BASE-MONGOLIAN-UNCASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-uncased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-uncased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = 'Миний [MASK] хоол идэх нь тун чухал.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
#{'sequence': 'миний хувьд хоол идэх нь тун чухал.', 'score': 0.7889143824577332, 'token': 126, 'token_str': 'хувьд'}
#{'sequence': 'миний бодлоор хоол идэх нь тун чухал.', 'score': 0.18616807460784912, 'token': 6106, 'token_str': 'бодлоор'}
#{'sequence': 'миний зүгээс хоол идэх нь тун чухал.', 'score': 0.004825591575354338, 'token': 761, 'token_str': 'зүгээс'}
#{'sequence': 'миний биед хоол идэх нь тун чухал.', 'score': 0.0015743684489279985, 'token': 3010, 'token_str': 'биед'}
#{'sequence': 'миний тухайд хоол идэх нь тун чухал.', 'score': 0.0014919431414455175, 'token': 1712, 'token_str': 'тухайд'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
tugstugi/bert-large-mongolian-cased | 0536bffe97464e7afd102e4145bc1359753a5f19 | 2021-05-20T08:16:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"mn",
"arxiv:1810.04805",
"transformers",
"mongolian",
"cased",
"autotrain_compatible"
]
| fill-mask | false | tugstugi | null | tugstugi/bert-large-mongolian-cased | 9 | null | transformers | 12,420 | ---
language: "mn"
tags:
- bert
- mongolian
- cased
---
# BERT-LARGE-MONGOLIAN-CASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)
## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs.
This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-cased', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-cased')
## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
## example ##
input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.'
output_ = pipe(input_)
for i in range(len(output_)):
print(output_[i])
## output ##
# {'sequence': 'Монгол улсын нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.9779232740402222, 'token': 1176, 'token_str': 'нийслэл'}
# {'sequence': 'Монгол улсын Нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.015034765936434269, 'token': 4059, 'token_str': 'Нийслэл'}
# {'sequence': 'Монгол улсын Ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0021413620561361313, 'token': 325, 'token_str': 'Ерөнхийлөгч'}
# {'sequence': 'Монгол улсын ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0008035294013097882, 'token': 1215, 'token_str': 'ерөнхийлөгч'}
# {'sequence': 'Монгол улсын нийслэлийн Улаанбаатар хотоос ярьж байна.', 'score': 0.0006434018723666668, 'token': 356, 'token_str': 'нийслэлийн'}
```
## Training data
Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]
### BibTeX entry and citation info
```bibtex
@misc{mongolian-bert,
author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
title = {BERT Pretrained Models on Mongolian Datasets},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```
|
tugstugi/wav2vec2-large-xlsr-53-mongolian | c9f41223fbca3096565900c2c79a5a23cc793dfe | 2021-03-22T07:19:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | tugstugi | null | tugstugi/wav2vec2-large-xlsr-53-mongolian | 9 | null | transformers | 12,421 | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Tugstugi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 42.80
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("tugstugi/wav2vec2-large-xlsr-53-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.80 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found ??? |
uclanlp/plbart-multi_task-interpreted | b30d3b2ffd0180b6e54e67104593b04f305c2dc0 | 2022-03-02T07:43:33.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-interpreted | 9 | null | transformers | 12,422 | Entry not found |
vasilis/wav2vec2-large-xlsr-53-finnish | bd3411af3e3e7040fc0ffaae329394e562b368e0 | 2021-03-29T02:30:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | vasilis | null | vasilis/wav2vec2-large-xlsr-53-finnish | 9 | null | transformers | 12,423 | ---
language: fi
datasets:
- common_voice
- CSS10 finnish: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - finnish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 38.335242
- name: Test CER
type: cer
value: 6.552408
---
# Wav2Vec2-Large-XLSR-53-finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and [CSS10 Finnish: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
replacements = {"…": "", "–": ''}
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for key, value in replacements.items():
batch["sentence"] = batch["sentence"].replace(key, value)
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 38.335242 %
## Training
The Common Voice train dataset was used for training. All of `CSS10 Finnish` was also used, with the normalized transcripts.
After 20000 steps the model was fine-tuned on the Common Voice train and validation sets for 2000 more steps.
|
versae/byt5-base-finetuned-modernisa | 31d854cd8e4606060500a7f99c4b484cb0f5ea38 | 2022-07-20T10:25:00.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:versae/modernisa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | versae | null | versae/byt5-base-finetuned-modernisa | 9 | 1 | transformers | 12,424 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
datasets:
- versae/modernisa
model-index:
- name: byt5-base-finetuned-modernisa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-base-finetuned-modernisa
This model is a fine-tuned version of [google/byt5-base](https://huggingface.co/google/byt5-base) on the [versae/modernisa](https://huggingface.co/datasets/versae/modernisa) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1176
- Bleu: 44.888
- Gen Len: 18.4465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.1474 | 0.35 | 10000 | 0.1360 | 42.8789 | 18.4441 |
| 0.1328 | 0.71 | 20000 | 0.1303 | 43.5394 | 18.4368 |
| 0.1216 | 1.06 | 30000 | 0.1245 | 44.1557 | 18.4384 |
| 0.1167 | 1.42 | 40000 | 0.1219 | 44.1961 | 18.4449 |
| 0.1065 | 1.77 | 50000 | 0.1192 | 44.7353 | 18.443 |
| 0.099 | 2.13 | 60000 | 0.1195 | 44.522 | 18.4524 |
| 0.088 | 2.48 | 70000 | 0.1192 | 44.8243 | 18.4441 |
| 0.0907 | 2.84 | 80000 | 0.1176 | 44.888 | 18.4465 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
voidful/bart-base-chinese | e076b7963878bee120655f794c6d4fbb2ff03595 | 2022-02-08T17:09:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | voidful | null | voidful/bart-base-chinese | 9 | null | transformers | 12,425 | Entry not found |
w11wo/javanese-gpt2-small | 7b9c46c1fbf15388c6278d545d04c739e65d3c87 | 2022-02-14T16:19:46.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"jv",
"dataset:wikipedia",
"transformers",
"javanese-gpt2-small",
"license:mit"
]
| text-generation | false | w11wo | null | w11wo/javanese-gpt2-small | 9 | null | transformers | 12,426 | ---
language: jv
tags:
- javanese-gpt2-small
license: mit
datasets:
- wikipedia
widget:
- text: "Jenengku Budi, saka Indonesia"
---
## Javanese GPT-2 Small
Javanese GPT-2 Small is a language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English GPT-2 model](https://huggingface.co/transformers/model_doc/gpt2.html) and is later fine-tuned on the Javanese dataset. Many of the techniques used
are based on a [notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)/[blog](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) shared by [Pierre Guillou](https://medium.com/@pierre_guillou), in which he fine-tuned the English GPT-2 model on a Portuguese dataset.
Frameworks used include HuggingFace's [Transformers](https://huggingface.co/transformers) and fast.ai's [Deep Learning library](https://docs.fast.ai/). PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training /Validation data (text) |
|-----------------------|---------|-------------|-------------------------------------|
| `javanese-gpt2-small` | 124M | GPT-2 Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
Before fine-tuning, the English GPT-2 model went through a validation step just to see how the model fairs prior to training.
| valid loss | perplexity |
|------------|------------|
| 10.845 | 51313.62 |
The model was then trained afterwards for 5 epochs and the following are the results.
| epoch | train loss | valid loss | perplexity | total time |
|-------|------------|------------|------------|------------|
| 0 | 4.336 | 4.110 | 60.94 | 22:28 |
| 1 | 3.598 | 3.543 | 34.58 | 23:27 |
| 2 | 3.161 | 3.331 | 27.98 | 24:17 |
| 3 | 2.974 | 3.265 | 26.18 | 25:03 |
| 4 | 2.932 | 3.234 | 25.39 | 25:06 |
## How to Use (PyTorch)
### Load Model and Byte-level Tokenizer
```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel
pretrained_name = "w11wo/javanese-gpt2-small"
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
tokenizer.model_max_length = 1024
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
```
### Generate a Sequence
```python
# sample prompt
prompt = "Jenengku Budi, saka Indonesia"
input_ids = tokenizer.encode(prompt, return_tensors='pt')
model.eval()
# generate output using top-k sampling
sample_outputs = model.generate(input_ids,
pad_token_id=50256,
do_sample=True,
max_length=40,
min_length=40,
top_k=40,
num_return_sequences=1)
for i, sample_output in enumerate(sample_outputs):
print(tokenizer.decode(sample_output.tolist()))
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Credits
Major thanks to Pierre Guillou for sharing his work, which did not only enable me to realize this project but also taught me tons of new, exciting stuff.
## Author
Javanese GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
wietsedv/wav2vec2-large-xlsr-53-dutch | be3c679adfb9b7ceda54f0791768bd900d2fa374 | 2021-03-28T18:23:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | wietsedv | null | wietsedv/wav2vec2-large-xlsr-53-dutch | 9 | null | transformers | 12,427 | ---
language: nl
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Dutch XLSR Wav2Vec2 Large 53 by Wietse de Vries
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice nl
type: common_voice
args: nl
metrics:
- name: Test WER
type: wer
value: 17.09
---
# Wav2Vec2-Large-XLSR-53-Dutch
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "nl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dutch test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "nl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.09 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
x-tech/mt5-translate-zh-yue | cd3b461c1ae89cf4d73797b274ea60fcaf31c4c9 | 2022-06-04T09:33:42.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"zh",
"yue",
"dataset:x-tech/cantonese-mandarin-translations",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | x-tech | null | x-tech/mt5-translate-zh-yue | 9 | null | transformers | 12,428 | ---
language:
- zh
- yue
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output
results: []
datasets:
- x-tech/cantonese-mandarin-translations
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on dataset [x-tech/cantonese-mandarin-translations](https://huggingface.co/datasets/x-tech/cantonese-mandarin-translations).
## Model description
The model translates Mandarin sentences to Cantonese.
## Intended uses & limitations
When you use the model, please make sure to prefix the text you want to translate with `translate mandarin to cantonese: ` (note the space after the colon), as in the sketch below.
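A minimal sketch (not part of the original card), assuming the standard seq2seq generation API; the example sentence is made up:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("x-tech/mt5-translate-zh-yue")
model = AutoModelForSeq2SeqLM.from_pretrained("x-tech/mt5-translate-zh-yue")

text = "translate mandarin to cantonese: 你叫什么名字?"  # note the space after the colon
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```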
## Training and evaluation data
Training Dataset: [x-tech/cantonese-mandarin-translations](https://huggingface.co/datasets/x-tech/cantonese-mandarin-translations)
## Training procedure
Training is based on [example in transformers library](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
Since we still need to set up a validation set, we do not have any training results yet.
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
xiongjie/face-expression-ja | f2c431ddbf25259b731cadfcbb76a343295e24db | 2021-11-07T16:27:35.000Z | [
"pytorch",
"onnx",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | xiongjie | null | xiongjie/face-expression-ja | 9 | null | transformers | 12,429 | Entry not found |
xkang/bert-finetuned-ner | e8a4617bde4bb08cdb58c8fdfec8daf956dc000c | 2021-12-21T07:16:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | xkang | null | xkang/bert-finetuned-ner | 9 | null | transformers | 12,430 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9392329403951519
- name: Recall
type: recall
value: 0.9520363513968361
- name: F1
type: f1
value: 0.9455913079816131
- name: Accuracy
type: accuracy
value: 0.9864308000235474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0634
- Precision: 0.9392
- Recall: 0.9520
- F1: 0.9456
- Accuracy: 0.9864
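As a hedged illustration (not part of the original card), the model can be run through the token-classification pipeline; the example sentence is made up.
```python
from transformers import pipeline

ner = pipeline("token-classification", model="xkang/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))  # hypothetical example sentence
```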
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0866 | 1.0 | 1756 | 0.0736 | 0.9157 | 0.9322 | 0.9239 | 0.9816 |
| 0.0382 | 2.0 | 3512 | 0.0663 | 0.9326 | 0.9472 | 0.9398 | 0.9855 |
| 0.0226 | 3.0 | 5268 | 0.0634 | 0.9392 | 0.9520 | 0.9456 | 0.9864 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
xysmalobia/sequence_classification | 1ad5020216d5b7bf0728d59e3a305357add13e45 | 2021-11-14T11:57:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | xysmalobia | null | xysmalobia/sequence_classification | 9 | null | transformers | 12,431 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sequence_classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8943661971830987
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sequence_classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7738
- Accuracy: 0.8529
- F1: 0.8944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3519 | 0.8627 | 0.9 |
| 0.4872 | 2.0 | 918 | 0.6387 | 0.8333 | 0.8893 |
| 0.2488 | 3.0 | 1377 | 0.7738 | 0.8529 | 0.8944 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
yabramuvdi/bert-sector | 5948066259efdae305d6dba9e3474dc78e8b67c9 | 2022-02-11T14:58:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | yabramuvdi | null | yabramuvdi/bert-sector | 9 | null | transformers | 12,432 | Entry not found |
yair/SummaryGeneration-sagemaker3 | a8b092c3bcd072d742b53b67ddc54b173f3ecd4e | 2021-05-19T01:16:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | yair | null | yair/SummaryGeneration-sagemaker3 | 9 | 1 | transformers | 12,433 |
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
---

- Training: 3000 examples
|
yihahn/ner_2006 | 8341ae6359666ed1e199f6a6532e3ddf675f6672 | 2021-12-19T15:58:26.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | yihahn | null | yihahn/ner_2006 | 9 | null | transformers | 12,434 | Entry not found |
zhuqing/distilbert-uncased-exp3-feminist | 747f2b1f512c9592354a16d7def5b3268fa02802 | 2021-08-29T06:45:13.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | zhuqing | null | zhuqing/distilbert-uncased-exp3-feminist | 9 | null | transformers | 12,435 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-cs | 037bc5e2cf7ae190ec4e76d698461370291e500b | 2022-02-25T09:58:09.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"cs",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-cs | 9 | null | transformers | 12,436 |
---
language:
- cs
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-cs
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 83.4
- type: accuracy
name: Dutch Test accuracy
value: 83.9
- type: accuracy
name: German Test accuracy
value: 83.2
- type: accuracy
name: Italian Test accuracy
value: 81.5
- type: accuracy
name: French Test accuracy
value: 83.5
- type: accuracy
name: Spanish Test accuracy
value: 85.9
- type: accuracy
name: Russian Test accuracy
value: 91.2
- type: accuracy
name: Swedish Test accuracy
value: 88.3
- type: accuracy
name: Norwegian Test accuracy
value: 79.6
- type: accuracy
name: Danish Test accuracy
value: 85.4
- type: accuracy
name: Low Saxon Test accuracy
value: 55.4
- type: accuracy
name: Akkadian Test accuracy
value: 40.3
- type: accuracy
name: Armenian Test accuracy
value: 84.2
- type: accuracy
name: Welsh Test accuracy
value: 66.4
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.6
- type: accuracy
name: Albanian Test accuracy
value: 76.8
- type: accuracy
name: Slovenian Test accuracy
value: 87.4
- type: accuracy
name: Guajajara Test accuracy
value: 37.3
- type: accuracy
name: Kurmanji Test accuracy
value: 79.3
- type: accuracy
name: Turkish Test accuracy
value: 77.5
- type: accuracy
name: Finnish Test accuracy
value: 83.3
- type: accuracy
name: Indonesian Test accuracy
value: 83.0
- type: accuracy
name: Ukrainian Test accuracy
value: 92.8
- type: accuracy
name: Polish Test accuracy
value: 92.1
- type: accuracy
name: Portuguese Test accuracy
value: 86.3
- type: accuracy
name: Kazakh Test accuracy
value: 80.2
- type: accuracy
name: Latin Test accuracy
value: 79.7
- type: accuracy
name: Old French Test accuracy
value: 59.4
- type: accuracy
name: Buryat Test accuracy
value: 60.3
- type: accuracy
name: Kaapor Test accuracy
value: 22.5
- type: accuracy
name: Korean Test accuracy
value: 58.9
- type: accuracy
name: Estonian Test accuracy
value: 85.7
- type: accuracy
name: Croatian Test accuracy
value: 94.9
- type: accuracy
name: Gothic Test accuracy
value: 27.2
- type: accuracy
name: Swiss German Test accuracy
value: 48.8
- type: accuracy
name: Assyrian Test accuracy
value: 15.0
- type: accuracy
name: North Sami Test accuracy
value: 43.4
- type: accuracy
name: Naija Test accuracy
value: 41.6
- type: accuracy
name: Latvian Test accuracy
value: 85.9
- type: accuracy
name: Chinese Test accuracy
value: 31.9
- type: accuracy
name: Tagalog Test accuracy
value: 72.0
- type: accuracy
name: Bambara Test accuracy
value: 27.8
- type: accuracy
name: Lithuanian Test accuracy
value: 86.5
- type: accuracy
name: Galician Test accuracy
value: 85.1
- type: accuracy
name: Vietnamese Test accuracy
value: 67.4
- type: accuracy
name: Greek Test accuracy
value: 84.2
- type: accuracy
name: Catalan Test accuracy
value: 84.6
- type: accuracy
name: Czech Test accuracy
value: 98.4
- type: accuracy
name: Erzya Test accuracy
value: 51.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 48.7
- type: accuracy
name: Thai Test accuracy
value: 52.4
- type: accuracy
name: Marathi Test accuracy
value: 87.7
- type: accuracy
name: Basque Test accuracy
value: 74.0
- type: accuracy
name: Slovak Test accuracy
value: 95.8
- type: accuracy
name: Kiche Test accuracy
value: 36.8
- type: accuracy
name: Yoruba Test accuracy
value: 28.3
- type: accuracy
name: Warlpiri Test accuracy
value: 43.3
- type: accuracy
name: Tamil Test accuracy
value: 84.3
- type: accuracy
name: Maltese Test accuracy
value: 33.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 59.5
- type: accuracy
name: Icelandic Test accuracy
value: 79.1
- type: accuracy
name: Mbya Guarani Test accuracy
value: 34.1
- type: accuracy
name: Urdu Test accuracy
value: 61.9
- type: accuracy
name: Romanian Test accuracy
value: 83.8
- type: accuracy
name: Persian Test accuracy
value: 80.1
- type: accuracy
name: Apurina Test accuracy
value: 48.4
- type: accuracy
name: Japanese Test accuracy
value: 19.4
- type: accuracy
name: Hungarian Test accuracy
value: 79.1
- type: accuracy
name: Hindi Test accuracy
value: 65.8
- type: accuracy
name: Classical Chinese Test accuracy
value: 15.7
- type: accuracy
name: Komi Permyak Test accuracy
value: 49.2
- type: accuracy
name: Faroese Test accuracy
value: 76.1
- type: accuracy
name: Sanskrit Test accuracy
value: 35.8
- type: accuracy
name: Livvi Test accuracy
value: 65.9
- type: accuracy
name: Arabic Test accuracy
value: 80.4
- type: accuracy
name: Wolof Test accuracy
value: 38.2
- type: accuracy
name: Bulgarian Test accuracy
value: 91.9
- type: accuracy
name: Akuntsu Test accuracy
value: 38.0
- type: accuracy
name: Makurap Test accuracy
value: 21.2
- type: accuracy
name: Kangri Test accuracy
value: 48.4
- type: accuracy
name: Breton Test accuracy
value: 58.2
- type: accuracy
name: Telugu Test accuracy
value: 82.1
- type: accuracy
name: Cantonese Test accuracy
value: 37.2
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 48.4
- type: accuracy
name: Karelian Test accuracy
value: 69.5
- type: accuracy
name: Upper Sorbian Test accuracy
value: 81.0
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.4
- type: accuracy
name: Irish Test accuracy
value: 67.8
- type: accuracy
name: Nayini Test accuracy
value: 47.4
- type: accuracy
name: Munduruku Test accuracy
value: 28.1
- type: accuracy
name: Manx Test accuracy
value: 37.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 40.2
- type: accuracy
name: Afrikaans Test accuracy
value: 78.2
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 43.5
- type: accuracy
name: Belarusian Test accuracy
value: 90.1
- type: accuracy
name: Serbian Test accuracy
value: 96.0
- type: accuracy
name: Moksha Test accuracy
value: 48.5
- type: accuracy
name: Western Armenian Test accuracy
value: 74.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 58.0
- type: accuracy
name: Khunsari Test accuracy
value: 39.2
- type: accuracy
name: Hebrew Test accuracy
value: 87.5
- type: accuracy
name: Uyghur Test accuracy
value: 72.1
- type: accuracy
name: Chukchi Test accuracy
value: 35.4
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Czech
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cs")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cs")
```
|
satkinson/DialoGPT-small-marvin | 8eb5821925da948c44ae8de53f02f6a62a25e839 | 2022-04-26T16:30:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
]
| conversational | false | satkinson | null | satkinson/DialoGPT-small-marvin | 9 | 0 | transformers | 12,437 | ---
language:
- en
thumbnail:
tags:
- conversational
metrics:
- perplexity
---
## DialoGPT model fine-tuned using Amazon's Topical Chat Dataset
This model is fine-tuned from the original [DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
This model was fine-tuned on a subset of messages from [Amazon's Topical Chat dataset](https://www.kaggle.com/arnavsharmaas/chatbot-dataset-topical-chat) (due to processing limitations, I restricted my training data to 50,000 observations).
The dataset spans 8 broad topics and contains conversation partners who do not have defined roles. It was created with the goal of [aiding in the effort to build a socialbot that can have deep, engaging open-domain conversations with humans](https://m.media-amazon.com/images/G/01/amazon.jobs/3079_Paper._CB1565131710_.pdf).
The eight broad topics are:
- fashion
- politics
- books
- sports
- general entertainment
- music
- science and technology
- movies
<br><br>
<b>Sample dialogue:</b>
|Role | Response |
|---------|--------|
|User: | Do you like Kanye? |
| Bot: | He is okay, I like some of his music |
|User: | Any other thoughts on him? |
| Bot: | He is not a great person, but I like some of his music, so I'm not too surprised |
|User: | What does that say about you? |
| Bot: | I think he's trying to be funny. It's just that he's always out there and out there. Always out there. That's all I'll say. |
|User: | You're strange. |
| Bot: | Yeah, I guess so. |
## Using the model
Example code for trying out the model (taken directly from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("satkinson/DialoGPT-small-marvin")
model = AutoModelWithLMHead.from_pretrained("satkinson/DialoGPT-small-marvin")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
ffalcao/distilbert-base-uncased-finetuned-emotion | 2ef43eb9bc63528fe1249f7886dc8196955f3fb8 | 2022-02-25T13:12:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ffalcao | null | ffalcao/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,438 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246964318251509
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9245
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8356 | 1.0 | 250 | 0.3296 | 0.901 | 0.8977 |
| 0.254 | 2.0 | 500 | 0.2237 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
frozenwalker/T5_pubmedqa_question_generation | 2d6c195b7ec6bf6eac55b74889fc44eb5728540d | 2022-04-17T11:00:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | frozenwalker | null | frozenwalker/T5_pubmedqa_question_generation | 9 | null | transformers | 12,439 | Entry not found |
ghadeermobasher/BC5CDR-Disease-imbalanced-scibert_scivocab_uncased | 86a790d36329be689d13e6a6e0bf3291e8f8d548 | 2022-02-25T18:32:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Disease-imbalanced-scibert_scivocab_uncased | 9 | null | transformers | 12,440 | Entry not found |
ghadeermobasher/BC5CDR-Disease-imbalanced-biobert-v1.1 | c0d4753e9dee7ee4545e5414461e9a30ea49de3f | 2022-02-25T18:29:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Disease-imbalanced-biobert-v1.1 | 9 | null | transformers | 12,441 | Entry not found |
ghadeermobasher/BC5CDR-Disease-imbalanced-BiomedNLP-PubMedBERT-base-uncased-abstract | c8b4b4485f987ebc70b78fce86b59fcf29b7dc2e | 2022-02-25T18:37:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Disease-imbalanced-BiomedNLP-PubMedBERT-base-uncased-abstract | 9 | null | transformers | 12,442 | Entry not found |
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8 | 7cb9b8d75f7486001a36167d2cc3084ffe027bfa | 2022-02-25T23:13:55.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | anas-awadalla | null | anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8 | 9 | null | transformers | 12,443 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-8
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 | 6ab5ce27aabaad71acb6d976c5479ab414ae3483 | 2022-02-26T03:36:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 | 9 | null | transformers | 12,444 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
bookbot/wav2vec2-xls-r-adult-child-cls | 7535f3cdfcb836ac102621e1c0ff01256850acef | 2022-02-26T13:41:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"en",
"arxiv:2111.09296",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | bookbot | null | bookbot/wav2vec2-xls-r-adult-child-cls | 9 | null | transformers | 12,445 | ---
language: en
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-xls-r-adult-child-cls
results: []
---
# Wav2Vec2 XLS-R Adult/Child Speech Classifier
Wav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a private adult/child speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| -------------------------------- | ------- | ----- | ----------------------------------------- |
| `wav2vec2-xls-r-adult-child-cls` | 300M | XLS-R | Adult/Child Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.1851 | 94.69% | 0.9508 |
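For reference, a minimal inference sketch using the standard `audio-classification` pipeline might look like this (the audio path is a placeholder; wav2vec2-style models typically expect 16 kHz mono audio):
```python
from transformers import pipeline

# Load the classifier directly from the Hub.
classifier = pipeline(
    "audio-classification",
    model="bookbot/wav2vec2-xls-r-adult-child-cls",
)

# "speech.wav" is a placeholder path to a local audio file.
predictions = classifier("speech.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```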
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.2906 | 1.0 | 383 | 0.1856 | 0.9372 | 0.9421 |
| 0.1749 | 2.0 | 766 | 0.1925 | 0.9418 | 0.9465 |
| 0.1681 | 3.0 | 1149 | 0.1893 | 0.9414 | 0.9459 |
| 0.1295 | 4.0 | 1532 | 0.1851 | 0.9469 | 0.9508 |
| 0.2031 | 5.0 | 1915 | 0.1944 | 0.9423 | 0.9460 |
## Disclaimer
Do consider the biases that come from the pre-training datasets, as they may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
spy24/autonlp-paraphrasing-607217177 | 265d4d87ba9e442fc55bcacb813c5259fe4c8c86 | 2022-03-02T14:26:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-paraphrasing",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | spy24 | null | spy24/autonlp-paraphrasing-607217177 | 9 | 1 | transformers | 12,446 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-paraphrasing
co2_eq_emissions: 193.70003779879124
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 607217177
- CO2 Emissions (in grams): 193.70003779879124
## Validation Metrics
- Loss: 1.2881609201431274
- Rouge1: 48.3375
- Rouge2: 25.9756
- RougeL: 42.2748
- RougeLsum: 42.2797
- Gen Len: 18.4359
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-paraphrasing-607217177
``` |
ncoop57/cm_code_clippy | 5a6f60cca7ab1c0e13fe5bbde7337b4bd3edd4cb | 2022-03-10T21:48:16.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | ncoop57 | null | ncoop57/cm_code_clippy | 9 | null | transformers | 12,447 | Entry not found |
jcai1/sentence_similarity_concierge | 18e4d07cb8c19ec981b90fd45dfcbd8ea6f89a2e | 2022-03-02T15:04:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jcai1 | null | jcai1/sentence_similarity_concierge | 9 | 1 | transformers | 12,448 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentence_similarity_concierge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_similarity_concierge
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Accuracy: 0.9748
- F1: 0.9680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 402 | 0.2334 | 0.9412 | 0.9263 |
| 0.2834 | 2.0 | 804 | 0.1656 | 0.9608 | 0.9493 |
| 0.1073 | 3.0 | 1206 | 0.1165 | 0.9748 | 0.9680 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
pjheslin/distilbert-base-uncased-finetuned-emotion | 6118e6bb58ff01e6baef25c8ea4abd42374281df | 2022-03-02T22:49:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pjheslin | null | pjheslin/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,449 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254862165828515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8417 | 1.0 | 250 | 0.3260 | 0.9045 | 0.9006 |
| 0.2569 | 2.0 | 500 | 0.2227 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
yoavgur/gpt2-bash-history-baseline3 | 4ddad5b343e7d818ead92a4fb6367c09f0ca0158 | 2022-03-03T00:54:17.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | yoavgur | null | yoavgur/gpt2-bash-history-baseline3 | 9 | null | transformers | 12,450 | Entry not found |
hf-internal-testing/tiny-random-data2vec-audio-frame | 19d95e36bfd3610b469b899d5d17e4edab81b70b | 2022-03-03T12:25:43.000Z | [
"pytorch",
"data2vec-audio",
"audio-frame-classification",
"transformers"
]
| null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-data2vec-audio-frame | 9 | null | transformers | 12,451 | Entry not found |
Kevincp560/distilbart-xsum-12-1-finetuned-pubmed | ed7252ee2b8279dd3caa007cdcbeb13b16aa0b7f | 2022-03-05T00:06:55.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Kevincp560 | null | Kevincp560/distilbart-xsum-12-1-finetuned-pubmed | 9 | null | transformers | 12,452 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: distilbart-xsum-12-1-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 27.0012
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-12-1-finetuned-pubmed
This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-1](https://huggingface.co/sshleifer/distilbart-xsum-12-1) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8236
- Rouge1: 27.0012
- Rouge2: 12.728
- Rougel: 19.8685
- Rougelsum: 25.0485
- Gen Len: 59.969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.3604 | 1.0 | 4000 | 3.1575 | 25.0078 | 11.5381 | 18.4246 | 23.1605 | 54.8935 |
| 3.0697 | 2.0 | 8000 | 2.9478 | 26.4947 | 12.5411 | 19.4328 | 24.6123 | 57.948 |
| 2.8638 | 3.0 | 12000 | 2.8672 | 26.8856 | 12.7568 | 19.8949 | 24.8745 | 59.6245 |
| 2.7243 | 4.0 | 16000 | 2.8347 | 26.7347 | 12.5152 | 19.6516 | 24.7756 | 60.439 |
| 2.6072 | 5.0 | 20000 | 2.8236 | 27.0012 | 12.728 | 19.8685 | 25.0485 | 59.969 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
giggio/Far75BrBERT-base | 8538f365846a39de8beac71a01d85c8959c27bd6 | 2022-03-07T01:20:19.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | giggio | null | giggio/Far75BrBERT-base | 9 | null | transformers | 12,453 | Entry not found |
kevinjesse/bert-MT4TS | c18662009a6653d291aeabcf5b1b16929941b5ac | 2022-03-09T21:06:34.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kevinjesse | null | kevinjesse/bert-MT4TS | 9 | null | transformers | 12,454 | Entry not found |
SuperAI2-Machima/mt5-small-thai_translation_th-en_en-th | 47eb0fd41bf8e87d6d7c4bf2f7f671f66f3d013b | 2022-03-09T01:21:31.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | SuperAI2-Machima | null | SuperAI2-Machima/mt5-small-thai_translation_th-en_en-th | 9 | null | transformers | 12,455 | Entry not found |
kyleinincubated/autonlp-abbb-622117836 | 659190462097b1eb3d9c44c3610c49b52fdff618 | 2022-03-09T09:30:07.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:kyleinincubated/autonlp-data-abbb",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | kyleinincubated | null | kyleinincubated/autonlp-abbb-622117836 | 9 | null | transformers | 12,456 | ---
tags: autonlp
language: zh
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kyleinincubated/autonlp-data-abbb
co2_eq_emissions: 2.22514962526191
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 622117836
- CO2 Emissions (in grams): 2.22514962526191
## Validation Metrics
- Loss: 1.2368708848953247
- Accuracy: 0.7973333333333333
- Macro F1: 0.46009076588978487
- Micro F1: 0.7973333333333333
- Weighted F1: 0.7712349116681224
- Macro Precision: 0.4527155928883903
- Micro Precision: 0.7973333333333333
- Weighted Precision: 0.7610710955220162
- Macro Recall: 0.4947868561369568
- Micro Recall: 0.7973333333333333
- Weighted Recall: 0.7973333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kyleinincubated/autonlp-abbb-622117836
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kyleinincubated/autonlp-abbb-622117836", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kyleinincubated/autonlp-abbb-622117836", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
OrfeasTsk/bert-base-uncased-finetuned-quac | 0b0ca02059b9c393bb0a4c0c359fb9c2fa3612e8 | 2022-03-09T13:03:49.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | OrfeasTsk | null | OrfeasTsk/bert-base-uncased-finetuned-quac | 9 | null | transformers | 12,457 | { 'max_seq_length': 384,
'batch_size': 8,
'learning_rate': {'val': 5e-5, 'scheduler': 'Linear'},
'max_clip_norm': None,
'epochs': 2
} |
Chijioke/autonlp-mono-625317956 | 189c02686273d207012c2f935c1a723ff7d6b321 | 2022-03-10T12:46:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Chijioke/autonlp-data-mono",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Chijioke | null | Chijioke/autonlp-mono-625317956 | 9 | null | transformers | 12,458 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Chijioke/autonlp-data-mono
co2_eq_emissions: 1.1406456838043837
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 625317956
- CO2 Emissions (in grams): 1.1406456838043837
## Validation Metrics
- Loss: 0.513037919998169
- Accuracy: 0.8982035928143712
- Macro F1: 0.7843756230226546
- Micro F1: 0.8982035928143712
- Weighted F1: 0.8891653474608059
- Macro Precision: 0.8210878091622635
- Micro Precision: 0.8982035928143712
- Weighted Precision: 0.8888857327766032
- Macro Recall: 0.7731018645485747
- Micro Recall: 0.8982035928143712
- Weighted Recall: 0.8982035928143712
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Chijioke/autonlp-mono-625317956
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Chijioke/autonlp-mono-625317956", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Chijioke/autonlp-mono-625317956", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
edbeeching/decision-transformer-gym-halfcheetah-expert | f08afb8814076fbf1ac8651f37ff2ba12fce5d7b | 2022-06-29T19:20:32.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
]
| reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-halfcheetah-expert | 9 | null | transformers | 12,459 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on expert trajectories sampled from the Gym HalfCheetah environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym HalfCheetah environment.
The following normalization coefficients are required to use this model:
mean = [ -0.04489148, 0.03232588, 0.06034835, -0.17081226, -0.19480659, -0.05751596, 0.09701628, 0.03239211, 11.047426, -0.07997331, -0.32363534, 0.36297753, 0.42322603, 0.40836546, 1.1085187, -0.4874403, -0.0737481 ]
std = [0.04002118, 0.4107858, 0.54217845, 0.41522816, 0.23796624, 0.62036866, 0.30100912, 0.21737163, 2.2105937, 0.572586, 1.7255033, 11.844218, 12.06324, 7.0495934, 13.499867, 7.195647, 5.0264325]
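As a rough sketch, raw observations would be standardized with these coefficients before being fed to the model (the function name is illustrative; see the linked notebook for the full evaluation loop):
```python
import numpy as np

# The 17-dimensional coefficients listed above.
mean = np.array([-0.04489148, 0.03232588, 0.06034835, -0.17081226, -0.19480659,
                 -0.05751596, 0.09701628, 0.03239211, 11.047426, -0.07997331,
                 -0.32363534, 0.36297753, 0.42322603, 0.40836546, 1.1085187,
                 -0.4874403, -0.0737481])
std = np.array([0.04002118, 0.4107858, 0.54217845, 0.41522816, 0.23796624,
                0.62036866, 0.30100912, 0.21737163, 2.2105937, 0.572586,
                1.7255033, 11.844218, 12.06324, 7.0495934, 13.499867,
                7.195647, 5.0264325])

def normalize_state(state):
    """Standardize a raw HalfCheetah observation (shape (17,)) before feeding it to the model."""
    return (state - mean) / std
```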
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
malteos/aspect-scibert-method | 7e5373a774473ef7e873a0a4e317db74af5b79bb | 2022-03-16T13:50:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:mit"
]
| feature-extraction | false | malteos | null | malteos/aspect-scibert-method | 9 | null | transformers | 12,460 | ---
license: mit
---
|
facebook/regnet-y-002 | 72839f0679788150eb2b80d4a6a0f778c1a502c9 | 2022-06-30T10:22:35.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-y-002 | 9 | null | transformers | 12,461 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-1280-seer | 70e68f5410f2279646644a0d729b177818446759 | 2022-03-31T12:12:51.000Z | [
"pytorch",
"regnet",
"feature-extraction",
"arxiv:2202.08360",
"transformers",
"vision",
"license:apache-2.0"
]
| feature-extraction | false | facebook | null | facebook/regnet-y-1280-seer | 9 | null | transformers | 12,462 | ---
license: apache-2.0
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNetModel
RegNetModel model was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER).
Disclaimer: The team releasing RegNetModel did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on billions of random images from the internet.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetModel.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 1088, 7, 7]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
ronykroy/distilbert-base-uncased-finetuned-emotion | 54472afa8ad3e5b13897e24bc3128de0e2578335 | 2022-03-19T17:55:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ronykroy | null | ronykroy/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,463 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222310284051585
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8454 | 1.0 | 250 | 0.3308 | 0.8975 | 0.8937 |
| 0.2561 | 2.0 | 500 | 0.2334 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
trig/multiverse-third | 2a1434644a6a9acbd1cbd05ed2dfa8ef6db3c5f1 | 2022-03-21T04:29:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | trig | null | trig/multiverse-third | 9 | null | transformers | 12,464 | ---
tags:
- conversational
---
# multiverse the third |
pinkducky/Ross_Bot | 9cc2839f21c8eba3a8607b1131adcf7e5abaf2e1 | 2022-03-21T04:53:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | pinkducky | null | pinkducky/Ross_Bot | 9 | null | transformers | 12,465 | ---
tags:
- conversational
---
# My Awesome Model
|
mukayese/bart-base-turkish-sum | f2edda1cf2c0ad23b55a9e7c8b4310fcba4fa276 | 2022-03-22T14:09:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:mlsum",
"arxiv:2203.01215",
"transformers",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mukayese | null | mukayese/bart-base-turkish-sum | 9 | 1 | transformers | 12,466 | ---
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: bart-base-turkish-sum
results:
- task:
name: Summarization
type: summarization
dataset:
name: mlsum tu
type: mlsum
args: tu
metrics:
- name: Rouge1
type: rouge
value: 43.2049
---
# [Mukayese: Turkish NLP Strikes Back](https://arxiv.org/abs/2203.01215)
## Summarization: mukayese/bart-base-turkish-sum
_This model is uncased_. It was initialized from scratch and trained only on the mlsum/tu dataset, with no pre-training.
It achieves the following results on the evaluation set:
- Rouge1: 43.2049
- Rouge2: 30.7082
- Rougel: 38.1981
- Rougelsum: 39.9453
Check [this](https://arxiv.org/abs/2203.01215) paper for more details on the model and the dataset.
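For reference, a minimal usage sketch with the standard summarization pipeline might look like this (the input text and generation lengths are placeholders):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mukayese/bart-base-turkish-sum")

# Placeholder: replace with the Turkish article you want to summarize.
article = "..."
print(summarizer(article, max_length=128, min_length=16)[0]["summary_text"])
```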
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.2+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
### Citation
```
@misc{safaya-etal-2022-mukayese,
title={Mukayese: Turkish NLP Strikes Back},
author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
year={2022},
eprint={2203.01215},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
leonistor/distilbert-base-uncased-finetuned-emotion | afb1042b37cd0e3d1d8abb9306cee9acd4990b16 | 2022-03-22T13:45:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | leonistor | null | leonistor/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,467 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.917
- name: F1
type: f1
value: 0.9169673644092439
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2371
- Accuracy: 0.917
- F1: 0.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8733 | 1.0 | 250 | 0.3537 | 0.8955 | 0.8911 |
| 0.273 | 2.0 | 500 | 0.2371 | 0.917 | 0.9170 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rurupang/roberta-base-finetuned-sts | cb0c4a807dfc145a53735855d8cbea7d96ce0000 | 2022-03-24T01:54:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | rurupang | null | rurupang/roberta-base-finetuned-sts | 9 | null | transformers | 12,468 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: roberta-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.956039443806831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sts
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1999
- Pearsonr: 0.9560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 329 | 0.2462 | 0.9478 |
| 1.2505 | 2.0 | 658 | 0.1671 | 0.9530 |
| 1.2505 | 3.0 | 987 | 0.1890 | 0.9525 |
| 0.133 | 4.0 | 1316 | 0.2360 | 0.9548 |
| 0.0886 | 5.0 | 1645 | 0.2265 | 0.9528 |
| 0.0886 | 6.0 | 1974 | 0.2097 | 0.9518 |
| 0.0687 | 7.0 | 2303 | 0.2281 | 0.9523 |
| 0.0539 | 8.0 | 2632 | 0.2212 | 0.9542 |
| 0.0539 | 9.0 | 2961 | 0.1843 | 0.9532 |
| 0.045 | 10.0 | 3290 | 0.1999 | 0.9560 |
| 0.0378 | 11.0 | 3619 | 0.2357 | 0.9533 |
| 0.0378 | 12.0 | 3948 | 0.2134 | 0.9541 |
| 0.033 | 13.0 | 4277 | 0.2273 | 0.9540 |
| 0.03 | 14.0 | 4606 | 0.2148 | 0.9533 |
| 0.03 | 15.0 | 4935 | 0.2207 | 0.9534 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anthonny/dehatebert-mono-spanish-finetuned-sentiments_reviews_politicos | 1a96a5e29e111749c9bf28c0feab8630206a0d1b | 2022-03-22T17:57:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anthonny | null | anthonny/dehatebert-mono-spanish-finetuned-sentiments_reviews_politicos | 9 | null | transformers | 12,469 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: robertuito-sentiment-analysis-hate-finetuned-sentiments_reviews_politicos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robertuito-sentiment-analysis-hate-finetuned-sentiments_reviews_politicos
This model is a fine-tuned version of [Hate-speech-CNERG/dehatebert-mono-spanish](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2559
- Accuracy: 0.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.29 | 1.0 | 3595 | 0.2559 | 0.9368 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
leixu/distilbert-base-uncased-finetuned-emotion | 88d8ec8983a2d76896a555638239130f001544ba | 2022-03-23T13:24:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | leixu | null | leixu/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,470 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9213209184585894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.921
- F1: 0.9213
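For a quick check of the fine-tuned checkpoint, here is a minimal inference sketch (the example sentence is illustrative; the labels are the six emotion classes of the `emotion` dataset):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="leixu/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label and its score
print(classifier("I can't wait to see you again, this made my day!"))
```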
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8242 | 1.0 | 250 | 0.3108 | 0.9125 | 0.9107 |
| 0.2478 | 2.0 | 500 | 0.2183 | 0.921 | 0.9213 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jiexing/relation_t5_small | a3c118005f9f884a5e7a5d0a85c7472fe5174ec4 | 2022-03-24T01:58:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Jiexing | null | Jiexing/relation_t5_small | 9 | null | transformers | 12,471 | Entry not found |
FuriouslyAsleep/unhappyZebra100 | fbe1acc060471a780b68487c4e2d12944a07b1d0 | 2022-03-24T04:39:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:FuriouslyAsleep/autotrain-data-techDataClassifeier",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | FuriouslyAsleep | null | FuriouslyAsleep/unhappyZebra100 | 9 | null | transformers | 12,472 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- FuriouslyAsleep/autotrain-data-techDataClassifeier
co2_eq_emissions: 0.6969569001670619
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 664919631
- CO2 Emissions (in grams): 0.6969569001670619
## Validation Metrics
- Loss: 0.022509008646011353
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/FuriouslyAsleep/autotrain-techDataClassifeier-664919631
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("FuriouslyAsleep/autotrain-techDataClassifeier-664919631", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
etomoscow/T5_paraphrase_detector | 64e687a9b011b8a97edb8ad309d21f33efdc47a2 | 2022-03-24T07:52:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | etomoscow | null | etomoscow/T5_paraphrase_detector | 9 | null | transformers | 12,473 | ---
license: afl-3.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [PAWS](https://github.com/google-research-datasets/paws) for paraphrase detection (binary classification).
### Details of T5
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Binary Paraphrase Classification)
Dataset: ```PAWS``` [link](https://github.com/google-research-datasets/paws)
## Performance:
F1-score: 0.86
ROC-AUC score: 0.86
## Usage:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
# use GPU for better performance
device = torch.device('cuda')
tokenizer = T5Tokenizer.from_pretrained("etomoscow/T5_paraphrase_detector")
model = T5ForConditionalGeneration.from_pretrained("etomoscow/T5_paraphrase_detector").to(device)
text_1 = 'During her sophomore , junior and senior summers , she spent half of it with her Alaska team , and half playing , and living in Oregon .'
text_2 = 'During her second , junior and senior summers , she spent half of it with her Alaska team , half playing and living in Oregon.'
true_label = '1'
input_text = tokenizer.encode_plus(text_1 + ' <sep> ' + text_2, return_tensors='pt')
out = model.generate(input_text['input_ids'].to(device))
print(tokenizer.decode(out.squeeze(0), skip_special_tokens=True))
# 1
``` |
Helsinki-NLP/opus-mt-tc-base-uk-tr | 84f25309fe8c7bcb695fe146169ada96c3f940bb | 2022-06-01T13:10:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-uk-tr | 9 | null | transformers | 12,474 | ---
language:
- tr
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-uk-tr
results:
- task:
name: Translation ukr-tur
type: translation
args: ukr-tur
dataset:
name: flores101-devtest
type: flores_101
args: ukr tur devtest
metrics:
- name: BLEU
type: bleu
value: 20.5
- task:
name: Translation ukr-tur
type: translation
args: ukr-tur
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-tur
metrics:
- name: BLEU
type: bleu
value: 45.2
---
# opus-mt-tc-base-uk-tr
Neural machine translation model for translating from Ukrainian (uk) to Turkish (tr).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-07
* source language(s): ukr
* target language(s):
* valid target language labels:
* model: transformer-align
* data: opusTCv20210807+pft ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opusTCv20210807+pft_transformer-align_2022-03-07.zip)
* more information released models: [OPUS-MT ukr-tur README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>tur<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>tur<< Тисячі єн достатньо?",
">>tur<< Цюріх — місто у Швейцарії."
]
model_name = "pytorch-models/opus-mt-tc-base-uk-tr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Binlerce yen yeterli mi?
# Zürih, İsviçre'de bir şehirdir.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-tr")
print(pipe(">>tur<< Тисячі єн достатньо?"))
# expected output: Binlerce yen yeterli mi?
```
## Benchmarks
* test set translations: [opusTCv20210807+pft_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opusTCv20210807+pft_transformer-align_2022-03-07.test.txt)
* test set scores: [opusTCv20210807+pft_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opusTCv20210807+pft_transformer-align_2022-03-07.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ukr-tur | tatoeba-test-v2021-08-07 | 0.70938 | 45.2 | 2520 | 11927 |
| ukr-tur | flores101-devtest | 0.54001 | 20.5 | 1012 | 20253 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 22:02:24 EET 2022
* port machine: LM0-400-22516.local
|
blinoff/ru-gpt2-medium-rdf-2-text | 02ae5d11b58300022860ca02cfbaf90aa81ca38c | 2022-04-25T10:50:23.000Z | [
"pytorch",
"gpt2",
"ru",
"transformers"
]
| null | false | blinoff | null | blinoff/ru-gpt2-medium-rdf-2-text | 9 | null | transformers | 12,475 | ---
language:
- ru
---
Russian GPT2-medium model for RDF-triplet to text conversion.
https://github.com/pavel-blinov/ru-rdf2text
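A rough usage sketch is shown below. It assumes the checkpoint loads with the standard causal-LM classes and that RDF triples are serialized into a plain-text prompt; the serialization shown here is only a placeholder, so see the linked repository for the exact input format used during training.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "blinoff/ru-gpt2-medium-rdf-2-text"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical serialization of one RDF triple (subject, predicate, object)
prompt = "Юрий Гагарин | профессия | лётчик-космонавт"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_length=64,
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
The paper below describes the verbalization approach in more detail.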
```
@inproceedings{blinov-2020-semantic,
title = "Semantic Triples Verbalization with Generative Pre-Training Model",
author = "Blinov, Pavel",
booktitle = "Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)",
month = "12",
year = "2020",
address = "Dublin, Ireland (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.webnlg-1.17",
pages = "154--158",
abstract = "The paper devoted to the problem of automatic text generation from RDF triples. This problem was formalized and proposed as a part of the 2020 WebNLG challenge. We describe our approach to the RDF-to-text generation task based on a neural network model with the Generative Pre-Training (GPT-2) architecture. In particular, we outline a way of base GPT-2 model conversion to a model with language and classification heads and discuss the text generation methods. To research the parameters{'} influence on the end-task performance a series of experiments was carried out. We report the result metrics and conclude with possible improvement directions.",
}
```
|
vumichien/question-answering-bigbird-roberta-base | 3b58a2f584e1b8eb6aeb7d58c2f07aefd214fc1b | 2022-03-25T03:46:32.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | vumichien | null | vumichien/question-answering-bigbird-roberta-base | 9 | null | transformers | 12,476 | Entry not found |
DMetaSoul/sbert-chinese-dtm-domain-v1 | 4fa873824ae9625bad183aba723b20b97547d33f | 2022-04-04T07:25:03.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
]
| sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-dtm-domain-v1 | 9 | null | sentence-transformers | 12,477 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-dtm-domain-v1
This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model and was fine-tuned on the OPPO Xiaobu assistant dialogue matching dataset ([BUSTM](https://github.com/xiaobu-coai/BUSTM)). It is intended for **open-domain dialogue matching** scenarios (rather colloquial language), for example:
- 哪有好玩的 VS. 这附近有什么好玩的地方
- 定时25分钟 VS. 计时半个小时
- 我要听王琦的歌 VS. 放一首王琦的歌
Note: a [lightweight (distilled) version](https://huggingface.co/DMetaSoul/sbert-chinese-dtm-domain-v1-distill) of this model has also been open-sourced!
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embeddings with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-dtm-domain-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-dtm-domain-v1')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-dtm-domain-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
The model was evaluated on several public semantic matching datasets by computing the correlation coefficient between the embedding similarity and the gold labels:
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** |
| ------------------------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- |
| **sbert-chinese-dtm-domain-v1** | 78.36% | 74.46% | 32.18% | 75.95% | 44.01% | 14.50% | 66.85% |
## Citing & Authors
E-mail: [email protected] |
Supreeth/BioBERT | a25da3b697007ec7581ef350415a7ea1ba520b4a | 2022-03-26T06:07:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Supreeth | null | Supreeth/BioBERT | 9 | null | transformers | 12,478 | Entry not found |
aytugkaya/python-gpt2-large-issues-128 | 155725a00fb948132d1cf30d2a064f5d4941cfcf | 2022-03-28T01:49:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | aytugkaya | null | aytugkaya/python-gpt2-large-issues-128 | 9 | null | transformers | 12,479 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: python-gpt2-large-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# python-gpt2-large-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9843 | 1.0 | 1163 | 1.6715 |
| 1.5713 | 2.0 | 2326 | 1.4301 |
| 1.4226 | 3.0 | 3489 | 1.3808 |
| 1.332 | 4.0 | 4652 | 1.3806 |
| 1.2708 | 5.0 | 5815 | 1.2737 |
| 1.2089 | 6.0 | 6978 | 1.2354 |
| 1.167 | 7.0 | 8141 | 1.2250 |
| 1.126 | 8.0 | 9304 | 1.2262 |
| 1.0846 | 9.0 | 10467 | 1.1891 |
| 1.0647 | 10.0 | 11630 | 1.2263 |
| 1.0301 | 11.0 | 12793 | 1.1383 |
| 1.0054 | 12.0 | 13956 | 1.0922 |
| 0.9714 | 13.0 | 15119 | 1.1141 |
| 0.9713 | 14.0 | 16282 | 1.1614 |
| 0.9362 | 15.0 | 17445 | 1.0753 |
| 0.9382 | 16.0 | 18608 | 1.2286 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
jkhan447/sentiment-model-sample-offline-goemotion | 77ece19a9ae0b0e4fc181680fc3155d4e5735686 | 2022-03-28T06:50:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jkhan447 | null | jkhan447/sentiment-model-sample-offline-goemotion | 9 | null | transformers | 12,480 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-offline-goemotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-offline-goemotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0183
- Accuracy: 0.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
MU-NLPC/CzeGPT-2_summarizer | 2cb27636aed30dab497683d3bb9052e8bef72cc4 | 2022-05-17T15:49:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"cs",
"dataset:csTenTen17",
"transformers",
"license:cc-by-nc-sa-4.0"
]
| text-generation | false | MU-NLPC | null | MU-NLPC/CzeGPT-2_summarizer | 9 | null | transformers | 12,481 | ---
language: cs
license: cc-by-nc-sa-4.0
datasets:
- csTenTen17
---
# CzeGPT-2_summarizer
CzeGPT-2 summarizer is a Czech summarizer built upon the <a href="https://huggingface.co/MU-NLPC/CzeGPT-2">CzeGPT-2</a> model. The model has the same architectural dimensions as the GPT-2 small (12 layers, 12 heads, 1024 tokens on input/output, and embedding vectors with 768 dimensions) resulting in 124M trainable parameters. It was fine-tuned and evaluated on the <a href="https://aclanthology.org/L18-1551.pdf">SumeCzech</a> summarization dataset containing about 1M Czech news articles.
The model is trained to generate the summary as long as you let it (or it runs out of sequence length). This leaves a space for developers to set their own constraints.
## Tokenizer
Alongside the model, we also provide a Czech-trained tokenizer (vocab and merges) with a vocab size of 50257 that was used during both pre-training and fine-tuning. It is the byte-level BPE tokenizer as used in the original GPT-2 paper.
## Training results
The model was evaluated on the *test* and *ood-test* partitions of the SumeCzech dataset and compared to the best summarizers yet evaluated on this benchmark (the results taken from <a href="https://ufal.mff.cuni.cz/sumeczech">here</a>).
The abstract generator yields three sentences, which roughly corresponds to the 40-token average length of abstracts in SumeCzech. This summary length was also confirmed by tuning on the validation set.
We manage to reach state-of-the-art on most standard metrics.
Test set
| Model | ROUGE<sub>RAW</sub>-1 | ROUGE<sub>RAW</sub>-2 | ROUGE<sub>RAW</sub>-L |
| :---: | :------: | :-----: | :-----: |
| CzeGPT-2 | **18.0**/18.7/**17.8** | **3.5**/**3.7**/**3.5** | **12.6**/13.3/**12.5** |
| First | 13.1/17.9/14.4 | 1.9/2.8/2.1 | 8.8/12.0/9.6 |
| TextRank | 11.1/**20.8**/13.8 | 1.6/3.1/2.0 | 7.1/**13.4**/8.9 |
|Tensor2Tensor | 13.2/10.5/11.3 | 1.2/0.9/1.0 | 10.2/8.1/8.7 |
OOD test set
| Model | ROUGE<sub>RAW</sub>-1 | ROUGE<sub>RAW</sub>-2 | ROUGE<sub>RAW</sub>-L |
| :---: | :------: | :-----: | :-----: |
|CzeGPT-2 | **16.2**/18.5/**16.7** | **3.1**/**3.7**/**3.2** | **11.5**/**13.3**/**11.9** |
|First | 11.1/17.1/12.7 | 1.6/2.7/1.9 | 7.6/11.7/8.7 |
|TextRank | 9.8/**19.9**/12.5 | 1.5/3.3/2.0 | 6.6/**13.3**/8.4 |
|Tensor2Tensor | 12.5/9.4/10.3 | 0.8/0.6/0.6 | 9.8/7.5/8.1 |
The numbers in the tables denote *precision/recall/F1-score*
## Error Analysis
As we think the current standard ROUGE<sub>RAW</sub> metric is not suitable enough for the summarization task (even though it is the best we have at the time), we performed also a manual error analysis of the generated summaries using human annotators. You can find more about the methodology and results in our paper referenced at the bottom of this card.
## Running the predictions
The repository includes a simple Jupyter Notebook that can help with first steps when using the model.
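As a quick start, here is a minimal generation sketch (this is only an illustration: it assumes the checkpoint loads with the standard GPT-2 classes and that the article text is passed directly as the prompt; consult the notebook for the exact input formatting used to produce the three-sentence abstracts):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "MU-NLPC/CzeGPT-2_summarizer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

article = "Praha je hlavní město České republiky. ..."  # Czech news article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=800)
prompt_len = inputs["input_ids"].shape[1]

# The model generates for as long as you let it, so the length constraint is yours to set
summary_ids = model.generate(
    **inputs,
    max_length=prompt_len + 60,  # leave roughly 60 tokens for the summary
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0][prompt_len:], skip_special_tokens=True))
```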
## Headline generator
See also our model fine-tuned for <a href="https://huggingface.co/MU-NLPC/CzeGPT-2_headline_generator">headline generation task</a>.
## How to cite
```
@unpublished{hajek_horak2022,
  author = "Adam Hájek and Aleš Horák",
  title = "CzeGPT-2 – New Model for Czech Summarization Task",
  note = "preprint available at \url{https://openreview.net/forum?id=H43eQtxZefq}",
  month = "3",
  year = "2022",
}
``` |
okep/distilbert-base-uncased-finetuned-emotion | a2611ffde8be6486d057ef163960a369e4c773b7 | 2022-04-04T18:53:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | okep | null | okep/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,482 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245483619750937
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2269
- Accuracy: 0.9245
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.853 | 1.0 | 250 | 0.3507 | 0.8925 | 0.8883 |
| 0.2667 | 2.0 | 500 | 0.2269 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tbosse/bert-base-german-cased-finetuned-subj | dbe0faa3ecb91e3d7d27cba129a9d153b5d02acc | 2022-03-28T22:50:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj | 9 | null | transformers | 12,483 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Precision: 0.6514
- Recall: 0.0186
- F1: 0.0363
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 140 | 0.1588 | 0.6 | 0.0016 | 0.0031 | 0.9507 |
| No log | 2.0 | 280 | 0.1466 | 0.75 | 0.0039 | 0.0078 | 0.9508 |
| No log | 3.0 | 420 | 0.1424 | 0.6514 | 0.0186 | 0.0363 | 0.9511 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lijingxin/mt5-for-zh-paraphrase | efa6da418f1ff1cd3b6c9719578e62c9413818e3 | 2022-03-29T03:43:04.000Z | [
"pytorch"
]
| null | false | lijingxin | null | lijingxin/mt5-for-zh-paraphrase | 9 | null | null | 12,484 | Entry not found |
shalpin87/dialoGPT-homer-simpson | bd850cf335aca301770f1e69e7e519a72778db03 | 2022-03-30T13:06:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:1911.00536",
"transformers",
"conversational",
"license:mit"
]
| conversational | false | shalpin87 | null | shalpin87/dialoGPT-homer-simpson | 9 | null | transformers | 12,485 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## dialoGPT-homer-simpson
This model has been fine-tuned with the entire set of Homer Simpson's script lines from the T.V. show The Simpsons.
It will give some nice answers, seemingly from Homer's brain in the Simpsons universe, during single-turn conversation, letting you chat with Homer Simpson.
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogue from Reddit discussion thread.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Who are you? |
| HomerBot | Homer Simpson .|
|User | What is your favorite Restaurant ? |
| HomerBot | Moes Tavern. |
|User | Have you ever been in a band?! |
| HomerBot | no. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use Multi-Turn
#### NOTE: Multi-Turn seems to be broken; after a few exchanges, the output will mostly be exclamation marks.
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson")
model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last ouput tokens from bot
print("DialoG-PT-HomerBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
### How to use Single Turn
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson")
model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson")
questions = [
"What is your name?",
"Who are you?",
"Where do you work?",
"Who really killed Mr Burns?",
"Have you ever stolen from the Kwik-E-Mart?",
"Did you kill Frank Grimes?",
"Who was the worst member of the Be Sharps?",
"Hey where did Barney go?",
"What is your favorite bar to have a beer?",
"What is the best beer in Springfield?",
"Is Bart working for the Mob?",
"I think there was an incident in sector 7 G",
"Is Ned Flanders house okay?",
"Oh my god it's Sideshow Bob",
"What is a Flaming Moe?",
"What is happening to Apu?",
"Who quit the band?"
]
# Let's chat
for step in range(len(questions)):
print("***************************************")
print("Q. {}".format(questions[step]))
new_user_input_ids = tokenizer.encode(questions[step] + tokenizer.eos_token, return_tensors='pt')
# new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
bot_input_ids = new_user_input_ids
num_return_seqs=1
chat_history_ids = model.generate(
bot_input_ids,
max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=200,
top_p=0.55,
temperature = 0.85,
num_return_sequences=num_return_seqs
)
botname = "HomerBot"
for i in range(0,num_return_seqs):
print("{}:{}: {}".format(i, botname, tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][i], skip_special_tokens=True)))
chat_history_ids = []
```
Sample Output
```
***************************************
Q. What is your name?
0:HomerBot: Homer Simpson.
***************************************
Q. Who are you?
0:HomerBot: Homer Simpson.
***************************************
Q. Where do you work?
0:HomerBot: I work at a nuclear power plant.
***************************************
Q. Who really killed Mr Burns?
0:HomerBot: I dunno. Some guy named Ot-to.
***************************************
Q. Have you ever stolen from the Kwik-E-Mart?
0:HomerBot: No.
***************************************
Q. Did you kill Frank Grimes?
0:HomerBot: No.
***************************************
Q. Who was the worst member of the Be Sharps?
0:HomerBot: Homer Simpson, sir.
***************************************
Q. Hey where did Barney go?
0:HomerBot: He's been eating that thing for a week.
***************************************
Q. What is your favorite bar to have a beer?
0:HomerBot: The Flanders' Club.
***************************************
Q. What is the best beer in Springfield?
0:HomerBot: Pepi!
***************************************
Q. Is Bart working for the Mob?
0:HomerBot: I think he's trying to tell us something.
***************************************
Q. I think there was an incident in sector 7 G
0:HomerBot: Uh, let's not jump to any conclusions. The dog didn't run away.
``` |
anwarvic/distilbert-base-uncased-for-fakenews | 6035830acc96cea12694e4a4cfeb7e381443632b | 2022-04-01T19:12:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"transformers",
"exbert",
"license:apache-2.0"
]
| text-classification | false | anwarvic | null | anwarvic/distilbert-base-uncased-for-fakenews | 9 | null | transformers | 12,486 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT (uncased) for Fake News Classification
This model is a classification model built by fine-tuning
[DistilBERT base model](https://huggingface.co/distilbert-base-uncased).
This model was trained using
[fake-and-real-news-dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset)
for five epochs.
> **NOTE:**
This model is just a POC (proof-of-concept) for a fellowship I was applying for.
## Intended uses & limitations
Note that this model is primarily aimed at classifying an article as either
"Fake" or "Real".
### How to use
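A minimal local inference sketch (it assumes the checkpoint is a standard DistilBERT sequence-classification model; map the two output indices to "Fake"/"Real" via `model.config.id2label` rather than trusting the order shown here):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "anwarvic/distilbert-base-uncased-for-fakenews"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative article text
article = "Scientists confirm that drinking coffee makes you immortal."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Map class indices to "Fake"/"Real" using model.config.id2label
print(probs)
```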
For the full training and evaluation pipeline, check this [notebook](https://www.kaggle.com/code/mohamedanwarvic/fakenewsclassifier-fatima-fellowship) on Kaggle. |
princeton-nlp/CoFi-SST2-s60 | 706ed580f4443b40ee37c439af8fd5b4ff6e2ce0 | 2022-05-01T01:19:08.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
]
| text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-SST2-s60 | 9 | null | transformers | 12,487 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset SST-2. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
royam0820/distilbert-base-uncased-finetuned-emotion | e5249f978b261982ed4bb00a122a8ce801b63fac | 2022-03-30T15:24:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | royam0820 | null | royam0820/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,488 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217461464484151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2320
- Accuracy: 0.9215
- F1: 0.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8452 | 1.0 | 250 | 0.3418 | 0.897 | 0.8933 |
| 0.2596 | 2.0 | 500 | 0.2320 | 0.9215 | 0.9217 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
novarac23/distilbert-base-uncased-finetuned-emotion | 79a7449cdd5c534483970f2537bc426fa180ad7f | 2022-03-31T19:39:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | novarac23 | null | novarac23/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,489 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251919899321654
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2234
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8213 | 1.0 | 250 | 0.3210 | 0.9025 | 0.8989 |
| 0.2463 | 2.0 | 500 | 0.2234 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
deepakvk/xlnet-base-cased-squad2 | ada2c2aecded6ded16e60c6a6ba76f688d9c36ef | 2022-04-02T13:55:51.000Z | [
"pytorch",
"xlnet",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | deepakvk | null | deepakvk/xlnet-base-cased-squad2 | 9 | null | transformers | 12,490 | Entry not found |
maxhilsdorf/distilbert-base-uncased-finetuned-emotion | bd69bd2837ef9c3210fe6ae929ecd15d4f948d5e | 2022-04-03T12:08:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | maxhilsdorf | null | maxhilsdorf/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,491 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2991
- eval_accuracy: 0.91
- eval_f1: 0.9083
- eval_runtime: 3.258
- eval_samples_per_second: 613.873
- eval_steps_per_second: 9.822
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.14.1
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.10.3
|
tbosse/bert-base-german-cased-finetuned-subj_v2_v1 | 451038c939cad113c5223f3c0c3b095d42cb6ca1 | 2022-04-03T19:15:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v2_v1 | 9 | null | transformers | 12,492 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v2_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v2_v1
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1587
- Precision: 0.2222
- Recall: 0.0107
- F1: 0.0204
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1569 | 0.6667 | 0.0053 | 0.0106 | 0.9522 |
| No log | 2.0 | 272 | 0.1562 | 0.1667 | 0.0053 | 0.0103 | 0.9513 |
| No log | 3.0 | 408 | 0.1587 | 0.2222 | 0.0107 | 0.0204 | 0.9511 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
horychtom/czech_media_bias_classifier | d61680e6a1d8f522b7ff5e745ec8f52cb59d7793 | 2022-04-28T13:51:18.000Z | [
"pytorch",
"bert",
"text-classification",
"cs",
"transformers",
"Czech"
]
| text-classification | false | horychtom | null | horychtom/czech_media_bias_classifier | 9 | null | transformers | 12,493 | ---
inference: false
language: "cs"
tags:
- Czech
---
## Czech Media Bias Classifier
A FERNET-C5 model fine-tuned to perform a binary classification task for Czech media bias detection.
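Since the card disables the hosted inference widget (`inference: false`), here is a minimal local sketch. It assumes the checkpoint is a standard BERT sequence-classification model; the class-to-label mapping is not documented here, so check `model.config.id2label` before interpreting the output.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "horychtom/czech_media_bias_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative Czech sentence; any short text works
sentence = "Vláda opět selhala a její kroky jsou naprostá katastrofa."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Interpret the two classes according to model.config.id2label
print(probs)
``` |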
gagan3012/fake-news-fatima-fellowship | 62c1bee09458df221fab6c7f3d51fc776246276c | 2022-04-05T04:04:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gagan3012 | null | gagan3012/fake-news-fatima-fellowship | 9 | null | transformers | 12,494 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fake-news-fatima-fellowship
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake-news-fatima-fellowship
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.008 | 1.0 | 2514 | 0.0011 | 0.9996 | 0.9996 |
| 0.0004 | 2.0 | 5028 | 0.0000 | 1.0 | 1.0 |
| 0.0003 | 3.0 | 7542 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
tbosse/bert-base-german-cased-finetuned-subj_v3 | f1984f59006ced1d302f36781410167e6e1c34f1 | 2022-04-05T15:03:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v3 | 9 | null | transformers | 12,495 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v3
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1790
- Precision: 0.1875
- Recall: 0.0079
- F1: 0.0152
- Accuracy: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1721 | 0.0 | 0.0 | 0.0 | 0.9488 |
| No log | 2.0 | 272 | 0.1731 | 0.0 | 0.0 | 0.0 | 0.9482 |
| No log | 3.0 | 408 | 0.1790 | 0.1875 | 0.0079 | 0.0152 | 0.9472 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Stremie/xlm-roberta-base-clickbait-keywords | a017b8efdbde0e619096cee39aff7b06031a8ad4 | 2022-04-18T12:51:55.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | Stremie | null | Stremie/xlm-roberta-base-clickbait-keywords | 9 | null | transformers | 12,496 | This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText' + '[SEP]' + 'targetKeywords'. Achieved ~0.7 F1-score on test data. |
efederici/it5-small-lfqa | 1d0171d8833873ce0b9e71f8f2329b1bfd849ec4 | 2022-04-09T16:20:02.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"it",
"dataset:custom",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | efederici | null | efederici/it5-small-lfqa | 9 | null | transformers | 12,497 | ---
language:
- it
datasets:
- custom
---
# it5-small-lfqa
It is a (test) small T5 ([IT5](https://huggingface.co/gsarti/it5-small)) model trained on an LFQA (long-form question answering) dataset.
<p align="center">
<img src="https://www.arthipo.com/image/cache/catalog/artists-painters/y/yayoi-kusama/yoiku378-Yayoi-Kusama-A-Circus-Rider's-Dream-837x1000.jpg" width="400"> </br>
Yayoi Kusama, A circus Rider's Dream, 1955
</p>
## Training Data
This model was trained on an LFQA dataset. The model provides long-form answers to open-domain questions (experimental, for now). Make sure to load the Flax weights with `from_flax=True`.
## Usage and Performance
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("efederici/it5-small-lfqa", from_flax=True)
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/it5-small-lfqa", from_flax=True)
query = "con chi si era messo in contatto elon musk?"
# concatenated texts/document text
doc = """
La notizia dell’acquisizione da parte di Elon Musk del 9,2 per cento delle azioni di Twitter e del suo successivo ingresso nel consiglio di amministrazione della società hanno attirato grandi attenzioni, non solo da parte degli analisti finanziari, ma anche di chi si occupa di social media e del modo in cui viene impiegata la piattaforma da centinaia di milioni di persone in tutto il mondo. Musk, che ha un grande seguito su Twitter, in passato aveva più volte criticato il social network, accusandolo di non tutelare a sufficienza le libertà di espressione, anche in casi limite come l’assalto al Congresso degli Stati Uniti del 2021.
Alcune settimane fa, Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021, e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere di soluzioni per migliorarla. Secondo fonti del New York Times, dopo i primi contatti, Agrawal aveva proposto a Musk di avere un ruolo più attivo oltre a quello di azionista, offrendogli la possibilità di entrare nel consiglio di amministrazione.
"""
query_and_docs = f"Domanda: {query} Contesto: {doc}"
model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt")
output = model.generate(input_ids=model_input["input_ids"],
attention_mask=model_input["attention_mask"],
min_length=10,
max_length=256,
do_sample=False,
early_stopping=True,
num_beams=8,
temperature=1.0,
top_k=None,
top_p=None,
no_repeat_ngram_size=3,
num_return_sequences=1)
tokenizer.batch_decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
The model will predict an answer. |
swayam01/hindi-clsril-100 | 2205f158613b9eedcff1d740e24815a321aefd23 | 2022-05-19T07:45:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:cc",
"model-index"
]
| automatic-speech-recognition | false | swayam01 | null | swayam01/hindi-clsril-100 | 9 | 1 | transformers | 12,498 | ---
language: hi
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: cc
model-index:
- name: Wav2Vec2 Hindi Model by Swayam Mittal
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hi
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 24.17
---
# hindi-clsril-100
Fine-tuned [Harveenchadha/wav2vec2-pretrained-clsril-23-10k](https://huggingface.co/Harveenchadha/wav2vec2-pretrained-clsril-23-10k) on Hindi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, as well as the [openSLR](http://www.openslr.org/103/) Hindi dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Evaluation
The model can be used directly (with or without a language model) as follows:
```python
#!pip install datasets==1.4.1
#!pip install transformers==4.4.0
#!pip install torchaudio
#!pip install jiwer
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("swayam01/hindi-clsril-100")
model = Wav2Vec2ForCTC.from_pretrained("swayam01/hindi-clsril-100").to("cuda")
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\�\।\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor_with_lm(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    # decode with the language-model-boosted decoder; logits must be on CPU as a numpy array
    batch["pred_strings"] = processor_with_lm.batch_decode(logits.cpu().numpy()).text
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.17 %
## Training
The Common Voice hi `train` and `validation` splits were used for training, together with the openSLR hi `train`, `validation` and `test` splits.
The script used for training can be found here [colab](https://colab.research.google.com/drive/1YL_csb3LRjqWybeyvQhZ-Hem2dtpvq_x?usp=sharing) |
tbosse/bert-base-german-cased-finetuned-subj_v4 | 30ed58de526feed10daf982f9b10e308b6d120f9 | 2022-04-07T17:54:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v4 | 9 | null | transformers | 12,499 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v4
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3626
- Precision: 0.6308
- Recall: 0.4489
- F1: 0.5245
- Accuracy: 0.8579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 32 | 0.3626 | 0.6308 | 0.4489 | 0.5245 | 0.8579 |
| No log | 2.0 | 64 | 0.3626 | 0.6308 | 0.4489 | 0.5245 | 0.8579 |
| No log | 3.0 | 96 | 0.3626 | 0.6308 | 0.4489 | 0.5245 | 0.8579 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|