modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M) | likes (float64, 0-712) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lgodwangl/sent_conservative | 64dd1a8b13262bbde088de371e96e014f3ad5ad2 | 2022-05-13T01:25:39.000Z | [
"pytorch",
"perceiver",
"text-classification",
"transformers"
] | text-classification | false | lgodwangl | null | lgodwangl/sent_conservative | 3 | null | transformers | 22,400 | Entry not found |
Milanmg/bert-base-multilingual | 7fac3ff40f20c006544aa637a9dd2fe42f2b82f7 | 2022-05-13T05:06:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Milanmg | null | Milanmg/bert-base-multilingual | 3 | null | transformers | 22,401 | Entry not found |
jkhan447/language-detection-Bert-base-uncased | 5fded656790b4615a13edb78c9fd990727d6e685 | 2022-05-13T10:07:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/language-detection-Bert-base-uncased | 3 | null | transformers | 22,402 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: language-detection-Bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-Bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2231
- Accuracy: 0.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
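For orientation, the settings listed above map roughly onto the following `TrainingArguments` construction. This is a hedged sketch, not code taken from the original training run: the `output_dir` is a placeholder, and the Adam betas/epsilon and linear schedule shown above are the `Trainer` defaults.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
# output_dir is a placeholder; optimizer and scheduler values are the
# Trainer defaults (Adam betas=(0.9, 0.999), epsilon=1e-08, linear schedule).
training_args = TrainingArguments(
    output_dir="language-detection-Bert-base-uncased",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```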
### Training results
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ceggian/sbert_pt_reddit_softmax_128 | 6b6df9cb98acda67e448cef24c0a92c775337a22 | 2022-05-13T04:46:30.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_softmax_128 | 3 | null | sentence-transformers | 22,403 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
PSW/cnndm_0.1percent_minsimdel_seed1 | 83808fcd8979f638b8222cd6856e020a4080045f | 2022-05-15T08:29:49.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minsimdel_seed1 | 3 | null | transformers | 22,404 | Entry not found |
manirai91/mbert-imdb | 4db5877d83aca7080285401dec749b61b635988f | 2022-05-13T11:30:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | manirai91 | null | manirai91/mbert-imdb | 3 | null | transformers | 22,405 | Entry not found |
peggyhuang/gpt2-qrecc | 0fc1cfc5b1a2f1b28e4d542a17c31e2a68a9c700 | 2022-05-13T12:20:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | peggyhuang | null | peggyhuang/gpt2-qrecc | 3 | null | transformers | 22,406 | Entry not found |
PSW/cnndm_0.1percent_minmaxswap_seed1 | 728c10525be12dfdc453024f3736f2b197685e6a | 2022-05-16T04:27:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minmaxswap_seed1 | 3 | null | transformers | 22,407 | Entry not found |
peggyhuang/t5-base-qrecc | 83c13c95733a5c84a4d24ea8046cf921d420e5fe | 2022-05-13T16:23:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | peggyhuang | null | peggyhuang/t5-base-qrecc | 3 | null | transformers | 22,408 | Entry not found |
Dizzykong/gpt2-example | 8a8b56dc3ea9663cbdd5c352e416a71d8ccc0846 | 2022-05-13T18:21:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-example | 3 | null | transformers | 22,409 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-example
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ali-issa/1-wav2vec2-arabic-gpu-colab | 293bc8cbea13414c04848353c57e6205d9d8775e | 2022-05-14T04:42:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali-issa | null | ali-issa/1-wav2vec2-arabic-gpu-colab | 3 | null | transformers | 22,410 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabic-gpu-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-arabic-gpu-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9957
- Wer: 0.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
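As a minimal illustration of intended use while the card is incomplete, the checkpoint can be queried through the standard `transformers` ASR pipeline. This is a sketch under assumptions: the audio file path is a placeholder and 16 kHz mono input plus a working ffmpeg install are assumed.

```python
from transformers import pipeline

# Hypothetical usage sketch; "arabic_sample.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="ali-issa/1-wav2vec2-arabic-gpu-colab",
)
print(asr("arabic_sample.wav")["text"])
```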
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 10.9081 | 0.35 | 100 | 5.2386 | 1.0 |
| 3.2421 | 0.71 | 200 | 3.0640 | 1.0 |
| 2.9774 | 1.06 | 300 | 2.8992 | 1.0 |
| 2.8996 | 1.41 | 400 | 2.8520 | 1.0 |
| 2.4167 | 1.77 | 500 | 1.8349 | 1.0 |
| 1.312 | 2.12 | 600 | 1.1278 | 0.8972 |
| 1.0832 | 2.47 | 700 | 0.9189 | 0.7763 |
| 0.961 | 2.83 | 800 | 0.8498 | 0.7810 |
| 0.7067 | 3.18 | 900 | 0.7832 | 0.6911 |
| 0.6727 | 3.53 | 1000 | 0.7686 | 0.6817 |
| 0.6869 | 3.89 | 1100 | 0.7178 | 0.6606 |
| 0.5074 | 4.24 | 1200 | 0.7431 | 0.6371 |
| 0.5298 | 4.59 | 1300 | 0.7370 | 0.6477 |
| 0.522 | 4.95 | 1400 | 0.6710 | 0.6201 |
| 0.4282 | 5.3 | 1500 | 0.7460 | 0.6336 |
| 0.4448 | 5.65 | 1600 | 0.7355 | 0.5890 |
| 0.4706 | 6.01 | 1700 | 0.7612 | 0.6254 |
| 0.3831 | 6.36 | 1800 | 0.7308 | 0.6130 |
| 0.3997 | 6.71 | 1900 | 0.7083 | 0.5907 |
| 0.3968 | 7.07 | 2000 | 0.7919 | 0.5872 |
| 0.3613 | 7.42 | 2100 | 0.7489 | 0.5942 |
| 0.3746 | 7.77 | 2200 | 0.7261 | 0.5778 |
| 0.3557 | 8.13 | 2300 | 0.7527 | 0.5749 |
| 0.3539 | 8.48 | 2400 | 0.7762 | 0.5696 |
| 0.3771 | 8.83 | 2500 | 0.7640 | 0.5766 |
| 0.3148 | 9.19 | 2600 | 0.7394 | 0.5719 |
| 0.3617 | 9.54 | 2700 | 0.8017 | 0.5713 |
| 0.3852 | 9.89 | 2800 | 0.7286 | 0.5766 |
| 0.3065 | 10.25 | 2900 | 0.7963 | 0.6042 |
| 0.3454 | 10.6 | 3000 | 0.7685 | 0.5637 |
| 0.3704 | 10.95 | 3100 | 0.8003 | 0.5584 |
| 0.3243 | 11.31 | 3200 | 0.7872 | 0.5567 |
| 0.3438 | 11.66 | 3300 | 0.8140 | 0.5619 |
| 0.3227 | 12.01 | 3400 | 0.8192 | 0.5890 |
| 0.2794 | 12.37 | 3500 | 0.8573 | 0.5866 |
| 0.2941 | 12.72 | 3600 | 0.8054 | 0.5543 |
| 0.2828 | 13.07 | 3700 | 0.8168 | 0.5584 |
| 0.2638 | 13.43 | 3800 | 0.7975 | 0.5649 |
| 0.2503 | 13.78 | 3900 | 0.8714 | 0.5432 |
| 0.2357 | 14.13 | 4000 | 0.8058 | 0.5520 |
| 0.2332 | 14.49 | 4100 | 0.8636 | 0.5637 |
| 0.2423 | 14.84 | 4200 | 0.8774 | 0.5666 |
| 0.2128 | 15.19 | 4300 | 0.8882 | 0.5531 |
| 0.2151 | 15.55 | 4400 | 0.8291 | 0.5238 |
| 0.2152 | 15.9 | 4500 | 0.8529 | 0.5631 |
| 0.2039 | 16.25 | 4600 | 0.7924 | 0.5502 |
| 0.2052 | 16.61 | 4700 | 0.8515 | 0.5625 |
| 0.2054 | 16.96 | 4800 | 0.8428 | 0.5619 |
| 0.1872 | 17.31 | 4900 | 0.8507 | 0.5367 |
| 0.1795 | 17.67 | 5000 | 0.8774 | 0.5449 |
| 0.1939 | 18.02 | 5100 | 0.8555 | 0.5432 |
| 0.1667 | 18.37 | 5200 | 0.9200 | 0.5555 |
| 0.1894 | 18.73 | 5300 | 0.8407 | 0.5508 |
| 0.1773 | 19.08 | 5400 | 0.8522 | 0.5285 |
| 0.1671 | 19.43 | 5500 | 0.8925 | 0.5379 |
| 0.1651 | 19.79 | 5600 | 0.8111 | 0.5203 |
| 0.1647 | 20.14 | 5700 | 0.8529 | 0.5179 |
| 0.1588 | 20.49 | 5800 | 0.8181 | 0.5267 |
| 0.1626 | 20.85 | 5900 | 0.8150 | 0.5302 |
| 0.1385 | 21.2 | 6000 | 0.8691 | 0.5461 |
| 0.1483 | 21.55 | 6100 | 0.9188 | 0.5420 |
| 0.1572 | 21.91 | 6200 | 0.9482 | 0.5344 |
| 0.133 | 22.26 | 6300 | 0.9386 | 0.5443 |
| 0.1448 | 22.61 | 6400 | 0.9549 | 0.5314 |
| 0.1443 | 22.97 | 6500 | 0.8743 | 0.5332 |
| 0.14 | 23.32 | 6600 | 0.8278 | 0.5203 |
| 0.1476 | 23.67 | 6700 | 0.8949 | 0.5244 |
| 0.1597 | 24.03 | 6800 | 0.8842 | 0.5355 |
| 0.1402 | 24.38 | 6900 | 0.8334 | 0.5097 |
| 0.1459 | 24.73 | 7000 | 0.8227 | 0.5144 |
| 0.1268 | 25.09 | 7100 | 0.8873 | 0.5173 |
| 0.1294 | 25.44 | 7200 | 0.9022 | 0.5208 |
| 0.1238 | 25.79 | 7300 | 0.8525 | 0.5291 |
| 0.1213 | 26.15 | 7400 | 0.8545 | 0.5097 |
| 0.1298 | 26.5 | 7500 | 0.8704 | 0.5320 |
| 0.117 | 26.85 | 7600 | 0.8690 | 0.5021 |
| 0.1087 | 27.21 | 7700 | 0.8968 | 0.5203 |
| 0.1239 | 27.56 | 7800 | 0.8644 | 0.5244 |
| 0.1125 | 27.91 | 7900 | 0.9177 | 0.5238 |
| 0.1089 | 28.27 | 8000 | 0.9019 | 0.4997 |
| 0.1086 | 28.62 | 8100 | 0.8404 | 0.4956 |
| 0.1214 | 28.97 | 8200 | 0.9274 | 0.5026 |
| 0.1066 | 29.33 | 8300 | 0.9177 | 0.5079 |
| 0.1086 | 29.68 | 8400 | 0.9175 | 0.5191 |
| 0.0963 | 30.04 | 8500 | 0.9508 | 0.5009 |
| 0.1087 | 30.39 | 8600 | 0.9500 | 0.5344 |
| 0.1045 | 30.74 | 8700 | 0.9291 | 0.5244 |
| 0.1048 | 31.1 | 8800 | 0.9758 | 0.5250 |
| 0.1017 | 31.45 | 8900 | 0.9174 | 0.5150 |
| 0.108 | 31.8 | 9000 | 0.9436 | 0.5220 |
| 0.101 | 32.16 | 9100 | 0.8894 | 0.5126 |
| 0.0896 | 32.51 | 9200 | 0.9647 | 0.5126 |
| 0.0981 | 32.86 | 9300 | 0.9165 | 0.5179 |
| 0.0827 | 33.22 | 9400 | 0.8735 | 0.4932 |
| 0.0993 | 33.57 | 9500 | 0.9213 | 0.4909 |
| 0.0963 | 33.92 | 9600 | 0.8988 | 0.4915 |
| 0.0796 | 34.28 | 9700 | 0.9873 | 0.5150 |
| 0.0887 | 34.63 | 9800 | 0.9177 | 0.5120 |
| 0.0951 | 34.98 | 9900 | 0.9614 | 0.5015 |
| 0.0915 | 35.34 | 10000 | 0.9607 | 0.4962 |
| 0.0813 | 35.69 | 10100 | 0.9585 | 0.5115 |
| 0.086 | 36.04 | 10200 | 0.9877 | 0.4891 |
| 0.0773 | 36.4 | 10300 | 0.9349 | 0.4915 |
| 0.0755 | 36.75 | 10400 | 0.9216 | 0.4833 |
| 0.0776 | 37.1 | 10500 | 0.9947 | 0.4827 |
| 0.077 | 37.46 | 10600 | 0.9909 | 0.4868 |
| 0.0798 | 37.81 | 10700 | 0.9571 | 0.4938 |
| 0.0667 | 38.16 | 10800 | 1.0228 | 0.4927 |
| 0.0797 | 38.52 | 10900 | 1.0108 | 0.4909 |
| 0.0758 | 38.87 | 11000 | 0.9805 | 0.4950 |
| 0.0672 | 39.22 | 11100 | 1.0143 | 0.4792 |
| 0.0787 | 39.58 | 11200 | 0.9541 | 0.4762 |
| 0.076 | 39.93 | 11300 | 0.8923 | 0.4792 |
| 0.0781 | 40.28 | 11400 | 0.9096 | 0.4780 |
| 0.0631 | 40.64 | 11500 | 1.0085 | 0.4786 |
| 0.0659 | 40.99 | 11600 | 0.9783 | 0.4786 |
| 0.0697 | 41.34 | 11700 | 0.9591 | 0.4833 |
| 0.0569 | 41.7 | 11800 | 0.9853 | 0.4950 |
| 0.0616 | 42.05 | 11900 | 0.9841 | 0.4792 |
| 0.06 | 42.4 | 12000 | 0.9664 | 0.4733 |
| 0.0605 | 42.76 | 12100 | 0.9817 | 0.4756 |
| 0.0639 | 43.11 | 12200 | 0.9721 | 0.4715 |
| 0.0552 | 43.46 | 12300 | 1.0123 | 0.4739 |
| 0.0634 | 43.82 | 12400 | 0.9619 | 0.4750 |
| 0.0583 | 44.17 | 12500 | 0.9861 | 0.4680 |
| 0.0509 | 44.52 | 12600 | 0.9871 | 0.4674 |
| 0.046 | 44.88 | 12700 | 1.0072 | 0.4686 |
| 0.0516 | 45.23 | 12800 | 0.9809 | 0.4698 |
| 0.0518 | 45.58 | 12900 | 0.9685 | 0.4580 |
| 0.0542 | 45.94 | 13000 | 0.9710 | 0.4750 |
| 0.0487 | 46.29 | 13100 | 0.9816 | 0.4733 |
| 0.0422 | 46.64 | 13200 | 0.9670 | 0.4645 |
| 0.0475 | 47.0 | 13300 | 0.9802 | 0.4686 |
| 0.0452 | 47.35 | 13400 | 0.9812 | 0.4592 |
| 0.0445 | 47.7 | 13500 | 0.9928 | 0.4633 |
| 0.0463 | 48.06 | 13600 | 0.9883 | 0.4627 |
| 0.0411 | 48.41 | 13700 | 0.9941 | 0.4668 |
| 0.05 | 48.76 | 13800 | 0.9964 | 0.4680 |
| 0.0503 | 49.12 | 13900 | 0.9932 | 0.4698 |
| 0.0385 | 49.47 | 14000 | 0.9980 | 0.4645 |
| 0.0389 | 49.82 | 14100 | 0.9957 | 0.4651 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jtang9001/skynet_distilbert_1 | f1bb780df48c8b06d8ba57766f6ba3db9eadd0f4 | 2022-05-15T01:18:26.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | jtang9001 | null | jtang9001/skynet_distilbert_1 | 3 | null | transformers | 22,411 | Entry not found |
priansh/maeve-12-6-xsum | 91ff0beaa72a5c8af5198dfefb914bd458c24b96 | 2022-05-16T13:55:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:xsum",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | text2text-generation | false | priansh | null | priansh/maeve-12-6-xsum | 3 | null | transformers | 22,412 | ---
language:
- en
tags:
- text2text-generation
- pytorch
license: "gpl-3.0"
datasets:
- xsum
widget:
- text: "President Biden met with Russia's Putin over the weekend to discuss a ceasefire in Ukraine."
example_title: "Ukrainian Ceasefire"
- text: "Acme Ventures recently led a seed round to provide over $2MM in funding to Aiko Mail, an AI startup tackling email."
example_title: "VC Investment"
- text: "In a shocking move, Florida has decided to formally secede from the United States, opting to sink into the Atlantic Ocean."
example_title: "Florida secedes"
---
# Maeve - XSUM
Maeve is a language model that is similar to BART in structure but trained specially using a CAT (Conditionally Adversarial Transformer).
This allows the model to learn to create long-form text from short entries with high degrees of control and coherence that are impossible to achieve with traditional transformers.
This specific model has been trained on the XSUM dataset, and can invert summaries into full-length news articles. Feel free to try examples on the right!
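For example, a hedged usage sketch with the standard `transformers` text2text-generation pipeline; the generation settings below are illustrative assumptions, not documented values, and the input reuses one of the widget summaries above.

```python
from transformers import pipeline

# Sketch only: expands one of the widget summaries above into a longer article.
generator = pipeline("text2text-generation", model="priansh/maeve-12-6-xsum")
summary = ("Acme Ventures recently led a seed round to provide over $2MM in "
           "funding to Aiko Mail, an AI startup tackling email.")
article = generator(summary, max_length=512, num_beams=4)[0]["generated_text"]
print(article)
```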
|
DanDrai/distilbert-base-uncased-finetuned-imdb | 83ead6de8473d8c31f2ad03ade37c7388e86d2c6 | 2022-05-15T09:24:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DanDrai | null | DanDrai/distilbert-base-uncased-finetuned-imdb | 3 | null | transformers | 22,413 | Entry not found |
aliosm/sha3bor-poetry-diacritizer-canine-c | 26e603bb35156e0b0f274573e350a059343f9ff9 | 2022-05-28T09:41:39.000Z | [
"pytorch",
"canine",
"token-classification",
"ar",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | aliosm | null | aliosm/sha3bor-poetry-diacritizer-canine-c | 3 | null | transformers | 22,414 | ---
language: ar
license: mit
widget:
- text: "ุฅู ุงูุนููู ุงูุชู ูู ุทุฑููุง ุญูุฑ [ุดุทุฑ] ูุชูููุง ุซู
ูู
ูุญููู ูุชูุงูุง"
- text: "ุฅุฐุง ู
ุง ูุนูุช ุงูุฎูุฑ ุถูุนู ุดุฑูู
[ุดุทุฑ] ููู ุฅูุงุก ุจุงูุฐู ููู ููุถุญ"
- text: "ูุงุญุฑ ููุจุงู ู
ู
ู ููุจู ุดุจู
[ุดุทุฑ] ูู
ู ุจุฌุณู
ู ูุญุงูู ุนูุฏู ุณูู
"
---
|
aliosm/sha3bor-general-diacritizer-canine-s | 22c4562078e558849a8ac8920e6f7ae77ba9662e | 2022-05-28T09:41:34.000Z | [
"pytorch",
"canine",
"token-classification",
"ar",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | aliosm | null | aliosm/sha3bor-general-diacritizer-canine-s | 3 | null | transformers | 22,415 | ---
language: ar
license: mit
widget:
- text: "ุชูููุช ูู ุฑุฒูู ุนูู ุงููู ุฎุงููู ูุฃูููุช ุฃู ุงููู ูุง ุดู ุฑุงุฒูู."
- text: "ุฃู ุดุฎุต ูุชููู ุนู ุงูุชุนูู
ูู ุนุฌูุฒุ ุณูุงุก ูุงู ูู ุงูุนุดุฑูู ุฃู ุงูุซู
ุงููู."
- text: "ุงูุญูุงุฉ ุฑูุงูุฉ ุฌู
ููุฉ ุนููู ูุฑุงุกุชูุง ุญุชู ุงูููุงูุฉุ ูุง ุชุชููู ุฃุจุฏุง ุนูุฏ ุณุทุฑ ุญุฒูู ูุฏ ุชููู ุงูููุงูุฉ ุฌู
ููุฉ."
---
|
PSW/cnndm_0.1percent_minsimins_seed42 | 4d07118bb870772b8ab36e73595fed434a4f27dc | 2022-05-15T20:46:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minsimins_seed42 | 3 | null | transformers | 22,416 | Entry not found |
khalidalt/all-mpnet-base-v2-tasky-classification | 4ab309a6cbcc877aa9f4f65af04c9bf1db8b4f65 | 2022-05-15T22:16:03.000Z | [
"pytorch",
"mpnet",
"text-classification",
"transformers"
] | text-classification | false | khalidalt | null | khalidalt/all-mpnet-base-v2-tasky-classification | 3 | null | transformers | 22,417 | ---
widget:
- text: "Satellites chart unlit territory and poverty hotspots."
---
# all-mpnet-base-v2-tasky-classification |
CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_42 | 0b070d87aa674b04b6a9a428e2a8fce83d9eea48 | 2022-05-15T23:06:54.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_42 | 3 | null | transformers | 22,418 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_88 | 4b9281e9a6436736d02bee16cd7c9932623c56fb | 2022-05-16T00:12:10.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_88 | 3 | null | transformers | 22,419 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_88 | c1a9c9111b5fdac7328e0c9c8c087c63658bbaf7 | 2022-05-16T00:21:26.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_88 | 3 | null | transformers | 22,420 | Entry not found |
CEBaB/t5-base.CEBaB.absa.inclusive.seed_77 | ece39d4a8824409a5612f8011040314f576530db | 2022-05-16T01:42:51.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.absa.inclusive.seed_77 | 3 | null | transformers | 22,421 | Entry not found |
PSW/cnndm_0.1percent_max2swap_seed27 | af5ca15c357e5c6921dd1e43ab1e4cc75aff1791 | 2022-05-16T12:17:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_max2swap_seed27 | 3 | null | transformers | 22,422 | Entry not found |
huawei-noah/AutoTinyBERT-S2 | c842a7d061a02223e8207877bd53a82654cfb84a | 2022-05-16T14:52:36.000Z | [
"pytorch",
"transformers",
"license:other"
] | null | false | huawei-noah | null | huawei-noah/AutoTinyBERT-S2 | 3 | null | transformers | 22,423 | ---
license: other
---
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters (e.g., the hidden dimension is a quarter of the intermediate dimension in feed-forward sub-networks) in BERT. In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base).
AutoTinyBERT provides a model zoo that can meet different latency requirements. |
oeg/roberta-finetuned-CPV_Spanish | 2fa4faa439364ff2c06dfde576047ba2d993fb04 | 2022-05-17T09:15:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | oeg | null | oeg/roberta-finetuned-CPV_Spanish | 3 | null | transformers | 22,424 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-finetuned-CPV_Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-CPV_Spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on a dataset derived from Spanish Public Procurement documents from 2019. The whole fine-tuning process is available in the following [Kaggle notebook](https://www.kaggle.com/code/marianavasloro/fine-tuned-roberta-for-spanish-cpv-codes).
It achieves the following results on the evaluation set:
- Loss: 0.0465
- F1: 0.7918
- Roc Auc: 0.8860
- Accuracy: 0.7376
- Coverage Error: 10.2744
- Label Ranking Average Precision Score: 0.7973
## Intended uses & limitations
This model only predicts the first two digits of the CPV codes. The list of CPV division codes is the following (a usage sketch follows the table):
| Division | English | Spanish |
|----------|:-------:|---------|
| 03 | Agricultural, farming, fishing, forestry and related products | Productos de la agricultura, ganadería, pesca, silvicultura y productos afines |
| 09 | Petroleum products, fuel, electricity and other sources of energy | Derivados del petróleo, combustibles, electricidad y otras fuentes de energía |
| 14 | Mining, basic metals and related products | Productos de la minería, de metales de base y productos afines |
| 15 | Food, beverages, tobacco and related products | Alimentos, bebidas, tabaco y productos afines |
| 16 | Agricultural machinery | Maquinaria agrícola |
| 18 | Clothing, footwear, luggage articles and accessories | Prendas de vestir, calzado, artículos de viaje y accesorios |
| 19 | Leather and textile fabrics, plastic and rubber materials | Piel y textiles, materiales de plástico y caucho |
| 22 | Printed matter and related products | Impresos y productos relacionados |
| 24 | Chemical products | Productos químicos |
| 30 | Office and computing machinery, equipment and supplies except furniture and software packages | Máquinas, equipo y artículos de oficina y de informática, excepto mobiliario y paquetes de software |
| 31 | Electrical machinery, apparatus, equipment and consumables; lighting | Máquinas, aparatos, equipo y productos consumibles eléctricos; iluminación |
| 32 | Radio, television, communication, telecommunication and related equipment | Equipos de radio, televisión, comunicaciones y telecomunicaciones y equipos conexos |
| 33 | Medical equipments, pharmaceuticals and personal care products | Equipamiento y artículos médicos, farmacéuticos y de higiene personal |
| 34 | Transport equipment and auxiliary products to transportation | Equipos de transporte y productos auxiliares |
| 35 | Security, fire-fighting, police and defence equipment | Equipo de seguridad, extinción de incendios, policía y defensa |
| 37 | Musical instruments, sport goods, games, toys, handicraft, art materials and accessories | Instrumentos musicales, artículos deportivos, juegos, juguetes, artículos de artesanía, materiales artísticos y accesorios |
| 38 | Laboratory, optical and precision equipments (excl. glasses) | Equipo de laboratorio, óptico y de precisión (excepto gafas) |
| 39 | Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products | Mobiliario (incluido el de oficina), complementos de mobiliario, aparatos electrodomésticos (excluida la iluminación) y productos de limpieza |
| 41 | Collected and purified water | Agua recogida y depurada |
| 42 | Industrial machinery | Maquinaria industrial |
| 43 | Machinery for mining, quarrying, construction equipment | Maquinaria para la minería y la explotación de canteras y equipo de construcción |
| 44 | Construction structures and materials; auxiliary products to construction (except electric apparatus) | Estructuras y materiales de construcción; productos auxiliares para la construcción (excepto aparatos eléctricos) |
| 45 | Construction work | Trabajos de construcción |
| 48 | Software package and information systems | Paquetes de software y sistemas de información |
| 50 | Repair and maintenance services | Servicios de reparación y mantenimiento |
| 51 | Installation services (except software) | Servicios de instalación (excepto software) |
| 55 | Hotel, restaurant and retail trade services | Servicios comerciales al por menor de hostelería y restauración |
| 60 | Transport services (excl. Waste transport) | Servicios de transporte (excluido el transporte de residuos) |
| 63 | Supporting and auxiliary transport services; travel agencies services | Servicios de transporte complementarios y auxiliares; servicios de agencias de viajes |
| 64 | Postal and telecommunications services | Servicios de correos y telecomunicaciones |
| 65 | Public utilities | Servicios públicos |
| 66 | Financial and insurance services | Servicios financieros y de seguros |
| 70 | Real estate services | Servicios inmobiliarios |
| 71 | Architectural, construction, engineering and inspection services | Servicios de arquitectura, construcción, ingeniería e inspección |
| 72 | IT services: consulting, software development, Internet and support | Servicios TI: consultoría, desarrollo de software, Internet y apoyo |
| 73 | Research and development services and related consultancy services | Servicios de investigación y desarrollo y servicios de consultoría conexos |
| 75 | Administration, defence and social security services | Servicios de administración pública, defensa y servicios de seguridad social |
| 76 | Services related to the oil and gas industry | Servicios relacionados con la industria del gas y del petróleo |
| 77 | Agricultural, forestry, horticultural, aquacultural and apicultural services | Servicios agrícolas, forestales, hortícolas, acuícolas y apícolas |
| 79 | Business services: law, marketing, consulting, recruitment, printing and security | Servicios a empresas: legislación, mercadotecnia, asesoría, selección de personal, imprenta y seguridad |
| 80 | Education and training services | Servicios de enseñanza y formación |
| 85 | Health and social work services | Servicios de salud y asistencia social |
| 90 | Sewage, refuse, cleaning and environmental services | Servicios de alcantarillado, basura, limpieza y medio ambiente |
| 92 | Recreational, cultural and sporting services | Servicios de esparcimiento, culturales y deportivos |
| 98 | Other community, social and personal services | Otros servicios comunitarios, sociales o personales |
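The snippet below is a hedged usage sketch for predicting division codes with the standard `transformers` text-classification pipeline; the example tender description and the multi-label post-processing options are assumptions, not part of the original card.

```python
from transformers import pipeline

# Sketch only: returns a sigmoid score per CPV division for a sample tender text.
classifier = pipeline(
    "text-classification",
    model="oeg/roberta-finetuned-CPV_Spanish",
)
texts = ["Servicio de mantenimiento de equipos informáticos"]  # invented example
scores = classifier(texts, top_k=None, function_to_apply="sigmoid")[0]
print(sorted(scores, key=lambda s: s["score"], reverse=True)[:3])
```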
## Training and evaluation data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0354 | 1.0 | 9054 | 0.0362 | 0.7560 | 0.8375 | 0.6963 | 14.0835 | 0.7357 |
| 0.0311 | 2.0 | 18108 | 0.0331 | 0.7756 | 0.8535 | 0.7207 | 12.7880 | 0.7633 |
| 0.0235 | 3.0 | 27162 | 0.0333 | 0.7823 | 0.8705 | 0.7283 | 11.5179 | 0.7811 |
| 0.0157 | 4.0 | 36216 | 0.0348 | 0.7821 | 0.8699 | 0.7274 | 11.5836 | 0.7798 |
| 0.011 | 5.0 | 45270 | 0.0377 | 0.7799 | 0.8787 | 0.7239 | 10.9173 | 0.7841 |
| 0.008 | 6.0 | 54324 | 0.0395 | 0.7854 | 0.8787 | 0.7309 | 10.9042 | 0.7879 |
| 0.0042 | 7.0 | 63378 | 0.0421 | 0.7872 | 0.8823 | 0.7300 | 10.5687 | 0.7903 |
| 0.0025 | 8.0 | 72432 | 0.0439 | 0.7884 | 0.8867 | 0.7305 | 10.2220 | 0.7934 |
| 0.0015 | 9.0 | 81486 | 0.0456 | 0.7889 | 0.8872 | 0.7316 | 10.1781 | 0.7945 |
| 0.001 | 10.0 | 90540 | 0.0465 | 0.7918 | 0.8860 | 0.7376 | 10.2744 | 0.7973 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
### Acknowledgments
This work has been supported by NextProcurement European Action (grant agreement INEA/CEF/ICT/A2020/2373713-Action 2020-ES-IA-0255) and the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with Universidad Politécnica de Madrid in the line Support for R&D projects for Beatriz Galindo researchers, in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). We also acknowledge the participation of Jennifer Tabita for the preparation of the initial set of notebooks, and the AI4Gov master students from the first cohort for their validation of the approach. Source of the data: Ministerio de Hacienda. |
waboucay/camembert-base-finetuned-xnli_fr_3_classes | 922858c5361d8d7196ad23795445599f463eeb6b | 2022-05-17T09:03:03.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-base-finetuned-xnli_fr_3_classes | 3 | null | transformers | 22,425 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 81.6 | 81.6 |
| test | 82.2 | 82.3 |
|
ankitkupadhyay/mt5-small-finetuned-multilingual-xlsum-new | b7ef0817037421b2e95cccccf887655514d3bf8a | 2022-05-18T14:34:59.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"multilingual model",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ankitkupadhyay | null | ankitkupadhyay/mt5-small-finetuned-multilingual-xlsum-new | 3 | null | transformers | 22,426 | ---
license: apache-2.0
tags:
- multilingual model
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-multilingual-xlsum-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-multilingual-xlsum-new
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the 45 languages of the XL-Sum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7679
- Rouge1: 9.1993
- Rouge2: 2.3416
- Rougel: 7.6684
- Rougelsum: 7.7074
## Model description
More information needed
## Intended uses & limitations
More information needed
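As a minimal illustration of intended use while the card is incomplete, the model can be called through the standard `transformers` summarization pipeline. This is a sketch under assumptions: the input passage below is invented, and input may be in any of the 45 XL-Sum languages.

```python
from transformers import pipeline

# Sketch only: the input text below is an invented example.
summarizer = pipeline(
    "summarization",
    model="ankitkupadhyay/mt5-small-finetuned-multilingual-xlsum-new",
)
text = ("The city council approved a new public transport plan on Monday, "
        "promising more frequent bus services and lower fares from next year.")
print(summarizer(text, max_length=48, min_length=8)[0]["summary_text"])
```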
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.9684 | 1.0 | 1687 | 2.8902 | 8.0531 | 1.8357 | 6.7234 | 6.7401 |
| 3.62 | 2.0 | 3374 | 2.8486 | 8.4881 | 2.0178 | 7.0542 | 7.0854 |
| 3.3765 | 3.0 | 5061 | 2.7986 | 8.7796 | 2.2342 | 7.3363 | 7.3645 |
| 3.5043 | 4.0 | 6748 | 2.7677 | 9.0486 | 2.3099 | 7.5493 | 7.5685 |
| 3.338 | 5.0 | 8435 | 2.7679 | 9.1993 | 2.3416 | 7.6684 | 7.7074 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
anton-l/emformer-base-librispeech | fdee6502eec45b1d37e46018572d595157d74454 | 2022-05-20T13:45:21.000Z | [
"pytorch",
"emformer",
"transformers",
"license:apache-2.0"
] | null | false | anton-l | null | anton-l/emformer-base-librispeech | 3 | null | transformers | 22,427 | ---
license: apache-2.0
---
|
PSW/cnndm_0.5percent_minmaxswap_seed27 | 8276b955c9ae534a7ba7b699a52c7e0af8cb1001 | 2022-05-17T16:43:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_minmaxswap_seed27 | 3 | null | transformers | 22,428 | Entry not found |
PSW/cnndm_0.5percent_minmaxswap_seed42 | a55e50f6aac83ec055b06665fc7dc6a358998707 | 2022-05-17T17:55:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_minmaxswap_seed42 | 3 | null | transformers | 22,429 | Entry not found |
CEBaB/gpt2.CEBaB.absa.exclusive.seed_99 | 0461b1967fc39247d1054beed295dce1109de27c | 2022-05-17T20:49:45.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.exclusive.seed_99 | 3 | null | transformers | 22,430 | Entry not found |
Dizzykong/gpt2-medium-commands-chunked | 7aa75047f202d4f647a4b97b42a440253d76c921 | 2022-05-18T00:20:56.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-medium-commands-chunked | 3 | null | transformers | 22,431 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-medium-commands-chunked
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-commands-chunked
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
CEBaB/gpt2.CEBaB.absa.inclusive.seed_77 | 513edd1dfc376adcdda52b0ba1e397c41577841c | 2022-05-18T00:20:29.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.absa.inclusive.seed_77 | 3 | null | transformers | 22,432 | Entry not found |
olpa/distilbert-base-uncased-finetuned-emotion-olpa | dce034ead90fe249bbb9fe23dc4101ddc12b8995 | 2022-05-19T03:57:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | olpa | null | olpa/distilbert-base-uncased-finetuned-emotion-olpa | 3 | null | transformers | 22,433 | Entry not found |
EddieChen372/distilbert-base-uncased-finetuned-imdb | 9e230d2555a152cb42d507f1f0b1bcc8382478c3 | 2022-05-18T14:07:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | EddieChen372 | null | EddieChen372/distilbert-base-uncased-finetuned-imdb | 3 | null | transformers | 22,434 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117 | 1.0 | 157 | 2.4976 |
| 2.5773 | 2.0 | 314 | 2.4243 |
| 2.5263 | 3.0 | 471 | 2.4348 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huggingtweets/lightcrypto-sergeynazarov | 9e90080d2034d2a386f649a7b317da61de90eccf | 2022-05-19T03:37:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lightcrypto-sergeynazarov | 3 | null | transformers | 22,435 | ---
language: en
thumbnail: http://www.huggingtweets.com/lightcrypto-sergeynazarov/1652931465147/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/751118197126991873/eSXubsCD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1478019214212747264/LZmNClhs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sergey Nazarov & light</div>
<div style="text-align: center; font-size: 14px;">@lightcrypto-sergeynazarov</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sergey Nazarov & light.
| Data | Sergey Nazarov | light |
| --- | --- | --- |
| Tweets downloaded | 718 | 3237 |
| Retweets | 162 | 367 |
| Short tweets | 11 | 405 |
| Tweets kept | 545 | 2465 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pe3nb090/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lightcrypto-sergeynazarov's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1am840oh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1am840oh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lightcrypto-sergeynazarov')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ceggian/bart_post_trained_reddit_batch32 | 2b6a15e07795cc871dc81855a5780c869a68bdb7 | 2022-05-30T10:53:46.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ceggian | null | ceggian/bart_post_trained_reddit_batch32 | 3 | null | transformers | 22,436 | Entry not found |
emmyapi/test | 834f33e13e3f2a9b136ba84121d1ca7a32099e62 | 2022-05-19T08:52:24.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:billsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | emmyapi | null | emmyapi/test | 3 | null | transformers | 22,437 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 495 | 2.5585 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Matthijs/deeplabv3-mobilevit-x-small | de43fda9cfbb7f345d8e8f00513f672a4f7571b4 | 2022-05-19T12:57:42.000Z | [
"pytorch",
"mobilevit",
"transformers"
] | null | false | Matthijs | null | Matthijs/deeplabv3-mobilevit-x-small | 3 | null | transformers | 22,438 | Entry not found |
Matthijs/deeplabv3-mobilevit-xx-small | eb35695b737a6dc9d5f3c6ba150a717c45fb5db1 | 2022-05-19T12:59:22.000Z | [
"pytorch",
"mobilevit",
"transformers"
] | null | false | Matthijs | null | Matthijs/deeplabv3-mobilevit-xx-small | 3 | null | transformers | 22,439 | Entry not found |
jjezabek/bert-base-uncased-sst_bin | 5fd77cd8a6f41fcc3733075454c2b8d2429ec9a6 | 2022-05-19T19:47:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jjezabek | null | jjezabek/bert-base-uncased-sst_bin | 3 | null | transformers | 22,440 | Entry not found |
EMBO/bert-base-cased_NER-task | 7eae63aba7c9f941f00e6a4f50730644f50722b6 | 2022-05-20T13:16:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | EMBO | null | EMBO/bert-base-cased_NER-task | 3 | null | transformers | 22,441 | Entry not found |
umangchaudhry/bert-emotion | 1fef26b61fac3e8a33a0380de6be2effa96588e8 | 2022-05-20T16:56:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | umangchaudhry | null | umangchaudhry/bert-emotion | 3 | null | transformers | 22,442 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7081377380103309
- name: Recall
type: recall
value: 0.709386945441909
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2350
- Precision: 0.7081
- Recall: 0.7094
- Fscore: 0.7082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8442 | 1.0 | 815 | 0.8653 | 0.7642 | 0.6192 | 0.6363 |
| 0.5488 | 2.0 | 1630 | 0.9330 | 0.7116 | 0.6838 | 0.6912 |
| 0.2713 | 3.0 | 2445 | 1.2350 | 0.7081 | 0.7094 | 0.7082 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
schoenml/bert-emotion | 5de575e6607ea4975686d15505a4b2abd820841d | 2022-05-23T15:19:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | schoenml | null | schoenml/bert-emotion | 3 | null | transformers | 22,443 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7295622072449084
- name: Recall
type: recall
value: 0.7265951560457381
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1531
- Precision: 0.7296
- Recall: 0.7266
- Fscore: 0.7278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8418 | 1.0 | 815 | 0.8129 | 0.7960 | 0.6242 | 0.6420 |
| 0.5222 | 2.0 | 1630 | 0.9663 | 0.7584 | 0.7196 | 0.7324 |
| 0.2662 | 3.0 | 2445 | 1.1531 | 0.7296 | 0.7266 | 0.7278 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
miyagawaorj/distilbert-base-uncased-finetuned-clinc | 5435e747f8f8c4c773184ffd5d9cc328939b26ec | 2022-06-06T14:04:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | miyagawaorj | null | miyagawaorj/distilbert-base-uncased-finetuned-clinc | 3 | null | transformers | 22,444 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9474193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2454
- Accuracy: 0.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.496 | 1.0 | 954 | 1.8019 | 0.8306 |
| 1.0663 | 2.0 | 1908 | 0.5690 | 0.9174 |
| 0.3267 | 3.0 | 2862 | 0.3128 | 0.9406 |
| 0.1397 | 4.0 | 3816 | 0.2567 | 0.9445 |
| 0.0846 | 5.0 | 4770 | 0.2454 | 0.9474 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
ey211/mt5-base-finetuned-dimensions-polisci | 3ed3ffa2a56e9d5c70537801725b19a3c99712aa | 2022-05-20T23:55:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | ey211 | null | ey211/mt5-base-finetuned-dimensions-polisci | 3 | null | transformers | 22,445 | ---
license: apache-2.0
---
|
AnonymousSub/rule_based_roberta_hier_triplet_shuffled_epochs_1_shard_1_squad2.0 | 4f3170b88c2138125af64f0289ed61fdfc936379 | 2022-05-21T19:08:25.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_shuffled_epochs_1_shard_1_squad2.0 | 3 | null | transformers | 22,446 | Entry not found |
brever/dummy | cd16a08c084d43a9c9968aaab48b09943e4c3688 | 2022-05-21T21:32:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | brever | null | brever/dummy | 3 | null | transformers | 22,447 | # Dummy model |
eslamxm/mt5-base-finetuned-arur | 1e1758d054de97be3c3034ca7247695663452d8f | 2022-05-22T22:54:44.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"arabic",
"ar",
"ur",
"urdu",
"Abstractive Summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mt5-base-finetuned-arur | 3 | null | transformers | 22,448 | ---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- ur
- urdu
- mt5
- Abstractive Summarization
- generated_from_trainer
model-index:
- name: mt5-base-finetuned-ar-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-ar-fa
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0303
- Rouge-1: 26.73
- Rouge-2: 12.63
- Rouge-l: 23.96
- Gen Len: 18.99
- Bertscore: 71.41
## Model description
More information needed
## Intended uses & limitations
More information needed
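A rough usage sketch for generating a summary with this checkpoint (any task prefix that may have been used during fine-tuning is not documented here, so plain article text is assumed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("eslamxm/mt5-base-finetuned-arur")
model = AutoModelForSeq2SeqLM.from_pretrained("eslamxm/mt5-base-finetuned-arur")

article = "..."  # an Arabic or Urdu news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```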
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 3.7736 | 1.0 | 3287 | 3.2308 | 24.22 | 10.11 | 21.46 | 18.99 | 70.69 |
| 3.3783 | 2.0 | 6574 | 3.1283 | 25.28 | 10.9 | 22.43 | 18.99 | 71.02 |
| 3.2351 | 3.0 | 9861 | 3.0693 | 25.77 | 11.36 | 22.93 | 19.0 | 71.2 |
| 3.1363 | 4.0 | 13148 | 3.0421 | 25.88 | 11.57 | 23.08 | 18.99 | 71.22 |
| 3.0669 | 5.0 | 16435 | 3.0303 | 26.25 | 11.84 | 23.44 | 18.99 | 71.39 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ddrmaster1000/DialoGPT-medium-rick | 32684d3aa75056732bdca5368de47ce43219d03b | 2022-05-22T17:00:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ddrmaster1000 | null | ddrmaster1000/DialoGPT-medium-rick | 3 | null | transformers | 22,449 | ---
tags:
- conversational
---
# A poor discussion bot of Rick from Rick and Morty |
fujiki/t5-efficient-base_en2ja | fa98b2060e8a387239747d9fc7b82e69415a8979 | 2022-05-22T14:39:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-efficient-base_en2ja | 3 | null | transformers | 22,450 | Entry not found |
renjithks/layoutlmv3-cord-ner | cd29319ed7cbd68d59db663042e15d2fb2b8aca1 | 2022-05-22T16:53:10.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | renjithks | null | renjithks/layoutlmv3-cord-ner | 3 | null | transformers | 22,451 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-cord-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-cord-ner
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1215
- Precision: 0.9448
- Recall: 0.9520
- F1: 0.9484
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
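A sketch of the expected inference flow, assuming the standard LayoutLMv3 processor with its built-in OCR (which requires `pytesseract`); the receipt image path is hypothetical:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# The processor is taken from the base checkpoint in case this repository does not ship one.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForTokenClassification.from_pretrained("renjithks/layoutlmv3-cord-ner")

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")           # runs OCR on the image by default
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]   # one tag per input token
print(labels)
```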
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 113 | 0.1771 | 0.8485 | 0.8925 | 0.8700 | 0.9393 |
| No log | 2.0 | 226 | 0.1584 | 0.8915 | 0.9146 | 0.9029 | 0.9524 |
| No log | 3.0 | 339 | 0.1153 | 0.9160 | 0.9309 | 0.9234 | 0.9686 |
| No log | 4.0 | 452 | 0.1477 | 0.9110 | 0.9136 | 0.9123 | 0.9592 |
| 0.1562 | 5.0 | 565 | 0.0861 | 0.9363 | 0.9443 | 0.9403 | 0.9741 |
| 0.1562 | 6.0 | 678 | 0.1165 | 0.9109 | 0.9415 | 0.9259 | 0.9673 |
| 0.1562 | 7.0 | 791 | 0.1280 | 0.9278 | 0.9367 | 0.9322 | 0.9707 |
| 0.1562 | 8.0 | 904 | 0.1122 | 0.9462 | 0.9453 | 0.9458 | 0.9762 |
| 0.0224 | 9.0 | 1017 | 0.1265 | 0.9431 | 0.9539 | 0.9485 | 0.9771 |
| 0.0224 | 10.0 | 1130 | 0.1215 | 0.9448 | 0.9520 | 0.9484 | 0.9762 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/t5-small-finetuned-squad-qgen | 4d8d71e3eead33b0c7e86f59150fbfc91ed69925 | 2022-05-22T20:27:01.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-squad-qgen | 3 | null | transformers | 22,452 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- f1
model-index:
- name: t5-small-finetuned-squad-qgen
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
args: plain_text
metrics:
- name: F1
type: f1
value: 0.36430000148070146
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-squad-qgen
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3805
- Em: 0.0
- F1: 0.3643
## Model description
More information needed
## Intended uses & limitations
More information needed
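A usage sketch for question generation; the exact prompt format used during fine-tuning is not documented in this card, so the `answer: ... context: ...` convention below is only an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-squad-qgen")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-small-finetuned-squad-qgen")

text = "answer: Paris  context: Paris is the capital and largest city of France."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```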
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Em | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---:|:------:|
| 0.4449 | 1.0 | 2738 | 0.3992 | 0.0 | 0.3859 |
| 0.4206 | 2.0 | 5476 | 0.3886 | 0.0 | 0.3781 |
| 0.4092 | 3.0 | 8214 | 0.3837 | 0.0 | 0.3827 |
| 0.4055 | 4.0 | 10952 | 0.3809 | 0.0 | 0.3747 |
| 0.3997 | 5.0 | 13690 | 0.3805 | 0.0 | 0.3643 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
krotima1/mbart-at2h-c | e6dd707bd7b1517dc9f3480e7af0d6bef78b9246 | 2022-05-23T20:30:19.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"cs",
"dataset:private Czech News Center dataset news-based",
"transformers",
"abstractive summarization",
"mbart-cc25",
"Czech",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | krotima1 | null | krotima1/mbart-at2h-c | 3 | null | transformers | 22,453 | ---
language:
- cs
- cs
tags:
- abstractive summarization
- mbart-cc25
- Czech
license: apache-2.0
datasets:
- private Czech News Center dataset news-based
metrics:
- rouge
- rougeraw
---
# mBART fine-tuned model for Czech abstractive summarization (AT2H-C)
This model is a fine-tuned checkpoint of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Czech news dataset to produce Czech abstractive summaries.
## Task
The model addresses the task ``Abstract + Text to Headline`` (AT2H), which consists of generating a one- or two-sentence, headline-style summary from a Czech news text.
## Dataset
The model has been trained on the private CNC dataset provided by Czech News Center. The dataset includes 3/4M Czech news-based documents consisting of a Headline, Abstract, and Full-text sections. Truncation and padding were set to 512 tokens for the encoder and 64 for the decoder.
## Training
The model has been trained on 4x NVIDIA Tesla V100 32GB for 15 hours, 4x NVIDIA Tesla A100 40GB for 10 hours, and 1x NVIDIA Tesla A100 40GB for 20 hours. During training, the model has seen 5984K documents corresponding to roughly 9 epochs.
# Use
Assuming that you are using the provided Summarizer.ipynb file.
```python
def summ_config():
cfg = OrderedDict([
# summarization model - checkpoint from website
("model_name", "krotima1/mbart-at2h-c"),
("inference_cfg", OrderedDict([
("num_beams", 4),
("top_k", 40),
("top_p", 0.92),
("do_sample", True),
("temperature", 0.89),
("repetition_penalty", 1.2),
("no_repeat_ngram_size", None),
("early_stopping", True),
("max_length", 64),
("min_length", 10),
])),
#texts to summarize
("text",
[
"Input your Czech text",
]
),
])
return cfg
cfg = summ_config()
#load model
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])
# init summarizer
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
``` |
globuslabs/ScholarBERT-XL | f64f197b4eff4d9dc77e0753bb65300c7b4c63c0 | 2022-05-24T03:15:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"arxiv:2205.11342",
"transformers",
"science",
"multi-displinary",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | globuslabs | null | globuslabs/ScholarBERT-XL | 3 | null | transformers | 22,454 | ---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT-XL_100 Model
This is the **ScholarBERT-XL_100** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**221B tokens**).
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model has a total of 770M parameters.
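For quick experimentation, the checkpoint can be loaded as a standard fill-mask pipeline — a sketch only, and note that the 770M-parameter weights require several GB of memory:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="globuslabs/ScholarBERT-XL")
# [MASK] is the mask token for BERT-style models.
print(fill_mask("The enzyme catalyzes the hydrolysis of [MASK]."))
```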
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 36 |
| Hidden Size | 1280 |
| Attention Heads | 20 |
| Total Parameters | 770M |
# Training Dataset
The vocabulary and the model are pretrained on **100% of the PRD** scientific literature dataset.
The PRD dataset is provided by Public.Resource.Org, Inc. (โPublic Resourceโ),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2022scholarbert,
doi = {10.48550/ARXIV.2205.11342},
url = {https://arxiv.org/abs/2205.11342},
author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
title = {ScholarBERT: Bigger is Not Always Better},
publisher = {arXiv},
year = {2022}
}
``` |
globuslabs/ScholarBERT_100_WB | 148bc7163447475adef961d9c0533ab198b347f9 | 2022-05-24T03:17:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"arxiv:2205.11342",
"transformers",
"science",
"multi-displinary",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | globuslabs | null | globuslabs/ScholarBERT_100_WB | 3 | null | transformers | 22,455 | ---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT_100_WB Model
This is the **ScholarBERT_100_WB** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**221B tokens**).
Additionally, the pretraining data also includes the Wikipedia+BookCorpus, which are used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models.
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocabulary and the model are pretrained on **100% of the PRD** scientific literature dataset and the Wikipedia+BookCorpus.
The PRD dataset is provided by Public.Resource.Org, Inc. (โPublic Resourceโ),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2022scholarbert,
doi = {10.48550/ARXIV.2205.11342},
url = {https://arxiv.org/abs/2205.11342},
author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
title = {ScholarBERT: Bigger is Not Always Better},
publisher = {arXiv},
year = {2022}
}
``` |
PSW/samsum_percent10_minsimins | fa7735659fb17ff8a2adce19f745ba0f55282283 | 2022-05-23T03:03:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_percent10_minsimins | 3 | null | transformers | 22,456 | Entry not found |
prodm93/gpt2-rn-abstract-model-v1 | fbdc81ba0fef2f6b3d96d05cd51a16b5becb0d26 | 2022-05-23T05:45:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prodm93 | null | prodm93/gpt2-rn-abstract-model-v1 | 3 | null | transformers | 22,457 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.ambiance__food.2-class.exclusive.seed_42 | 80641e40ce0de04af4f3b46c74b906a3236ac88a | 2022-05-24T12:07:34.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.ambiance__food.2-class.exclusive.seed_42 | 3 | null | transformers | 22,458 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.service__food.2-class.exclusive.seed_43 | 86783dfaa7bd58100bb57aedcafe05d48d9e821d | 2022-05-24T10:03:28.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.service__food.2-class.exclusive.seed_43 | 3 | null | transformers | 22,459 | Entry not found |
CEBaB/gpt2.CEBaB.causalm.service__food.2-class.exclusive.seed_44 | 97a7517df40701a7cc7286b3149fad73b8e20cb6 | 2022-05-24T10:05:09.000Z | [
"pytorch",
"gpt2_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/gpt2.CEBaB.causalm.service__food.2-class.exclusive.seed_44 | 3 | null | transformers | 22,460 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_shuffled_sents_epochs_1_shard_1_squad2.0 | 9e95d3a09b16c7c32c4368dba7c1ff88da474f94 | 2022-05-23T19:53:48.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_shuffled_sents_epochs_1_shard_1_squad2.0 | 3 | null | transformers | 22,461 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.causalm.noise__food.5-class.exclusive.seed_45 | 4e847502f94296f1a44e03aa4d94850081889993 | 2022-05-24T10:10:00.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.noise__food.5-class.exclusive.seed_45 | 3 | null | transformers | 22,462 | Entry not found |
KoichiYasuoka/deberta-small-japanese-luw-upos | d6a07913d5ff39d68627ebf88d6635fc470f3dd5 | 2022-05-24T06:33:06.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/deberta-small-japanese-luw-upos | 3 | null | transformers | 22,463 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
---
# deberta-small-japanese-luw-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on ้็ฉบๆๅบซ texts for POS-tagging and dependency-parsing, derived from [deberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-small-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-small-japanese-luw-upos")
s="ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-small-japanese-luw-upos")
print(nlp("ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
JorenGij/DT-inventory | 74a29eeb3a0ef00d2e0f18a891d0d6544b81e229 | 2022-05-24T11:19:01.000Z | [
"pytorch",
"decision_transformer",
"transformers"
] | null | false | JorenGij | null | JorenGij/DT-inventory | 3 | null | transformers | 22,464 | Entry not found |
kimcando/test2 | 8ac400a228f3d77ccc32895e35028c611500b8fa | 2022-05-24T13:06:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | kimcando | null | kimcando/test2 | 3 | null | transformers | 22,465 | Entry not found |
Satyamatury/satya-matury-asr-task-2-hindidata | 5fd44f3c08eaaef9804421a71a9c1e66cf5ebe7e | 2022-06-17T07:55:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Satyamatury | null | Satyamatury/satya-matury-asr-task-2-hindidata | 3 | null | transformers | 22,466 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: satya-matury-asr-task-2-hindidata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# satya-matury-asr-task-2-hindidata
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0972
- Wer: 0.9942
## Model description
More information needed
## Intended uses & limitations
More information needed
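A minimal transcription sketch; the audio file name is a placeholder, and recordings should be 16 kHz mono (given the high WER reported for this checkpoint, transcripts will be rough):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Satyamatury/satya-matury-asr-task-2-hindidata",
)
print(asr("sample_hindi_clip.wav"))  # placeholder path to a Hindi recording
```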
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.6515 | 44.42 | 400 | 3.0972 | 0.9942 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
vreese2414/autotrain-test-frank-896929583 | 361ddd30ea78862b439bec51a3bf82a55af0934a | 2022-05-24T15:20:01.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:vreese2414/autotrain-data-test-frank",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | vreese2414 | null | vreese2414/autotrain-test-frank-896929583 | 3 | null | transformers | 22,467 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- vreese2414/autotrain-data-test-frank
co2_eq_emissions: 20.85550802376653
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 896929583
- CO2 Emissions (in grams): 20.85550802376653
## Validation Metrics
- Loss: 0.8998094797134399
- Accuracy: 0.717983651226158
- Macro F1: 0.6850466044284794
- Micro F1: 0.717983651226158
- Weighted F1: 0.7093970537930665
- Macro Precision: 0.692166692035814
- Micro Precision: 0.717983651226158
- Weighted Precision: 0.7181745683190863
- Macro Recall: 0.6985625924834511
- Micro Recall: 0.717983651226158
- Weighted Recall: 0.717983651226158
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vreese2414/autotrain-test-frank-896929583
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vreese2414/autotrain-test-frank-896929583", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vreese2414/autotrain-test-frank-896929583", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
anablasi/model_10k_lm | 7f42034359c3824e99a3db119ab6a0894dd0c7a4 | 2022-05-24T17:03:38.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | anablasi | null | anablasi/model_10k_lm | 3 | null | transformers | 22,468 | Entry not found |
castorini/afriteva_base | 4f358f728fce8ba7ffe0a642e19672ad96e2215f | 2022-05-24T20:20:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"om",
"am",
"rw",
"rn",
"ha",
"ig",
"pcm",
"so",
"sw",
"ti",
"yo",
"multilingual",
"T5",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/afriteva_base | 3 | null | transformers | 22,469 | ---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
- T5
---
# afriteva_base
## Model description
AfriTeVa base is a multilingual sequence to sequence model pretrained on 10 African languages
## Languages
Afaan Oromoo(orm), Amharic(amh), Gahuza(gah), Hausa(hau), Igbo(igb), Nigerian Pidgin(pcm), Somali(som), Swahili(swa), Tigrinya(tig), Yoruba(yor)
### More information on the model, dataset:
### The model
- 229M parameters encoder-decoder architecture (T5-like)
- 12 layers, 12 attention heads and 512 token sequence length
### The dataset
- Multilingual: 10 African languages listed above
- 143 Million Tokens (1GB of text data)
- Tokenizer Vocabulary Size: 70,000 tokens
## Intended uses & limitations
`afriteva_base` is a pre-trained model, primarily aimed at being fine-tuned on multilingual sequence-to-sequence tasks.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriteva_base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("castorini/afriteva_base")
>>> src_text = "ร hรนn แปฬ lรกti di ara wa bรญ?"
>>> tgt_text = "Would you like to be?"
>>> model_inputs = tokenizer(src_text, return_tensors="pt")
>>> with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
>>> model(**model_inputs, labels=labels) # forward pass
```
## Training Procedure
For information on training procedures, please refer to the AfriTeVa [paper](#) or [repository](https://github.com/castorini/afriteva)
## BibTex entry and Citation info
coming soon ...
|
castorini/afriteva_large | f3011e8d0c29644740a8bfaed5acece8f1ca8b55 | 2022-05-24T20:20:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"om",
"am",
"rw",
"rn",
"ha",
"ig",
"pcm",
"so",
"sw",
"ti",
"yo",
"multilingual",
"T5",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/afriteva_large | 3 | 1 | transformers | 22,470 | ---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
- T5
---
# afriteva_large
## Model description
AfriTeVa large is a sequence to sequence model pretrained on 10 African languages
## Languages
Afaan Oromoo(orm), Amharic(amh), Gahuza(gah), Hausa(hau), Igbo(igb), Nigerian Pidgin(pcm), Somali(som), Swahili(swa), Tigrinya(tig), Yoruba(yor)
### More information on the model, dataset:
### The model
- 745M parameters encoder-decoder architecture (T5-like)
- 12 layers, 12 attention heads and 512 token sequence length
### The dataset
- Multilingual: 10 African languages listed above
- 143 Million Tokens (1GB of text data)
- Tokenizer Vocabulary Size: 70,000 tokens
## Intended uses & limitations
`afriteva_large` is a pre-trained model, primarily aimed at being fine-tuned on multilingual sequence-to-sequence tasks.
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriteva_large")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("castorini/afriteva_large")
>>> src_text = "ร hรนn แปฬ lรกti di ara wa bรญ?"
>>> tgt_text = "Would you like to be?"
>>> model_inputs = tokenizer(src_text, return_tensors="pt")
>>> with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
>>> model(**model_inputs, labels=labels) # forward pass
```
## Training Procedure
For information on training procedures, please refer to the AfriTeVa [paper](#) or [repository](https://github.com/castorini/afriteva)
## BibTex entry and Citation info
coming soon ...
|
mehnaazasad/bert-emotion | 5435003ecd4a60a117c40cb023dd2e33e1569ad3 | 2022-05-25T03:35:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mehnaazasad | null | mehnaazasad/bert-emotion | 3 | null | transformers | 22,471 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Precision
type: precision
value: 0.9390697170516101
- name: Recall
type: recall
value: 0.9190197699656729
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2037
- Precision: 0.9391
- Recall: 0.9190
- Fscore: 0.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
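A minimal inference sketch for the fine-tuned emotion classifier:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mehnaazasad/bert-emotion",
    return_all_scores=True,  # return the score for every emotion label
)
print(classifier("I can't believe we finally won the match!"))
```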
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8561 | 1.0 | 815 | 0.7844 | 0.7575 | 0.6081 | 0.6253 |
| 0.5337 | 2.0 | 1630 | 0.9080 | 0.7567 | 0.7236 | 0.7325 |
| 0.2573 | 3.0 | 2445 | 1.1670 | 0.7262 | 0.7255 | 0.7253 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tanapatentlm/patentdeberta_base_total_512 | d3e891810f112f4fefa1cd91472cda50d8635ada | 2022-05-26T09:55:34.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tanapatentlm | null | tanapatentlm/patentdeberta_base_total_512 | 3 | null | transformers | 22,472 | Entry not found |
rickySaka/fre-med | b22e837b1637122696739ae9e3f35273266f17f4 | 2022-05-26T12:20:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rickySaka | null | rickySaka/fre-med | 3 | null | transformers | 22,473 | Entry not found |
aioxlabs/dvoice-fongbe | fe1f698871a1b200486fd79e67be6d9ff5a32c34 | 2022-05-28T08:20:03.000Z | [
"wav2vec2",
"feature-extraction",
"fon",
"dataset:commonvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | aioxlabs | null | aioxlabs/dvoice-fongbe | 3 | null | speechbrain | 22,474 | ---
language: "fon"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Fongbe (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on an [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) Fongbe dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 4.16 | 9.19 | 3.98 | 9.00 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Fongbe dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Fongbe)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="aioxlabs/dvoice-fongbe", savedir="pretrained_models/asr-wav2vec2-dvoice-fon")
asr_model.transcribe_file('./the_path_to_your_audio_file')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
# Training
To train the model from scratch, please see our GitHub tutorial [here](https://github.com/AIOXLABS/DVoice).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# About DVoice
DVoice is a community initiative that aims to provide African low-resource languages with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages including Darija (Moroccan Arabic dialect), whose dataset appears in this version, Wolof, Mandingo, Serere, Pular, Diola and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of these technologies together.
# About AIOX Labs
Based in Rabat, London and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It supports the growth of groups, the optimization of processes, and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business ready data products with a solid algorithmic base and adaptability for the specific needs of each client.
- A complementary team made up of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The research areas of the laboratory are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
}
```
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution. |
xrverse/distilbert-base-uncased-finetuned-emotion | 676f5cce1c1824ae00022f82536be017f80ae0b8 | 2022-05-26T17:48:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | xrverse | null | xrverse/distilbert-base-uncased-finetuned-emotion | 3 | null | transformers | 22,475 | Entry not found |
finiteautomata/pepe-es | d51c2714f4db04fae4fbc865153c761c9e552f5f | 2022-05-26T17:45:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | finiteautomata | null | finiteautomata/pepe-es | 3 | null | transformers | 22,476 | Entry not found |
castorini/mdpr-tied-pft-nq | fbaab0fd02572e96a2d6a65bf41046735ffa6a24 | 2022-05-26T21:15:13.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/mdpr-tied-pft-nq | 3 | null | transformers | 22,477 | Entry not found |
natdon/DialoGPT_Michael_Scott | 1ea0213c180be902afde027575c651d945ab6edd | 2022-05-27T18:53:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | natdon | null | natdon/DialoGPT_Michael_Scott | 3 | null | transformers | 22,478 | ---
tags:
- conversational
widget:
- text: "Wow this is hard"
- text: "What do you think of Toby?"
- text: "I love you"
- text: "How was prison"
- text: "What are you saying?"
- text: "My name is Toby"
- text: "You should fire Toby"
inference:
parameters:
max_length: 100
no_repeat_ngram_size: 3
do_sample: True
top_k: 100
top_p: 0.7
temperature: 0.7
---
# Michael Scott (The Office) DialoGPT Model |
febreze/distilbert-base-uncased-finetuned-emotion | 308f8123a852f0e057e9a8312325f8a93e2594e3 | 2022-05-31T21:29:29.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | febreze | null | febreze/distilbert-base-uncased-finetuned-emotion | 3 | null | transformers | 22,479 | Entry not found |
WangZeJun/simcse-tiny-chinese-wiki | 11ea1b4ca4124c93f22e53a2e417dd0cbd2b6744 | 2022-06-14T09:16:50.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | WangZeJun | null | WangZeJun/simcse-tiny-chinese-wiki | 3 | null | transformers | 22,480 | https://github.com/zejunwang1/bertorch |
WangZeJun/batchneg-tiny-chinese | aef0085e2cdf83c1ff1e2d66adcdca32601f1755 | 2022-06-14T09:16:06.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | WangZeJun | null | WangZeJun/batchneg-tiny-chinese | 3 | null | transformers | 22,481 | https://github.com/zejunwang1/bertorch |
adache/xlm-roberta-base-finetuned-panx-de | 3733302969df674eb9ca6cdb3d82d543b6e70ec1 | 2022-06-01T05:55:12.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | adache | null | adache/xlm-roberta-base-finetuned-panx-de | 3 | null | transformers | 22,482 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
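A minimal NER sketch for German text with this checkpoint; the example sentence is arbitrary:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="adache/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```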
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PontifexMaximus/Arabic2 | 2236aacb8b2b384126607616549a63f180021613 | 2022-07-09T16:55:10.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:opus_infopankki",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | PontifexMaximus | null | PontifexMaximus/Arabic2 | 3 | null | transformers | 22,483 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_infopankki
type: opus_infopankki
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 53.5086
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7636
- Bleu: 53.5086
- Gen Len: 13.5728
## Model description
More information needed
## Intended uses & limitations
More information needed
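A minimal Arabic-to-English translation sketch with this checkpoint:

```python
from transformers import pipeline

translator = pipeline("translation", model="PontifexMaximus/Arabic2")
# "Where is the nearest health centre?"
print(translator("أين يقع أقرب مركز صحي؟"))
```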
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 278 | 1.5114 | 35.2767 | 14.2084 |
| 1.6677 | 2.0 | 556 | 1.4025 | 37.5243 | 14.0245 |
| 1.6677 | 3.0 | 834 | 1.3223 | 39.4262 | 13.8101 |
| 1.4743 | 4.0 | 1112 | 1.2567 | 40.7045 | 13.8533 |
| 1.4743 | 5.0 | 1390 | 1.2001 | 41.8356 | 13.8083 |
| 1.3428 | 6.0 | 1668 | 1.1504 | 43.2448 | 13.6958 |
| 1.3428 | 7.0 | 1946 | 1.1072 | 44.177 | 13.6783 |
| 1.2595 | 8.0 | 2224 | 1.0701 | 45.17 | 13.6587 |
| 1.1829 | 9.0 | 2502 | 1.0345 | 45.9612 | 13.6706 |
| 1.1829 | 10.0 | 2780 | 1.0042 | 46.9009 | 13.6236 |
| 1.1188 | 11.0 | 3058 | 0.9760 | 47.7478 | 13.6205 |
| 1.1188 | 12.0 | 3336 | 0.9505 | 48.3082 | 13.6283 |
| 1.0735 | 13.0 | 3614 | 0.9270 | 48.9782 | 13.6203 |
| 1.0735 | 14.0 | 3892 | 0.9060 | 49.5541 | 13.6311 |
| 1.0269 | 15.0 | 4170 | 0.8869 | 49.9905 | 13.6222 |
| 1.0269 | 16.0 | 4448 | 0.8700 | 50.4806 | 13.6047 |
| 0.9983 | 17.0 | 4726 | 0.8538 | 50.9186 | 13.6159 |
| 0.9647 | 18.0 | 5004 | 0.8398 | 51.3492 | 13.6146 |
| 0.9647 | 19.0 | 5282 | 0.8271 | 51.7219 | 13.5275 |
| 0.9398 | 20.0 | 5560 | 0.8156 | 52.0177 | 13.5756 |
| 0.9398 | 21.0 | 5838 | 0.8053 | 52.3619 | 13.5807 |
| 0.9206 | 22.0 | 6116 | 0.7963 | 52.6051 | 13.5652 |
| 0.9206 | 23.0 | 6394 | 0.7885 | 52.8322 | 13.5669 |
| 0.9012 | 24.0 | 6672 | 0.7818 | 52.9402 | 13.5701 |
| 0.9012 | 25.0 | 6950 | 0.7762 | 53.1182 | 13.5695 |
| 0.8965 | 26.0 | 7228 | 0.7717 | 53.1596 | 13.5612 |
| 0.8836 | 27.0 | 7506 | 0.7681 | 53.3116 | 13.5719 |
| 0.8836 | 28.0 | 7784 | 0.7656 | 53.4399 | 13.5758 |
| 0.8777 | 29.0 | 8062 | 0.7642 | 53.4805 | 13.5737 |
| 0.8777 | 30.0 | 8340 | 0.7636 | 53.5086 | 13.5728 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
monsoon-nlp/czech-movie-rating-2 | c6a95ad2a7999fe16d07f0803ccf0cfe2ebb8b83 | 2022-05-27T09:25:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | monsoon-nlp | null | monsoon-nlp/czech-movie-rating-2 | 3 | null | transformers | 22,484 | Entry not found |
jkhan447/language-detection-Bert-base-uncased-additional | 10aba1c8f3b2ceaec8f801ed7678a6c9634576cb | 2022-05-27T13:02:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/language-detection-Bert-base-uncased-additional | 3 | null | transformers | 22,485 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: language-detection-Bert-base-uncased-additional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-Bert-base-uncased-additional
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2330
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
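A minimal sketch of running the detector; the label names come from the (unspecified) training dataset, so the raw `LABEL_*` ids may need to be mapped back to language names:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="jkhan447/language-detection-Bert-base-uncased-additional",
)
print(detector("¿Dónde está la biblioteca?"))
```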
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/david_lynch | 7fe77b2efd4ac3a8ad1a21440c495c2eef88be4a | 2022-06-19T13:12:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/david_lynch | 3 | null | transformers | 22,486 | ---
language: en
thumbnail: http://www.huggingtweets.com/david_lynch/1655644342827/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/63730229/DL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David Lynch</div>
<div style="text-align: center; font-size: 14px;">@david_lynch</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David Lynch.
| Data | David Lynch |
| --- | --- |
| Tweets downloaded | 912 |
| Retweets | 29 |
| Short tweets | 21 |
| Tweets kept | 862 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/do5yghsd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @david_lynch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ddgwjhcj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ddgwjhcj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/david_lynch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/parishilton | c86644d06daf05375058104a4bbf6925b0939675 | 2022-05-27T11:11:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/parishilton | 3 | null | transformers | 22,487 | ---
language: en
thumbnail: http://www.huggingtweets.com/parishilton/1653649884348/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1519127596868374528/AyJv6gmG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ParisHilton.eth</div>
<div style="text-align: center; font-size: 14px;">@parishilton</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ParisHilton.eth.
| Data | ParisHilton.eth |
| --- | --- |
| Tweets downloaded | 3211 |
| Retweets | 1563 |
| Short tweets | 407 |
| Tweets kept | 1241 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17bxqhg6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @parishilton's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8b45v2wu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8b45v2wu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/parishilton')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jplu/adel-dbpedia-rerank | 3537e8bd4e44bd8ddc672880d4690f9432d88ddf | 2022-05-27T22:01:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jplu | null | jplu/adel-dbpedia-rerank | 3 | null | transformers | 22,488 | Entry not found |
nizamudma/bart-finetuned-cnn-3 | fbbd369289559c18efa4885e8d492589985a0b6e | 2022-05-29T13:54:17.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nizamudma | null | nizamudma/bart-finetuned-cnn-3 | 3 | null | transformers | 22,489 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: bart-finetuned-cnn-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 40.201
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-cnn-3
This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-3](https://huggingface.co/sshleifer/distilbart-xsum-12-3) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0751
- Rouge1: 40.201
- Rouge2: 18.8482
- Rougel: 29.4439
- Rougelsum: 37.416
- Gen Len: 56.7545
## Model description
More information needed
## Intended uses & limitations
More information needed
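A minimal summarization sketch with this checkpoint:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nizamudma/bart-finetuned-cnn-3")
article = "..."  # a long news article goes here
print(summarizer(article, max_length=60, min_length=20, do_sample=False))
```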
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.276 | 1.0 | 8883 | 2.1762 | 39.6581 | 18.3333 | 28.7765 | 36.7688 | 58.5386 |
| 2.0806 | 2.0 | 17766 | 2.0909 | 40.0328 | 18.8026 | 29.417 | 37.3508 | 56.6804 |
| 1.9615 | 3.0 | 26649 | 2.0751 | 40.201 | 18.8482 | 29.4439 | 37.416 | 56.7545 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Splend1dchan/wav2vec2-large-lv60_t5lephone-small_nofreeze_bs16_forMINDS.en.all | d8036f41f6091a719b2e76666f9e072e30ead63c | 2022-05-30T01:08:01.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_t5lephone-small_nofreeze_bs16_forMINDS.en.all | 3 | null | transformers | 22,490 | wav2vec2 -> t5lephone
bs = 16
dropout = 0.1
performance : 40%
```json
{
"architectures": [
"SpeechMixEEDT5"
],
"decoder": {
"_name_or_path": "voidful/phoneme_byt5",
"add_cross_attention": true,
"architectures": [
"T5ForConditionalGeneration"
],
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"d_ff": 3584,
"d_kv": 64,
"d_model": 1472,
"decoder_start_token_id": 0,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout_rate": 0.1,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"is_decoder": true,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-06,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "t5",
"no_repeat_ngram_size": 0,
"num_beam_groups": 1,
"num_beams": 1,
"num_decoder_layers": 4,
"num_heads": 6,
"num_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"relative_attention_max_distance": 128,
"relative_attention_num_buckets": 32,
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": false,
"tokenizer_class": "ByT5Tokenizer",
"top_k": 50,
"top_p": 1.0,
"torch_dtype": "float32",
"torchscript": false,
"transformers_version": "4.17.0",
"typical_p": 1.0,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 384
},
"encoder": {
"_name_or_path": "facebook/wav2vec2-large-lv60",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"add_cross_attention": false,
"apply_spec_augment": true,
"architectures": [
"Wav2Vec2ForPreTraining"
],
"attention_dropout": 0.1,
"bad_words_ids": null,
"bos_token_id": 1,
"chunk_size_feed_forward": 0,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": true,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"cross_attention_hidden_size": null,
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"decoder_start_token_id": null,
"diversity_loss_weight": 0.1,
"diversity_penalty": 0.0,
"do_sample": false,
"do_stable_layer_norm": true,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"length_penalty": 1.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"max_length": 20,
"min_length": 0,
"model_type": "wav2vec2",
"no_repeat_ngram_size": 0,
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_size": 1024,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"prefix": null,
"problem_type": null,
"proj_codevector_dim": 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.17.0",
"typical_p": 1.0,
"use_bfloat16": false,
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
},
"is_encoder_decoder": true,
"model_type": "speechmix",
"torch_dtype": "float32",
"transformers_version": null
}
|
knurm/my-finetuned-xml-roberta2 | d098a3c65233a31c7efd18ffc11f68e0867efb71 | 2022-05-29T19:48:10.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | knurm | null | knurm/my-finetuned-xml-roberta2 | 3 | null | transformers | 22,491 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my-finetuned-xml-roberta2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-xml-roberta2
This model is a fine-tuned version of [knurm/my-finetuned-xml-roberta](https://huggingface.co/knurm/my-finetuned-xml-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4644
## Model description
More information needed
## Intended uses & limitations
More information needed
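No usage notes are provided; as a hedged sketch, an extractive question-answering checkpoint such as this one is typically queried through the `question-answering` pipeline (the question/context pair below is an arbitrary illustration):
```
# Minimal sketch; the question/context pair is an invented example.
from transformers import pipeline

qa = pipeline("question-answering", model="knurm/my-finetuned-xml-roberta2")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```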
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.4491 | 1.0 | 5652 | 3.3339 |
| 3.171 | 2.0 | 11304 | 3.2681 |
| 2.9518 | 3.0 | 16956 | 3.3003 |
| 2.7305 | 4.0 | 22608 | 3.3447 |
| 2.5974 | 5.0 | 28260 | 3.4644 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
stevemobs/deberta-base-finetuned-squad1-newsqa | 73ddebd0c775849482b6d6c03fab0bc1c64b4087 | 2022-05-29T21:46:10.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/deberta-base-finetuned-squad1-newsqa | 3 | null | transformers | 22,492 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-base-finetuned-squad1-newsqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad1-newsqa
This model is a fine-tuned version of [stevemobs/deberta-base-finetuned-squad1](https://huggingface.co/stevemobs/deberta-base-finetuned-squad1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
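The card gives no usage notes; as a hedged sketch, the answer span can be extracted directly from the start/end logits (the question/context pair is an invented example):
```
# Hedged sketch; the example question/context is arbitrary.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "stevemobs/deberta-base-finetuned-squad1-newsqa"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "When was the report published?"
context = "The annual report was published in March 2021 by the city archive."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())  # most likely answer start token
end = int(outputs.end_logits.argmax())      # most likely answer end token
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```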
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6703 | 1.0 | 17307 | 0.7207 |
| 0.4775 | 2.0 | 34614 | 0.7556 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Santarabantoosoo/Clinical-Longformer-MLM-opnote | 65cf2903c19e382d9ff830095c651d42e5160d42 | 2022-05-30T08:23:25.000Z | [
"pytorch",
"tensorboard",
"longformer",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Santarabantoosoo | null | Santarabantoosoo/Clinical-Longformer-MLM-opnote | 3 | null | transformers | 22,493 | ---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-MLM-opnote
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-MLM-opnote
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8286
## Model description
More information needed
## Intended uses & limitations
More information needed
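The card gives no usage notes; as a hedged sketch, a masked-language-model checkpoint like this one can be probed with the `fill-mask` pipeline (the sentence below is invented, not taken from the training data):
```
# Minimal sketch; the clinical-style sentence is an invented example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Santarabantoosoo/Clinical-Longformer-MLM-opnote")
mask = fill_mask.tokenizer.mask_token  # avoids hard-coding <mask> vs. [MASK]
for pred in fill_mask(f"The patient was transferred to the recovery {mask} after surgery.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```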
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 266 | 0.9606 |
| 1.1655 | 2.0 | 532 | 0.8677 |
| 1.1655 | 3.0 | 798 | 0.8195 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.10.1
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Violetto/my-dialogue-summarization-model | a9049baccbba00af43f14d28b0e6323d0a11c662 | 2022-05-31T07:41:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Violetto | null | Violetto/my-dialogue-summarization-model | 3 | null | transformers | 22,494 | Entry not found |
AbhilashDatta/T5_qgen-squad_v1 | c56492073e432cc6def9bb1455bf2777f97e7f01 | 2022-05-30T16:19:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | AbhilashDatta | null | AbhilashDatta/T5_qgen-squad_v1 | 3 | null | transformers | 22,495 | ---
license: afl-3.0
---
# Question generation using T5 transformer trained on SQuAD
<h2> <i>Input format: context: "..." answer: "..." </i></h2>
Import the pretrained model as well as the tokenizer:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained('AbhilashDatta/T5_qgen-squad_v1')
tokenizer = T5Tokenizer.from_pretrained('AbhilashDatta/T5_qgen-squad_v1')
```
Then use the tokenizer to encode/decode and the model to generate:
```
input = "context: My name is Abhilash Datta. answer: Abhilash"
batch = tokenizer(input, padding='longest', max_length=512, return_tensors='pt')
inputs_batch = batch['input_ids'][0]
inputs_batch = torch.unsqueeze(inputs_batch, 0)
ques_id = model.generate(inputs_batch, max_length=100, early_stopping=True)
ques_batch = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in ques_id]
print(ques_batch)
```
Output:
```
['what is my name']
``` |
Splend1dchan/xtreme_s_xlsr_300m_minds14.en-US | c8b593fe3f4ebedccab6c8ad5e0a856f8a40dc40 | 2022-05-30T09:36:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Splend1dchan | null | Splend1dchan/xtreme_s_xlsr_300m_minds14.en-US | 3 | null | transformers | 22,496 | {'eval_loss': 0.8433557152748108, 'eval_f1': 0.7774690927124368, 'eval_accuracy': 0.7943262411347518, 'eval_runtime': 15.6704, 'eval_samples_per_second': 17.996, 'eval_steps_per_second': 1.149, 'epoch': 49.73} |
jkhan447/sarcasm-detection-RoBerta-base-CR | 2c536ba4c833e1f822f2984662bcb1a7565778f9 | 2022-05-30T14:57:19.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/sarcasm-detection-RoBerta-base-CR | 3 | null | transformers | 22,497 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base-CR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base-CR
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0240
- Accuracy: 0.726
## Model description
More information needed
## Intended uses & limitations
More information needed
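The card gives no usage notes; as a hedged sketch, the classifier can be called through the `text-classification` pipeline. The label names (likely `LABEL_0`/`LABEL_1`) and their mapping to sarcastic versus non-sarcastic are not documented here, so treat them as unknowns:
```
# Minimal sketch; the label-to-meaning mapping is not documented in this card.
from transformers import pipeline

detector = pipeline("text-classification", model="jkhan447/sarcasm-detection-RoBerta-base-CR")
print(detector("Oh great, another Monday. Exactly what I needed."))
```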
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
wuesten/sentiment-analysis-fh-kiel | 3532cd2a1d9bb2a7ff1f2bae18e998911870a5d2 | 2022-05-30T19:17:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | wuesten | null | wuesten/sentiment-analysis-fh-kiel | 3 | 1 | transformers | 22,498 | # Sentiment Analysis
For a university project at the University of Applied Sciences Kiel, we performed a sentiment analysis aimed at classifying restaurant ratings. The pre-trained transformer "nlptown/bert-base-multilingual-uncased-sentiment" served as the starting point and reached an accuracy of 63% out of the box. Building on this, the transformer was fine-tuned on Yelp ratings from Hamburg, which raised the accuracy to 82%.
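A minimal usage sketch follows; the returned label scheme is assumed to match the 1-5 star labels of the base model, which this card does not confirm:
```
# Minimal sketch; the label scheme is an assumption, not documented in this card.
from transformers import pipeline

sentiment = pipeline("text-classification", model="wuesten/sentiment-analysis-fh-kiel")
# "The food was outstanding and the service very friendly."
print(sentiment("Das Essen war hervorragend und der Service sehr freundlich."))
```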
|
srini98/distilbert-base-uncasedclassification | 8c2d39e6a535caa372d161e23f38f8e02ff8aed3 | 2022-05-30T15:59:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | srini98 | null | srini98/distilbert-base-uncasedclassification | 3 | null | transformers | 22,499 | Entry not found |