modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lysandre/dum | 7ca05142c3d15590084e70249b3687b66c4aeba3 | 2022-06-14T08:45:44.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sst2",
"transformers",
"license:apache-2.0"
] | text-classification | false | lysandre | null | lysandre/dum | 23 | null | transformers | 7,900 | ---
language: en
license: apache-2.0
datasets:
- sst2
---
# Sentiment Analysis
This is a BERT model fine-tuned for sentiment analysis. |
manishiitg/distilrobert-base-squadv2-328seq-128stride-test | 8776dc47fd58e19672d4be7a864c186efa236f18 | 2021-05-20T17:43:42.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manishiitg | null | manishiitg/distilrobert-base-squadv2-328seq-128stride-test | 23 | null | transformers | 7,901 | Entry not found |
maple/bert-large-cased | 307bf5fe1b972a63b748580a8e6d6dc7bac912e0 | 2022-01-03T07:39:42.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | maple | null | maple/bert-large-cased | 23 | null | transformers | 7,902 | Entry not found |
moussaKam/frugalscore_tiny_bert-base_mover-score | e691626ad7864f8c433a77cf4a8946b2c0c79452 | 2022-05-11T11:04:23.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"transformers"
] | text-classification | false | moussaKam | null | moussaKam/frugalscore_tiny_bert-base_mover-score | 23 | null | transformers | 7,903 | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
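As a rough, hedged sketch (not the official scoring script), a candidate/reference pair can be scored with plain Transformers by loading one of the checkpoints listed below; this assumes the checkpoints behave as sequence-pair regression models, which is how they are distilled, while the canonical scoring code lives in the project repo above.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint is a single-logit regression model over (candidate, reference) pairs.
model_name = "moussaKam/frugalscore_tiny_bert-base_mover-score"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

candidates = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# Score each (candidate, reference) pair; the regression logit is the metric value.
inputs = tokenizer(candidates, references, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)
print(scores)
```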
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
mrm8488/distilgpt2-finetuned-bookcopus-10 | ec1424e27449a3cee5c6aea36c60d241c70ab140 | 2021-05-23T10:21:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/distilgpt2-finetuned-bookcopus-10 | 23 | null | transformers | 7,904 | Entry not found |
mrm8488/distilroberta-base-finetuned-suicide-depression | 7574c32aa783a63116539e20f97f8a0c336220bd | 2021-10-14T09:26:23.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/distilroberta-base-finetuned-suicide-depression | 23 | 3 | transformers | 7,905 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- text: "It's in the back of my mind. I'm not sure I'll be ok. Not sure I can deal with this. I'll try...I will try. Even though it's hard to see the point. But...this still isn't off the table."
model-index:
- name: distilroberta-base-finetuned-suicide-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-suicide-depression
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6622
- Accuracy: 0.7158
## Model description
Just a **POC** of a Transformer fine-tuned on the [SDCNL](https://github.com/ayaanzhaque/SDCNL) dataset for suicide (label 1) or depression (label 0) detection in tweets.
**DO NOT use it in production**
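A minimal inference sketch (not part of the original card, for experimentation only per the warning above); note that the returned label names come from the exported config and may be the generic LABEL_0/LABEL_1.
```python
from transformers import pipeline

# Hedged sketch: query the POC classifier through the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-base-finetuned-suicide-depression",
)
print(classifier("It's in the back of my mind. I'm not sure I'll be ok."))
```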
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 214 | 0.6204 | 0.6632 |
| No log | 2.0 | 428 | 0.6622 | 0.7158 |
| 0.5244 | 3.0 | 642 | 0.7312 | 0.6684 |
| 0.5244 | 4.0 | 856 | 0.9711 | 0.7105 |
| 0.2876 | 5.0 | 1070 | 1.1620 | 0.7 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
|
mrm8488/wav2vec2-large-xlsr-53-spanish | 9539f36ad626a8abadc64e2b904fe3f0ff37bc49 | 2021-07-06T13:14:39.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mrm8488 | null | mrm8488/wav2vec2-large-xlsr-53-spanish | 23 | 1 | transformers | 7,906 | ---
language: es
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Spanish Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: ???
---
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "es", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Spanish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found ??? |
mys/mt5-small-turkish-question-paraphrasing | fd37d138c0d210a3e6f33dc734c47af4aa2c47e6 | 2021-11-07T08:26:51.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mys | null | mys/mt5-small-turkish-question-paraphrasing | 23 | 2 | transformers | 7,907 | ## Overview
This model is a finetuned version of [mt5-small](https://huggingface.co/google/mt5-small) for the question paraphrasing task in Turkish. As a generator model, its capabilities are still being investigated, and there is an ongoing effort to improve it further. You can raise an issue [in this GitHub repo](https://github.com/monatis/tqp) for any comments, suggestions, or interesting findings when using this model.
## Usage
You can generate 5 paraphrases for an input question with the simple code below.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "mys/mt5-small-turkish-question-paraphrasing"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokens = tokenizer.encode_plus("Yarın toplantı kaçta başlıyor?", return_tensors='pt')
paraphrases = model.generate(tokens['input_ids'], max_length=128, num_return_sequences=5, num_beams=5)
tokenizer.batch_decode(paraphrases, skip_special_tokens=True)
```
And the output will be something like:
```shell
['Yarın toplantı ne zaman başlıyor?',
'Yarın toplantı saat kaçta başlıyor?',
'Yarın toplantı saat kaçta başlar?',
'Yarın toplantı ne zaman başlayacak?',
'Yarın toplantı ne zaman başlar?']
```
## Dataset
I used the [TQP dataset V0.1](https://github.com/monatis/tqp) that I published just recently. This model should be taken as a baseline model for the TQP dataset. Cleaning and further improving the dataset, together with more elaborate hyperparameter tuning, may boost performance.
## Citation
If you find the dataset or model useful for your research, [consider citation](https://zenodo.org/record/4719801#.YIbI45AzZPZ). |
navsad/navid_test_bert | 56b40fd8d9564d563bd5523a5a119323f85d61b8 | 2022-02-02T04:52:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | navsad | null | navsad/navid_test_bert | 23 | null | transformers | 7,908 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: navid_test_bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5834463254140851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# navid_test_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8149
- Matthews Correlation: 0.5834
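A hedged minimal inference sketch (not part of the original card): the fine-tuned CoLA classifier can be queried through the standard pipeline; the label names are whatever the exported config defines.
```python
from transformers import pipeline

# Hypothetical usage example for the fine-tuned GLUE/CoLA checkpoint.
classifier = pipeline("text-classification", model="navsad/navid_test_bert")
# CoLA-style linguistic acceptability judgement on an example sentence.
print(classifier("The book was written by John."))
```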
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4598 | 1.0 | 1069 | 0.4919 | 0.5314 |
| 0.3228 | 2.0 | 2138 | 0.6362 | 0.5701 |
| 0.17 | 3.0 | 3207 | 0.8149 | 0.5834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
pandyaved98/DialoGPT-small-AlchemyBot | 877a86d93e27c72935fc14e0d455b9971fbf27f6 | 2021-11-29T15:33:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pandyaved98 | null | pandyaved98/DialoGPT-small-AlchemyBot | 23 | 1 | transformers | 7,909 | ---
tags:
- conversational
---
# AlchemyBot DialoGPT Model |
panggi/t5-small-indonesian-summarization-cased | b0c72296041ebf885d071be74c1590844069c7c4 | 2020-12-19T18:01:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"id",
"dataset:indosum",
"transformers",
"pipeline:summarization",
"summarization",
"autotrain_compatible"
] | summarization | false | panggi | null | panggi/t5-small-indonesian-summarization-cased | 23 | null | transformers | 7,910 | ---
language: id
tags:
- pipeline:summarization
- summarization
- t5
datasets:
- indosum
---
# Indonesian T5 Summarization Small Model
Finetuned T5 small summarization model for Indonesian.
## Finetuning Corpus
The `t5-small-indonesian-summarization-cased` model is based on `t5-small-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned on the [indosum](https://github.com/kata-ai/indosum) dataset.
## Load Finetuned Model
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
```
## Code Sample
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased")
# https://www.sehatq.com/artikel/apa-itu-dispepsia-fungsional-ketahui-gejala-dan-faktor-risikonya
ARTICLE_TO_SUMMARIZE = "Secara umum, dispepsia adalah kumpulan gejala pada saluran pencernaan seperti nyeri, sensasi terbakar, dan rasa tidak nyaman pada perut bagian atas. Pada beberapa kasus, dispepsia yang dialami seseorang tidak dapat diketahui penyebabnya. Jenis dispepsia ini disebut dengan dispepsia fungsional. Apa saja gejala dispepsia fungsional? Apa itu dispepsia fungsional? Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas atau ulu hati. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. Dispepsia ini memiliki nama “fungsional” karena kumpulan gejalanya tidak memiliki penyebab yang jelas. Dilihat dari fungsi dan struktur saluran pencernaan, dokter tidak menemukan hal yang salah. Namun, gejalanya bisa sangat mengganggu dan menyiksa. Dispepsia fungsional disebut juga dengan dispepsia nonulkus. Diperkirakan bahwa 20% masyarakat dunia menderita dispepsia fungsional. Kondisi ini berisiko tinggi dialami oleh wanita, perokok, dan orang yang mengonsumsi obat anti-peradangan nonsteroid (NSAID). Dispepsia fungsional bisa bersifat kronis dan mengganggu kehidupan penderitanya. Namun beruntung, ada beberapa strategi yang bisa diterapkan untuk mengendalikan gejala dispepsia ini. Strategi tersebut termasuk perubahan gaya hidup, obat-obatan, dan terapi.Ragam gejala dispepsia fungsional Gejala dispepsia fungsional dapat bervariasi antara satu pasien dengan pasien lain. Beberapa tanda yang bisa dirasakan seseorang, yaitu: Sensasi terbakar atau nyeri di saluran pencernaan bagian atas Perut kembung Cepat merasa kenyang walau baru makan sedikit Mual Muntah Bersendawa Rasa asam di mulut Penurunan berat badan Tekanan psikologis terkait dengan kondisi yang dialami Apa sebenarnya penyebab dispepsia fungsional? Sebagai penyakit fungsional, dokter mengkategorikan dispepsia ini sebagai penyakit yang tidak diketahui penyebabnya. Hanya saja, beberapa faktor bisa meningkatkan risiko seseorang terkena dispepsia fungsional. Faktor risiko tersebut, termasuk: Alergi terhadap zat tertentu Perubahan mikrobioma usus Infeksi, seperti yang dipicu oleh bakteriHelicobacter pylori Sekresi asam lambung yang tidak normal Peradangan pada saluran pencernaan bagian atas Gangguan pada fungsi lambung untuk mencerna makanan Pola makan tertentu Gaya hidup tidak sehat Stres Kecemasan atau depresi Efek samping pemakaian obat seperti obat antiinflamasi nonsteroid Penanganan untuk dispepsia fungsional Ada banyak pilihan pengobatan untuk dispepsia fungsional. Seperti yang disampaikan di atas, tidak ada penyebab tunggal dispepsia ini yang bisa diketahui. Gejala yang dialami antara satu pasien juga mungkin amat berbeda dari orang lain. Dengan demikian, jenis pengobatan dispepsia fungsional juga akan bervariasi. Beberapa pilihan strategi penanganan untuk dispepsia fungsional, meliputi: 1. Obat-obatan Ada beberapa jenis obat yang mungkin akan diberikan dokter, seperti Obat penetral asam lambung yang disebut penghambat reseptor H2 Obat penghambat produksi asam lambung yang disebut proton pump inhibitors Obat untuk mengendalikan gas di perut yang mengandung simetikon Antidepresan seperti amitriptyline Obat penguat kerongkongan yang disebut agen prokinetik Obat untuk pengosongan isi lambung seperti metoclopramide Antibiotik jika dokter mendeteksi adanya infeksi bakteri H. pylori 2. 
Anjuran terkait perubahan gaya hidup Selain obat-obatan, dokter akan memberikan rekomendasi perubahan gaya hidup yang harus diterapkan pasien. Tips terkait perubahan gaya hidup termasuk: Makan lebih sering namun dengan porsi yang lebih sedikit Menjauhi makanan berlemak karena memperlambat pengosongan makanan di lambung Menjauhi jenis makanan lain yang memicu gejala dispepsia, seperti makanan pedas, makanan tinggi asam, produk susu, dan produk kafein Menjauhi rokok Dokter juga akan meminta pasien untuk mencari cara untuk mengendalikan stres, tidur dengan kepala lebih tinggi, dan menjalankan usaha untuk mengendalikan berat badan. Apakah penyakit dispepsia itu berbahaya? Dispepsia, termasuk dispepsia fungsional, dapat menjadi kronis dengan gejala yang menyiksa. Jika tidak ditangani, dispepsia tentu dapat berbahaya dan mengganggu kehidupan pasien. Segera hubungi dokter apabila Anda merasakan gejala dispepsia, terlebih jika tidak merespons obat-obatan yang dijual bebas. Catatan dari SehatQ Dispepsia fungsional adalah kumpulan gejala pada saluran pencernaan bagian atas yang tidak diketahui penyebabnya. Dispepsia fungsional dapat ditangani dengan kombinasi obat-obatan dan perubahan gaya hidup. Jika masih memiliki pertanyaan terkait dispepsia fungsional, Anda bisa menanyakan ke dokter di aplikasi kesehatan keluarga SehatQ. Aplikasi SehatQ bisa diunduh gratis di Appstore dan Playstore yang berikan informasi penyakit terpercaya."
# generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
max_length=100,
num_beams=2,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:
```
'Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih.
```
## Acknowledgement
Thanks to Immanuel Drexel for his article [Text Summarization, Extractive, T5, Bahasa Indonesia, Huggingface’s Transformers](https://medium.com/analytics-vidhya/text-summarization-t5-bahasa-indonesia-huggingfaces-transformers-ee9bfe368e2f)
|
plum/bert-large-cased | 6462870803e95b422dadb3e5ab15166878708330 | 2022-01-04T23:05:59.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | plum | null | plum/bert-large-cased | 23 | null | transformers | 7,911 | Entry not found |
pritamdeka/S-Scibert-snli-multinli-stsb | 314f56eb315692762db7c5b1d02a8f14193685b4 | 2022-05-09T10:03:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | pritamdeka | null | pritamdeka/S-Scibert-snli-multinli-stsb | 23 | null | sentence-transformers | 7,912 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/S-Scibert-snli-multinli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-Scibert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-Scibert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-Scibert-snli-multinli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pritamdeka/S-Scibert-snli-multinli-stsb)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pritoms/gpt-neo-125M-philosophical-investigation | 17164b7c802d0351afedba6e4d4e9a8ef71c7d97 | 2022-01-11T06:18:34.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/gpt-neo-125M-philosophical-investigation | 23 | null | transformers | 7,913 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-philosophical-investigation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-philosophical-investigation
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 3.4901 |
| No log | 2.0 | 14 | 3.4550 |
| No log | 3.0 | 21 | 3.4443 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
royeis/T5-Factual-Classifier-V1 | bea492321f59f322094eabc737fc389ee0f47601 | 2021-06-23T14:01:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | royeis | null | royeis/T5-Factual-Classifier-V1 | 23 | null | transformers | 7,914 | Entry not found |
sagittariusA/gender_classifier_cs | 090c3e9855bd1f2a5a654d3ed98da9d7b74559d2 | 2021-11-09T22:41:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | sagittariusA | null | sagittariusA/gender_classifier_cs | 23 | null | transformers | 7,915 | Entry not found |
sanchit-gandhi/wav2vec2-2-gpt2-grid-search | 9f69036d12f516265208efd8c02bdf3bdf692989 | 2022-03-07T13:18:03.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-gpt2-grid-search | 23 | null | transformers | 7,916 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sentence-transformers/distilbert-base-nli-max-tokens | b406dc6411aa5f75c3703b7aa06851c8ddff8916 | 2022-06-16T00:21:25.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | sentence-transformers | null | sentence-transformers/distilbert-base-nli-max-tokens | 23 | null | sentence-transformers | 7,917 | ---
pipeline_tag: feature-extraction
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/distilbert-base-nli-max-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distilbert-base-nli-max-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
return torch.max(token_embeddings, 1)[0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-base-nli-max-tokens')
model = AutoModel.from_pretrained('sentence-transformers/distilbert-base-nli-max-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-base-nli-max-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
severo/autonlp-sentiment_detection-1781580 | 179a1d752c5cdd039bfe70ff785f0e1999b7cacb | 2021-06-18T18:20:55.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:severo/autonlp-data-sentiment_detection-3c8bcd36",
"transformers",
"autonlp"
] | text-classification | false | severo | null | severo/autonlp-sentiment_detection-1781580 | 23 | 1 | transformers | 7,918 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- severo/autonlp-data-sentiment_detection-3c8bcd36
---
# Model Trained Using AutoNLP
_debug - I want to update this model_
- Problem type: Binary Classification
- Model ID: 1781580
## Validation Metrics
- Loss: 0.16026505827903748
- Accuracy: 0.9426
- Precision: 0.9305057745917961
- Recall: 0.95406288280931
- AUC: 0.9861051024994563
- F1: 0.9421370967741935
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/severo/autonlp-sentiment_detection-1781580
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("severo/autonlp-sentiment_detection-1781580", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("severo/autonlp-sentiment_detection-1781580", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
toastynews/xlnet-hongkongese-base | e4a8655e729603edc8e54baa3ce2b95dfb757342 | 2020-07-07T17:52:07.000Z | [
"pytorch",
"tf",
"xlnet",
"text-generation",
"yue",
"transformers",
"license:apache-2.0"
] | text-generation | false | toastynews | null | toastynews/xlnet-hongkongese-base | 23 | null | transformers | 7,919 | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# XLNet Hongkongese Base
## Model description
An XLNet model trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used, so this model will lack the breadth of knowledge of other Chinese models.
#### How to use
This is the base model trained from the official repo. Further finetuning will be needed for use on downstream tasks. It can also be used to generate text.
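A hedged minimal sketch (not from the original card) of the two usages described above: loading the checkpoint for downstream fine-tuning, and generating text through the Transformers pipeline; the prompt and label count are only illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "toastynews/xlnet-hongkongese-base"

# Load the checkpoint for downstream fine-tuning (e.g. with a binary classification head).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Or generate text directly; as noted below, a longer context gives better output.
generator = pipeline("text-generation", model=model_name)
print(generator("香港今日天氣唔錯,", max_length=50))
```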
#### Limitations and bias
The training data consists mostly of news articles and blogs. There is probably a bias towards formal language usage.
For text generation, as with other XLNet models, a longer context helps generate better text. The overall result is not as good as GPT-2.
## Training data
The following is the list of data sources. The total is about 507M characters.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
The model was trained on a single TPUv3 using the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 32 |
| Max Sequence Size | 512 |
| Vocab Size | 32000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs, compared using the original repo model and code. Chinese models are available from the [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl).
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 82.8 / 91.8 | 79.8 | 70.7 | 72.0 / 78.9*|
| Hongkongese | 76.1 / 76.1 | 81.4 | 69.5 | 66.7 / 87.3*|
\* With the default of 3 epochs, 6 of 10 Chinese finetuned models have an accuracy of 66.7 (always-negative baseline). All Hongkongese finetuned models have an accuracy of 66.7. The \* values are the accuracy after 24 epochs. |
tuhailong/cross-encoder-bert-base | 61198bd89bd36b9667aec7a66441e2a2e473fcd2 | 2022-04-20T02:42:39.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:dialogue",
"transformers",
"sbert"
] | text-classification | false | tuhailong | null | tuhailong/cross-encoder-bert-base | 23 | null | transformers | 7,920 | ---
language: zh
tags:
- sbert
datasets:
- dialogue
---
# Data
The training data consists of similar-sentence pairs from e-commerce dialogue, about 200k sentence pairs.
## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); the model architecture is a cross-encoder.
### Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder('tuhailong/cross-encoder')
>>> scores = model.predict([["今天天气不错", "今天心情不错"]])
>>> print(scores)
``` |
uer/chinese_roberta_L-6_H-512 | 34d41591eebf00951e61a614af808468c4c9bbfc | 2022-07-15T08:13:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-6_H-512 | 23 | null | transformers | 7,921 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. To make it easier for users to reproduce the results, we used the publicly available corpus and provide all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are the scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
verloop/Hinglish-Bert-Class | 4d1ffe6c3a246398b9da0d451f83a892ffa18635 | 2021-05-20T08:56:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | verloop | null | verloop/Hinglish-Bert-Class | 23 | 1 | transformers | 7,922 | Entry not found |
vidhur2k/mBERT-French-Mono | 8d47dd80aed14404af78737057820be8997758cf | 2021-12-03T04:50:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-French-Mono | 23 | null | transformers | 7,923 | Entry not found |
wangfan/jdt-fin-roberta-wwm | ef5ec78cb8e478e327569bbca955140ab5908b39 | 2022-05-19T03:40:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:finance",
"transformers",
"roberta-wwm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | wangfan | null | wangfan/jdt-fin-roberta-wwm | 23 | null | transformers | 7,924 | ---
language: zh
tags:
- roberta-wwm
license: apache-2.0
datasets:
- finance
---
Pre-trained language models are used more and more frequently across our businesses. To achieve better results on tasks in financial scenarios, we release the jdt-fin-roberta-wwm model.
#### Models & Download
* `base` model: 12-layer, 768-hidden, 12-heads, 110M parameters
| Model | Download |
| :----: | :----: |
| fin-roberta-wwm | [Tensorflow](https://3.cn/103c-hwSS)/[Pytorch](https://3.cn/103c-izpe) |
| fin-roberta-wwm-large | todo |
#### Quick Load
With [Huggingface-Transformers](https://github.com/huggingface/transformers), the models above can be loaded easily.
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME")
```
**Note: all models in this repository should be loaded with BertTokenizer and BertModel; do not use RobertaTokenizer/RobertaModel!**
The corresponding `MODEL_NAME` values are listed below:
| Model | MODEL_NAME |
| - | - |
| fin-roberta-wwm | wangfan/jdt-fin-roberta-wwm |
| fin-roberta-wwm-large | todo |
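For example, a hedged minimal sketch of masked-word prediction with the base checkpoint above (the sample sentence is only illustrative):
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

model_name = "wangfan/jdt-fin-roberta-wwm"
# Per the note above, use BertTokenizer/Bert* classes rather than Roberta* classes.
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)

# Fill-mask prediction on a financial sentence (illustrative example).
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("央行宣布下调存款准备金[MASK]。"))
```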
#### Task Performance
| Task | NER | Relation Extraction | Event Extraction | Indicator Extraction | Entity Linking |
|:----:|:----:|:------:|:-------:|:-------:|:------:|
| Our |93.88| 79.02 | 91.99 | 94.28| 86.72 |
| Roberta-wwm |93.47| 76.99 | 91.58 | 93.98| 85.20 |
|
Anthos23/FS-distilroberta-fine-tuned | 2850ed4017485cdae8f78ea0ae2ab658206be8a2 | 2022-03-04T13:00:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Anthos23 | null | Anthos23/FS-distilroberta-fine-tuned | 23 | null | transformers | 7,925 | Entry not found |
facebook/wav2vec2-base-fr-voxpopuli-v2 | b610edc383f3af2cef6361656fda66885201d026 | 2022-02-27T13:12:05.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-fr-voxpopuli-v2 | 23 | 1 | transformers | 7,926 | ---
language: fr
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **fr** using **22.8k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **fr**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
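Without a fine-tuned CTC head, the checkpoint can still be used as a speech encoder. A hedged minimal sketch (not from the original card), assuming the repo ships the standard Wav2Vec2 preprocessor config:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_name = "facebook/wav2vec2-base-fr-voxpopuli-v2"
# Assumption: a preprocessor config is available in the repo.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

# Placeholder input: one second of 16kHz audio as a 1-D float array.
speech = torch.zeros(16_000).numpy()
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```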
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
nguyenvulebinh/spoken-norm-taggen | 7e90e9609d242fc7a268e5e707d0eae63d2ab0ea | 2022-03-01T09:10:45.000Z | [
"pytorch",
"transformers",
"license:cc-by-nc-4.0"
] | null | false | nguyenvulebinh | null | nguyenvulebinh/spoken-norm-taggen | 23 | 1 | transformers | 7,927 | ---
license: cc-by-nc-4.0
---
|
datnth1709/Phobert-classifier | 412842e9758e77ff6457ba1a205a5fb440b1c8ba | 2022-03-02T18:29:53.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"arxiv:2003.00744",
"transformers",
"autotrain_compatible"
] | fill-mask | false | datnth1709 | null | datnth1709/Phobert-classifier | 23 | null | transformers | 7,928 | # <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
 - Two PhoBERT versions, "base" and "large", are the first public large-scale monolingual language models pre-trained for Vietnamese. The PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md), which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
**Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
### Installation <a name="install2"></a>
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
### Pre-trained models <a name="models2"></a>
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```
|
l3cube-pune/hing-mbert | 2eed9350653a8a7601042cc6afa6ca1065f20b97 | 2022-06-26T15:12:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"hi",
"en",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"transformers",
"codemix",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | l3cube-pune | null | l3cube-pune/hing-mbert | 23 | 1 | transformers | 7,929 | ---
license: cc-by-4.0
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingMBERT
HingMBERT is a Hindi-English code-mixed BERT model trained on romanized text. It is an mBERT model fine-tuned on L3Cube-HingCorpus.
<br>
[Dataset link](https://github.com/l3cube-pune/code-mixed-nlp)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
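A minimal usage sketch with the standard `transformers` fill-mask pipeline (the `[MASK]` token follows mBERT conventions and the Hinglish example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/hing-mbert")
# romanized Hindi-English (Hinglish) example
print(fill_mask("mujhe yeh movie bahut [MASK] lagi"))
```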
```
@InProceedings{nayak-joshi:2022:WILDRE6,
author = {Nayak, Ravindra and Joshi, Raviraj},
title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7--12}
}
``` |
mitiku/AmharicWICPostag10Tags | d4551db9e59a9e0e67f7e960fbf9a0e5ad9067f6 | 2022-03-20T10:11:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | mitiku | null | mitiku/AmharicWICPostag10Tags | 23 | null | transformers | 7,930 | ---
tags:
- generated_from_trainer
model-index:
- name: AmharicWICPostag10Tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicWICPostag10Tags
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
sdadas/polish-longformer-large-4096 | 1ef68e01fe87aec12e1060f307ea1829b535bab6 | 2022-03-08T18:15:18.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"license:lgpl-3.0",
"autotrain_compatible"
] | fill-mask | false | sdadas | null | sdadas/polish-longformer-large-4096 | 23 | null | transformers | 7,931 | ---
license: lgpl-3.0
---
|
ShihTing/HealthBureauSix | 04cb59a958fce6af0408d9f95922bad15b7237c7 | 2022-03-27T04:45:41.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"transformers",
"autonlp"
] | text-classification | false | ShihTing | null | ShihTing/HealthBureauSix | 23 | 1 | transformers | 7,932 | ---
tags: autonlp
language: unk
widget:
- text: "民眾來電反映:事由:護士態度惡劣,對病人大吼大叫,對於態度惡劣的人卻於與錄用,敬請相關單位改善"
- text: "民眾來電:
時間:2016年3月24號至2019年10月26號
地點:三軍總醫院 北投分院
事由:民眾表揚上述地點及時間有些醫護人員很優秀、親切、具有專業服務水準、好相處(2病房的護理師陳怡鎮、歐素玲、陳芊糖,7病房蔡閔儒,12病房林哲玄、黃仙怡,主治醫師楊蕙年)
訴求:敬請相關單位給予表揚與肯定
"
- text: "本人之先生2-3年前接受吳醫師植牙治療,本人之先生已付完植牙醫療費用,但吳醫師尚未完成本人先生之植牙,診所即關閉,導致本人先生植牙之牙體未鎖緊且不斷發炎、無法咀嚼,精神跟身體上都受到傷害,去別家牙醫診所看診也沒有醫師願意處理。後本人發現吳醫師有在XX牙醫診所(台北市)看診,本人之先生去該診所再請吳醫師協助處理原本植牙方面問題,但診所跟本人先生收取3萬5的材料費,本人認為不合理,本人已付完當初植牙費用,且是吳醫師當初未處理好,應該全權負責,現在再收取醫療費用,實在不合理。"
---
Health Bureau text classification -> six categories

Data split with random_state=43
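A minimal usage sketch with the `transformers` text-classification pipeline; the input below is taken from the widget examples above, and the returned label names depend on this model's configuration:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ShihTing/HealthBureauSix")
text = "民眾來電反映:事由:護士態度惡劣,對病人大吼大叫,對於態度惡劣的人卻於與錄用,敬請相關單位改善"
print(classifier(text))
```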
|
Alvenir/bert-punct-restoration-de | b2527bd6afd759df3fc48015f284783a75633518 | 2022-03-23T08:43:29.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Alvenir | null | Alvenir/bert-punct-restoration-de | 23 | null | transformers | 7,933 | ---
license: apache-2.0
---
TODO |
hamedkhaledi/persain-flair-ner | 18387458cd56aecfa6d2f163eb33372d22a68ead | 2022-04-03T22:22:20.000Z | [
"pytorch",
"fa",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | hamedkhaledi | null | hamedkhaledi/persain-flair-ner | 23 | 1 | flair | 7,934 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: fa
dataset:
- NSURL-2019
widget:
- text: "آخرین مقام برجسته ژاپنی که پس از انقلاب 57 تاکنون به ایران سفر کرده است شینتارو آبه است."
---
## Persian NER in Flair
This is a universal named-entity recognition model for Persian, for use with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **84.03** (NSURL-2019)
Predicts NER tags:
| **tag** | **meaning** |
|:---------------------------------:|:-----------:|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| DAT | date |
| TIM | time |
| PCT | percent |
| MON | money |
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("hamedkhaledi/persain-flair-ner")
# make example sentence
sentence = Sentence("آخرین مقام برجسته ژاپنی که پس از انقلاب 57 تاکنون به ایران سفر کرده است شینتارو آبه است.")
tagger.predict(sentence)
#print result
print(sentence.to_tagged_string())
```
This yields the following output:
```
آخرین مقام برجسته ژاپنی که پس از انقلاب 57 <B-DAT> تاکنون به ایران <B-LOC> سفر کرده است شینتارو <B-PER> آبه <I-PER> است .
```
---
### Results
- F-score (micro) 0.8403
- F-score (macro) 0.8656
- Accuracy 0.7357
```
By class:
precision recall f1-score support
LOC 0.8789 0.8589 0.8688 4083
ORG 0.8390 0.7653 0.8005 3166
PER 0.8395 0.8169 0.8280 2741
DAT 0.8648 0.7957 0.8288 1150
MON 0.9758 0.9020 0.9374 357
TIM 0.8500 0.8193 0.8344 166
PCT 0.9615 0.9615 0.9615 156
micro avg 0.8616 0.8200 0.8403 11819
macro avg 0.8871 0.8456 0.8656 11819
weighted avg 0.8613 0.8200 0.8400 11819
samples avg 0.7357 0.7357 0.7357 11819
Loss: 0.06893542408943176
``` |
timpal0l/xlm-roberta-base-faq-extractor | 07ed39d541dab1256c385a403db198d8cbbd54cf | 2022-03-27T21:00:09.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | timpal0l | null | timpal0l/xlm-roberta-base-faq-extractor | 23 | null | transformers | 7,935 | ---
license: apache-2.0
---
# xlm-roberta-base-faq-extractor |
hackathon-pln-es/bertin-roberta-base-finetuning-esnli | 22cc774f4b3c520dd8bf9262d1f569e8a05022d8 | 2022-04-04T01:45:21.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"es",
"dataset:hackathon-pln-es/nli-es",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | hackathon-pln-es | null | hackathon-pln-es/bertin-roberta-base-finetuning-esnli | 23 | 5 | sentence-transformers | 7,936 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language:
- es
datasets:
- hackathon-pln-es/nli-es
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---
# bertin-roberta-base-finetuning-esnli
This is a [sentence-transformers](https://www.SBERT.net) model trained on a
collection of NLI tasks for Spanish. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Based around the siamese networks approach from [this paper](https://arxiv.org/pdf/1908.10084.pdf).
<!--- Describe your model here -->
You can see a demo for this model [here](https://huggingface.co/spaces/hackathon-pln-es/Sentence-Embedding-Bertin).
You can find our other model, **paraphrase-spanish-distilroberta** [here](https://huggingface.co/hackathon-pln-es/paraphrase-spanish-distilroberta) and its demo [here](https://huggingface.co/spaces/hackathon-pln-es/Paraphrase-Bertin).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]
model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
model = AutoModel.from_pretrained('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Our model was evaluated on the task of Semantic Textual Similarity using the [SemEval-2015 Task](https://alt.qcri.org/semeval2015/task2/) for [Spanish](http://alt.qcri.org/semeval2015/task2/data/uploads/sts2015-es-test.zip). We measure the following metrics:
| | [BETO STS](https://huggingface.co/espejelomar/sentece-embeddings-BETO) | BERTIN STS (this model) | Relative improvement |
|-------------------:|---------:|-----------:|---------------------:|
| cosine_pearson | 0.609803 | 0.683188 | +12.03 |
| cosine_spearman | 0.528776 | 0.615916 | +16.48 |
| euclidean_pearson | 0.590613 | 0.672601 | +13.88 |
| euclidean_spearman | 0.526529 | 0.611539 | +16.15 |
| manhattan_pearson | 0.589108 | 0.672040 | +14.08 |
| manhattan_spearman | 0.525910 | 0.610517 | +16.09 |
| dot_pearson | 0.544078 | 0.600517 | +10.37 |
| dot_spearman | 0.460427 | 0.521260 | +13.21 |
## Training
The model was trained with the parameters:
**Dataset**
We used a collection of datasets of Natural Language Inference as training data:
- [ESXNLI](https://raw.githubusercontent.com/artetxem/esxnli/master/esxnli.tsv), only the part in spanish
- [SNLI](https://nlp.stanford.edu/projects/snli/), automatically translated
- [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/), automatically translated
The whole dataset used is available [here](https://huggingface.co/datasets/hackathon-pln-es/nli-es).
Here is the trick we used to increase the amount of training data:
```
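# Assumption: 'reader' is a csv.DictReader over the NLI file, and add_to_samples()
# accumulates (sentence1, sentence2, gold_label) triples into the training samples.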
for row in reader:
if row['language'] == 'es':
sent1 = row['sentence1'].strip()
sent2 = row['sentence2'].strip()
add_to_samples(sent1, sent2, row['gold_label'])
add_to_samples(sent2, sent1, row['gold_label']) #Also add the opposite
```
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader`
of length 1818 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 909,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Authors
[Anibal Pérez](https://huggingface.co/Anarpego),
[Emilio Tomás Ariza](https://huggingface.co/medardodt),
[Lautaro Gesuelli](https://huggingface.co/Lgesuelli) y
[Mauricio Mazuecos](https://huggingface.co/mmazuecos).
|
MMG/xlm-roberta-base-sa-spanish | 870b8ba260b012d063b0236ab3ed7a793be0e87b | 2022-03-31T11:36:53.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | MMG | null | MMG/xlm-roberta-base-sa-spanish | 23 | null | transformers | 7,937 | Entry not found |
TropicalJuice/Dialog-PeterGriffin | eb3b26f8789df202c0a56cc0d118049069f136a4 | 2022-04-04T18:25:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | TropicalJuice | null | TropicalJuice/Dialog-PeterGriffin | 23 | null | transformers | 7,938 | ---
tags:
- conversational
---
# Peter Griffin DialoGPT Model |
dapang/distilbert-base-uncased-finetuned-toxicity | ab381c64960388e848a38ad7f299623eead1ec9a | 2022-04-05T06:08:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dapang | null | dapang/distilbert-base-uncased-finetuned-toxicity | 23 | null | transformers | 7,939 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-toxicity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-toxicity
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0086
- Accuracy: 0.999
- F1: 0.9990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.589778712669143e-05
- train_batch_size: 400
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 20 | 0.0142 | 0.998 | 0.998 |
| No log | 2.0 | 40 | 0.0112 | 0.997 | 0.9970 |
| No log | 3.0 | 60 | 0.0088 | 0.999 | 0.9990 |
| No log | 4.0 | 80 | 0.0091 | 0.998 | 0.998 |
| No log | 5.0 | 100 | 0.0086 | 0.999 | 0.9990 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
|
BramVanroy/gbert-base-finetuned-cefr | 4c69c0ef7a5311ba742a95cb3a8deb3d9cb1d73b | 2022-07-26T11:41:51.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"dataset:merlin",
"dataset:disko",
"transformers",
"cefr",
"proficiency assessment",
"written text",
"license:mit",
"model-index"
] | text-classification | false | BramVanroy | null | BramVanroy/gbert-base-finetuned-cefr | 23 | 1 | transformers | 7,940 | ---
language:
- de
license: mit
tags:
- cefr
- proficiency assessment
- written text
datasets:
- merlin
- disko
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: gbert-base-finetuned-cefr
results:
- task:
type: text-classification
name: CEFR proficiency prediction
metrics:
- type: accuracy
value: 0.8297872340425532
- type: f1
value: 0.831662518023171
- type: precision
value: 0.8379770347855454
- type: qwk
value: 0.9497893050032643
- type: recall
value: 0.8297872340425532
widget:
- text: "Samstag der 13. Februar Hallo ! Ich habe eine Fragen . Ich habe Probleme hören “ eu ” und “ cht ” . Wie sage ich “ also ” und “ to bake ” auf Deutsche ? Ich bin nicht gut aber ich lerne . Ich studiere Kunstgeschichte . Ich liebe Kunst und Geschichte . Mathamatik und Deutsche ich schierig aber nützlich . Es regnet heute . Die Woche ist interessant ."
- text: "Lieber . Ingo . Wie gehts es Ich will 3 Zimmer Wohnung Mieten . Ich kann nicht so viel Miete bezahlen Ich hab kein Geld . Ich muss eine wohnung Mieten . Viel Danke - Maria"
- text: "Hallo Liebe Daniela , ich möchte am Samstag um 15.00 Uhr im Schwimmbad gehen . In Stadt X ist ein neue Schwimmbad und ich möchte da gehen . _ Diese Schwimmbad ist so groß und sehr schön . Möchtest du mit mir gehen ? Weiß du dass ich liebe schwimmen , aber zusammen ist besser . Nimm bitte ein Tüch , speciall Schuhe , ein Schampoo und etwas zu trinken . Ruft mir an oder schreibt wenn möchtest du gehen mit mir . Mit freundlichen Grüße Julia"
--- |
Davlan/afro-xlmr-base | bfba0ed43d950f9a58a83064b4f0e1d17e5362e1 | 2022-04-15T14:23:42.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2204.06487",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/afro-xlmr-base | 23 | 1 | transformers | 7,941 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-base
results: []
---
# afro-xlmr-base
AfroXLMR-base was created by MLM adaptation of the XLM-R-base model on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu), covering the major African language families, plus 3 high-resource languages (Arabic, French, and English).
## Eval results on MasakhaNER (F-score)
language| XLM-R-miniLM| XLM-R-base |XLM-R-large| afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini
-|-|-|-|-|-|-
amh |69.5|70.6|76.2|76.1|70.1|69.7
hau |74.5|89.5|90.5|91.2|91.4|87.7
ibo |81.9|84.8|84.1|87.4|86.6|83.5
kin |68.6|73.3|73.8|78.0|77.5|74.1
lug |64.7|79.7|81.6|82.9|83.2|77.4
luo |11.7|74.9|73.6|75.1|75.4|17.5
pcm |83.2|87.3|89.0|89.6|89.0|85.5
swa |86.3|87.4|89.4|88.6|88.7|86.0
wol |51.7|63.9|67.9|67.4|65.9|59.0
yor |72.0|78.3|78.9|82.1|81.3|75.1
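A minimal fill-mask sketch with the standard `transformers` pipeline (XLM-R-style models use `<mask>` as the mask token; the Swahili example sentence is illustrative):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-base")
print(unmasker("Nairobi ni mji mkuu wa <mask>."))
```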
### BibTeX entry and citation info
```
@misc{afro_maft,
doi = {10.48550/ARXIV.2204.06487},
url = {https://arxiv.org/abs/2204.06487},
author = {Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
cambridgeltl/magic_flickr30k | 4f5c4ca58c36d1f413a5f5aaa40f625273a821c7 | 2022-04-13T08:56:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/magic_flickr30k | 23 | null | transformers | 7,942 | Entry not found |
ChrisLiewJY/BERTweet-Hedge | ba5ff4bba3275436d75b0e4297b56f3cfecc4157 | 2022-04-30T10:39:56.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers",
"uncertainty-detection",
"social-media",
"license:mit"
] | text-classification | false | ChrisLiewJY | null | ChrisLiewJY/BERTweet-Hedge | 23 | 0 | transformers | 7,943 | ---
license: mit
language:
- en
tags:
- uncertainty-detection
- social-media
- text-classification
widget:
- text: "It seems like Bitcoin prices are heading into bearish territory."
example_title: "Hedge Detection (Positive - Label 1)"
- text: "Bitcoin prices have fallen by 42% in the last 30 days."
example_title: "Hedge Detection (Negative - Label 0)"
---
### Overview
Fine-tuned VinAI's BERTweet base model on the Wiki Weasel 2.0 corpus from the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) for hedge (linguistic uncertainty) detection in social media texts. The model was trained and optimised using Ray Tune's implementation of DeepMind's Population Based Training, with the arithmetic mean of accuracy and F1 as its evaluation metric.
### Labels
* LABEL_1 = Positive (Hedge is detected within text)
* LABEL_0 = Negative (No Hedges detected within text)
### <a name="models2"></a> Model Performance
Model | Accuracy | F1-Score | Accuracy & F1-Score
---|---|---|---
`BERTweet-Hedge` | 0.9680 | 0.8765 | 0.9222
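A minimal usage sketch with the standard `transformers` text-classification pipeline (tokenizer settings are left at the repository defaults; the example sentence is the positive widget example above):

```python
from transformers import pipeline

hedge_detector = pipeline("text-classification", model="ChrisLiewJY/BERTweet-Hedge")
print(hedge_detector("It seems like Bitcoin prices are heading into bearish territory."))
# expected output shape: [{'label': 'LABEL_1', 'score': ...}] where LABEL_1 = hedge detected
```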
|
KoichiYasuoka/roberta-base-serbian-upos | bcdb67409ecd45d042323980a6d602aa42ea258c | 2022-05-07T13:35:28.000Z | [
"pytorch",
"roberta",
"token-classification",
"sr",
"dataset:universal_dependencies",
"transformers",
"serbian",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-serbian-upos | 23 | null | transformers | 7,944 | ---
language:
- "sr"
tags:
- "serbian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "Да има сира и масла и моја би мати знала гибати гибаницу."
- text: "Da ima sira i masla i moja bi mati znala gibati gibanicu."
---
# roberta-base-serbian-upos
## Model Description
This is a RoBERTa model in Serbian (Cyrillic and Latin) for POS-tagging and dependency-parsing, derived from [roberta-base-serbian](https://huggingface.co/KoichiYasuoka/roberta-base-serbian). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-serbian-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-serbian-upos")
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-serbian-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Xiaoman/NER-for-female-names | 5f7578a2211ea522925e3f6adc0d9a3e3a3e1902 | 2022-05-13T11:43:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Xiaoman | null | Xiaoman/NER-for-female-names | 23 | null | transformers | 7,945 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NER-for-female-names
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-for-female-names
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 0.6371 |
| No log | 2.0 | 10 | 0.4213 |
| No log | 3.0 | 15 | 0.3227 |
| No log | 4.0 | 20 | 0.2867 |
| No log | 5.0 | 25 | 0.2606 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
Xiaoman/NER-CoNLL2003 | 157e5cd260c04136d3e17d1a15f9247852fb7485 | 2022-05-13T11:45:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Xiaoman | null | Xiaoman/NER-CoNLL2003 | 23 | null | transformers | 7,946 | Entry not found |
malay-huggingface/wav2vec2-xls-r-300m-mixed | 0600ae9fd207d8d188c2a25e03bd1a26a291ed22 | 2022-07-02T13:33:37.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_keras_callback",
"model-index"
] | automatic-speech-recognition | false | malay-huggingface | null | malay-huggingface/wav2vec2-xls-r-300m-mixed | 23 | 1 | transformers | 7,947 | ---
tags:
- generated_from_keras_callback
model-index:
- name: wav2vec2-xls-r-300m-mixed
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mixed
Fine-tuned https://huggingface.co/facebook/wav2vec2-xls-r-300m on https://github.com/huseinzol05/malaya-speech/tree/master/data/mixed-stt
**Update 2022-07-02, https://huggingface.co/mesolitica/wav2vec2-xls-r-300m-mixed slightly better accuracy**.
This model was finetuned on 3 languages,
1. Malay
2. Singlish
3. Mandarin
**This model was trained on a single Tesla V100 with 32 GB of VRAM, provided by https://keyreply.com/**.
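A minimal transcription sketch, assuming the standard `transformers` wav2vec 2.0 CTC classes and `soundfile` for audio loading; the audio path below is a placeholder and the input must be 16 kHz mono:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "malay-huggingface/wav2vec2-xls-r-300m-mixed"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sampling_rate = sf.read("sample.wav")  # placeholder path, 16 kHz mono audio
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```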
## Evaluation set
Evaluation set from https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt with sizes,
```
len(malay), len(singlish), len(mandarin)
-> (765, 3579, 614)
```
It achieves the following results on the evaluation set based on [evaluate-wav2vec2-xls-r-300m-mixed.ipynb](evaluate-wav2vec2-xls-r-300m-mixed.ipynb):
Mixed evaluation,
```
CER: 0.048555454439612775
WER: 0.14151468058308714
CER with LM: 0.03977501945111893
WER with LM: 0.09809135311921899
```
Malay evaluation,
```
CER: 0.05372605571018908
WER: 0.23714922876687583
CER with LM: 0.03508559320616622
WER with LM: 0.1294898148329521
```
Singlish evaluation,
```
CER: 0.0488366183589853
WER: 0.1294114484378467
CER with LM: 0.04119293317615
WER with LM: 0.09411106530063
```
Mandarin evaluation,
```
CER: 0.04047435404966954
WER: 0.09291050873816364
CER with LM: 0.037352703254831865
WER with LM: 0.08217217867571727
```
Language model from https://huggingface.co/huseinzol05/language-model-bahasa-manglish-combined |
Dizzykong/gpt2-medium-commands | 92ddf481c571555d1df8b730043b7da97e200bcf | 2022-05-19T22:45:13.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-medium-commands | 23 | null | transformers | 7,948 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-medium-commands
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-commands
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
pritam18/swadeshi_hindiwav2vec2asr | d7189be4d532299bf13a1d3d3ef5883201270ad8 | 2022-06-29T16:37:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | pritam18 | null | pritam18/swadeshi_hindiwav2vec2asr | 23 | null | transformers | 7,949 | swadeshi_hindiwav2vec2asr is a Hindi speech recognition model that is a fine-tuned version of the theainerd/Wav2Vec2-large-xlsr-hindi model. The model achieved a Word Error Rate of 0.738 when trained with 12 hours of MUCS data for 30 epochs with a batch size of 12. |
mehari/tig-roberta-base | 1e7f88b95b726c4db6ced7af5905597b308e4f44 | 2022-07-08T06:18:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mehari | null | mehari/tig-roberta-base | 23 | null | transformers | 7,950 | Entry not found |
FigoMe/sonnet_keyword_gen | f453014249e0c3cbca6c2e86daaa4a4cc45e3972 | 2022-05-24T23:32:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | FigoMe | null | FigoMe/sonnet_keyword_gen | 23 | null | transformers | 7,951 | Entry not found |
sbenel/emotion-distilbert | a012e44cd6c487e1e8215fd85e70c1349845cdee | 2022-07-09T16:34:13.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"emotion",
"license:apache-2.0"
] | text-classification | false | sbenel | null | sbenel/emotion-distilbert | 23 | null | transformers | 7,952 | ---
license: apache-2.0
language: en
tags:
- text-classification
- pytorch
- emotion
metrics:
- accuracy, F1 score
dataset:
- emotion
---
## Training Parameters
```
learning rate: 2e-5
epochs: 40
weight decay: 0.01
batch size: 16
```
## Metrics
```
accuracy: 0.93
macro-F1 (macro avg): 0.88
best epoch: 15
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
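A minimal usage sketch with the `transformers` text-classification pipeline (label names map to the emotion classes of the dataset and depend on this model's configuration):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sbenel/emotion-distilbert")
print(classifier("I am so happy to see you again!"))
```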
|
aware-ai/robust-wav2vec2-base-german | 404de64d9f4ba27555493d5fa464c9094054f780 | 2022-05-31T13:30:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aware-ai | null | aware-ai/robust-wav2vec2-base-german | 23 | null | transformers | 7,953 | Entry not found |
huggingtweets/botphilosophyq-philosophical_9-philosophy_life | 84659941cc7ba7ca59d36e6d7ed67410c2cee628 | 2022-05-31T12:56:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/botphilosophyq-philosophical_9-philosophy_life | 23 | null | transformers | 7,954 | ---
language: en
thumbnail: http://www.huggingtweets.com/botphilosophyq-philosophical_9-philosophy_life/1654001783159/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503378148544720896/cqXtOCzo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1454403230218080259/l2xRKFYN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1465751420146225152/REt6VnPb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Philosophy Quotes & Philosophy Quotes & philosophy for life</div>
<div style="text-align: center; font-size: 14px;">@botphilosophyq-philosophical_9-philosophy_life</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Philosophy Quotes & Philosophy Quotes & philosophy for life.
| Data | Philosophy Quotes | Philosophy Quotes | philosophy for life |
| --- | --- | --- | --- |
| Tweets downloaded | 1162 | 489 | 1175 |
| Retweets | 377 | 59 | 2 |
| Short tweets | 30 | 0 | 0 |
| Tweets kept | 755 | 430 | 1173 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cvz516e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @botphilosophyq-philosophical_9-philosophy_life's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13d841md) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13d841md/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/botphilosophyq-philosophical_9-philosophy_life')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
arize-ai/distilbert_reviews_with_language_drift | f3997bc7d78f2d4903e1b7a444132adeb77c8b2e | 2022-06-01T06:15:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:ecommerce_reviews_with_language_drift",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | arize-ai | null | arize-ai/distilbert_reviews_with_language_drift | 23 | null | transformers | 7,955 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ecommerce_reviews_with_language_drift
metrics:
- accuracy
- f1
model-index:
- name: distilbert_reviews_with_language_drift
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ecommerce_reviews_with_language_drift
type: ecommerce_reviews_with_language_drift
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.818
- name: F1
type: f1
value: 0.8167126877417763
widget:
- text: "Poor quality of fabric and ridiculously tight at chest. It's way too short."
example_title: "Negative"
- text: "One worked perfectly, but the other one has a slight leak and we end up with water underneath the filter."
example_title: "Neutral"
- text: "I liked the price most! Nothing to dislike here!"
example_title: "Positive"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_reviews_with_language_drift
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ecommerce_reviews_with_language_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4970
- Accuracy: 0.818
- F1: 0.8167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.593 | 1.0 | 500 | 0.4723 | 0.799 | 0.7976 |
| 0.3714 | 2.0 | 1000 | 0.4679 | 0.818 | 0.8177 |
| 0.2652 | 3.0 | 1500 | 0.4970 | 0.818 | 0.8167 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RUCAIBox/mvp-task-dialog | b2c5dc8fb36f4ef4d15ae085d3dc6b78d54ce896 | 2022-06-27T02:28:25.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mvp-task-dialog | 23 | 1 | transformers | 7,956 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the task dialog: Belief state [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example1"
- text: "Given the task dialog: Dialogue action [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example2"
- text: "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example3"
---
# MVP-task-dialog
The MVP-task-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-task-dialog is a prompt-based model in which MVP is further equipped with prompts pre-trained on labeled task-oriented dialogue system datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-task-dialog is specially designed for task-oriented tasks, such as MultiWOZ.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-task-dialog")
>>> inputs = tokenizer(
... "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['What date and time would you like to go?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
vortixhead/distilbert-base-uncased-finetuned-emotion | ebf52ef7323a83dc4dd67ac9a5eb795032e39a1c | 2022-07-14T12:00:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vortixhead | null | vortixhead/distilbert-base-uncased-finetuned-emotion | 23 | null | transformers | 7,957 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240758723346115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8278 | 1.0 | 250 | 0.3099 | 0.9055 | 0.9032 |
| 0.251 | 2.0 | 500 | 0.2140 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
eslamxm/arabert2arabert-finetuned-ar-xlsum | 955789ed12f1b24fbd6abb89d510e48238fbd49d | 2022-06-07T09:34:31.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"ar",
"arabert",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/arabert2arabert-finetuned-ar-xlsum | 23 | null | transformers | 7,958 | ---
tags:
- summarization
- ar
- encoder-decoder
- arabert
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: arabert2arabert-finetuned-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert2arabert-finetuned-ar-xlsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1557
- Rouge-1: 25.3
- Rouge-2: 10.46
- Rouge-l: 22.12
- Gen Len: 20.0
- Bertscore: 71.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-japanese-unidic-luw-upos | e750b897e815e2324a9fea5f266be88dd83ddcb4 | 2022-06-26T13:35:54.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-unidic-luw-upos | 23 | null | transformers | 7,959 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# deberta-base-japanese-unidic-luw-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from [deberta-base-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-unidic). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-japanese-unidic-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-base-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
jungealexander/distilbert-base-uncased-finetuned-go_emotions_20220608_1 | 997c80125fd925ac808ee63fc2ac0e7d1c8d58cd | 2022-06-08T20:14:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:go_emotions",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jungealexander | null | jungealexander/distilbert-base-uncased-finetuned-go_emotions_20220608_1 | 23 | null | transformers | 7,960 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-go_emotions_20220608_1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
args: simplified
metrics:
- name: F1
type: f1
value: 0.5575026333429091
- name: Accuracy
type: accuracy
value: 0.43641725027644673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-go_emotions_20220608_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0857
- F1: 0.5575
- Roc Auc: 0.7242
- Accuracy: 0.4364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.173 | 1.0 | 679 | 0.1074 | 0.4245 | 0.6455 | 0.2976 |
| 0.0989 | 2.0 | 1358 | 0.0903 | 0.5199 | 0.6974 | 0.3972 |
| 0.0865 | 3.0 | 2037 | 0.0868 | 0.5504 | 0.7180 | 0.4263 |
| 0.0806 | 4.0 | 2716 | 0.0860 | 0.5472 | 0.7160 | 0.4233 |
| 0.0771 | 5.0 | 3395 | 0.0857 | 0.5575 | 0.7242 | 0.4364 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
binay1999/distilbert-cybertexts-preprocessed | 4b3657434eae047989152e10c1547db361b06726 | 2022-06-12T23:04:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | binay1999 | null | binay1999/distilbert-cybertexts-preprocessed | 23 | null | transformers | 7,961 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-cybertexts-preprocessed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-cybertexts-preprocessed
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4031 | 1.0 | 17824 | 3.0932 |
| 2.2404 | 2.0 | 35648 | 3.0124 |
| 2.155 | 3.0 | 53472 | 2.9901 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
cotcode/wav2vec2-finetuned-ch-emotion-edu | 50785b0b1195118ca03949cc931bf134005cc44c | 2022-06-15T18:31:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | cotcode | null | cotcode/wav2vec2-finetuned-ch-emotion-edu | 23 | null | transformers | 7,962 | Entry not found |
Elijah629/DialoGPT-mrsanai | 36746e7bfc02c351d875379b88388e94a6d948e7 | 2022-06-17T00:43:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Elijah629 | null | Elijah629/DialoGPT-mrsanai | 23 | null | transformers | 7,963 | ---
tags:
- conversational
--- |
RJuro/Da-HyggeBERT | 042b1f41ef57e80138cebca6be82ae6403be18cb | 2022-06-24T11:09:39.000Z | [
"pytorch",
"bert",
"text-classification",
"da",
"dataset:go_emotions",
"transformers",
"danish",
"sentiment",
"Maltehb/danish-bert-botxo",
"Helsinki-NLP/opus-mt-en-da",
"go-emotion",
"Certainly",
"license:cc-by-4.0"
] | text-classification | false | RJuro | null | RJuro/Da-HyggeBERT | 23 | 2 | transformers | 7,964 | ---
language: da
tags:
- danish
- bert
- sentiment
- text-classification
- Maltehb/danish-bert-botxo
- Helsinki-NLP/opus-mt-en-da
- go-emotion
- Certainly
license: cc-by-4.0
datasets:
- go_emotions
metrics:
- Accuracy
widget:
- text: "Det er så sødt af dig at tænke på andre på den måde ved du det?"
- text: "Jeg vil gerne have en playstation."
- text: "Jeg elsker dig"
- text: "Hvordan håndterer jeg min irriterende nabo?"
---
# Danish-Bert-GoÆmotion
Danish Go-Emotions classifier. [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) (uncased) fine-tuned on a translation of the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da). Thus, performance is obviously dependent on the translation model.
## Training
- Translating the training data with MT: [Notebook](https://colab.research.google.com/github/RJuro/Da-HyggeBERT-finetuning/blob/main/HyggeBERT_translation_en_da.ipynb)
- Fine-tuning danish-bert-botxo: coming soon...
## Training Parameters:
```
Num examples = 189900
Num Epochs = 3
Train batch = 8
Eval batch = 8
Learning Rate = 3e-5
Warmup steps = 4273
Total optimization steps = 71125
```
## Loss
### Training loss

### Eval. loss
```
0.1178 (21100 examples)
```
## Using the model with `transformers`
Easiest use with `transformers` and `pipeline`:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model = AutoModelForSequenceClassification.from_pretrained('RJuro/Da-HyggeBERT')
tokenizer = AutoTokenizer.from_pretrained('RJuro/Da-HyggeBERT')
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
classifier('jeg elsker dig')
```
`[{'label': 'kærlighed', 'score': 0.9634820818901062}]`
## Using the model with `simpletransformers`
```python
from simpletransformers.classification import MultiLabelClassificationModel
model = MultiLabelClassificationModel('bert', 'RJuro/Da-HyggeBERT')
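# 'df' is assumed to be a pandas DataFrame with a 'text' column of Danish sentences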
predictions, raw_outputs = model.predict(df['text'])
``` |
autoevaluate/image-multi-class-classification | 2d124b482e1f813185e62fa5b09882ea81fcb74a | 2022-06-21T14:29:00.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:mnist",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | autoevaluate | null | autoevaluate/image-multi-class-classification | 23 | null | transformers | 7,965 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: image-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9833333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
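As a quick smoke test, the checkpoint can be loaded with the `image-classification` pipeline. The image path below is a placeholder, and the predicted labels follow whatever label mapping the checkpoint defines:
```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint through the image-classification pipeline.
classifier = pipeline("image-classification", model="autoevaluate/image-multi-class-classification")

# Classify a single MNIST-style digit image (placeholder file path).
predictions = classifier("digit.png")
print(predictions)  # list of {"label": ..., "score": ...} dicts, best guess first
```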
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3743 | 1.0 | 422 | 0.0556 | 0.9833 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
danieleV9H/wav2vec2-base-ft-cv3-v3 | 6357081470022bf7d686a6b799b0510e9996e796 | 2022-07-02T08:18:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | danieleV9H | null | danieleV9H/wav2vec2-base-ft-cv3-v3 | 23 | null | transformers | 7,966 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-ft-cv3-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ft-cv3-v3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the "mozilla-foundation/common_voice_3_0" English dataset: the "train" and "validation" splits are used for training, while the "test" split is used for validation.
It achieves the following results on the evaluation set:
- Loss: 0.5787
- Wer: 0.2470
## Model description
More information needed
## Intended uses & limitations
More information needed
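A minimal transcription sketch with the `automatic-speech-recognition` pipeline; the audio path is a placeholder and should point to 16 kHz English speech:
```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint for CTC-based transcription.
asr = pipeline("automatic-speech-recognition", model="danieleV9H/wav2vec2-base-ft-cv3-v3")

# Transcribe a local audio file (placeholder path, 16 kHz mono recommended).
print(asr("sample.wav")["text"])
```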
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5935 | 0.1 | 500 | 3.0085 | 1.0 |
| 1.6296 | 0.21 | 1000 | 1.0879 | 0.5895 |
| 0.7154 | 0.31 | 1500 | 0.8224 | 0.4839 |
| 0.6387 | 0.42 | 2000 | 0.7290 | 0.4302 |
| 0.5322 | 0.52 | 2500 | 0.6864 | 0.4044 |
| 0.497 | 0.63 | 3000 | 0.6294 | 0.3746 |
| 0.4659 | 0.73 | 3500 | 0.6388 | 0.3745 |
| 0.4452 | 0.84 | 4000 | 0.6122 | 0.3570 |
| 0.4356 | 0.94 | 4500 | 0.5770 | 0.3443 |
| 0.3976 | 1.05 | 5000 | 0.6145 | 0.3296 |
| 0.3767 | 1.15 | 5500 | 0.6099 | 0.3325 |
| 0.3704 | 1.25 | 6000 | 0.5998 | 0.3263 |
| 0.3541 | 1.36 | 6500 | 0.6070 | 0.3250 |
| 0.3592 | 1.46 | 7000 | 0.6076 | 0.3352 |
| 0.3508 | 1.57 | 7500 | 0.5712 | 0.3239 |
| 0.3437 | 1.67 | 8000 | 0.5729 | 0.3202 |
| 0.352 | 1.78 | 8500 | 0.5465 | 0.3100 |
| 0.34 | 1.88 | 9000 | 0.5418 | 0.3059 |
| 0.4086 | 1.99 | 9500 | 0.5189 | 0.3053 |
| 0.2968 | 2.09 | 10000 | 0.5373 | 0.3076 |
| 0.2968 | 2.2 | 10500 | 0.5602 | 0.3061 |
| 0.2956 | 2.3 | 11000 | 0.5651 | 0.3051 |
| 0.2863 | 2.41 | 11500 | 0.5476 | 0.2982 |
| 0.2852 | 2.51 | 12000 | 0.5579 | 0.2954 |
| 0.292 | 2.61 | 12500 | 0.5451 | 0.2953 |
| 0.2877 | 2.72 | 13000 | 0.5468 | 0.2905 |
| 0.285 | 2.82 | 13500 | 0.5283 | 0.2908 |
| 0.2872 | 2.93 | 14000 | 0.5240 | 0.2867 |
| 0.3286 | 3.03 | 14500 | 0.5078 | 0.2846 |
| 0.2526 | 3.14 | 15000 | 0.5373 | 0.2836 |
| 0.2494 | 3.24 | 15500 | 0.5566 | 0.2861 |
| 0.2534 | 3.35 | 16000 | 0.5378 | 0.2859 |
| 0.2435 | 3.45 | 16500 | 0.5225 | 0.2813 |
| 0.3144 | 3.56 | 17000 | 0.5203 | 0.2808 |
| 0.2501 | 3.66 | 17500 | 0.5176 | 0.2785 |
| 0.2469 | 3.76 | 18000 | 0.5022 | 0.2795 |
| 0.242 | 3.87 | 18500 | 0.5228 | 0.2757 |
| 0.242 | 3.97 | 19000 | 0.5024 | 0.2788 |
| 0.2205 | 4.08 | 19500 | 0.5318 | 0.2729 |
| 0.2149 | 4.18 | 20000 | 0.5492 | 0.2763 |
| 0.2186 | 4.29 | 20500 | 0.5599 | 0.2769 |
| 0.2191 | 4.39 | 21000 | 0.5493 | 0.2695 |
| 0.218 | 4.5 | 21500 | 0.5385 | 0.2709 |
| 0.2046 | 4.6 | 22000 | 0.5326 | 0.2718 |
| 0.2064 | 4.71 | 22500 | 0.5591 | 0.2725 |
| 0.2066 | 4.81 | 23000 | 0.5283 | 0.2700 |
| 0.2102 | 4.92 | 23500 | 0.5456 | 0.2713 |
| 0.3345 | 5.02 | 24000 | 0.5474 | 0.2698 |
| 0.1891 | 5.12 | 24500 | 0.5466 | 0.2672 |
| 0.1954 | 5.23 | 25000 | 0.5691 | 0.2731 |
| 0.1971 | 5.33 | 25500 | 0.5595 | 0.2741 |
| 0.1995 | 5.44 | 26000 | 0.5609 | 0.2716 |
| 0.1911 | 5.54 | 26500 | 0.5513 | 0.2684 |
| 0.1954 | 5.65 | 27000 | 0.5282 | 0.2683 |
| 0.193 | 5.75 | 27500 | 0.5460 | 0.2644 |
| 0.1974 | 5.86 | 28000 | 0.5415 | 0.2650 |
| 0.1947 | 5.96 | 28500 | 0.5227 | 0.2656 |
| 0.1836 | 6.07 | 29000 | 0.5361 | 0.2743 |
| 0.1741 | 6.17 | 29500 | 0.5637 | 0.2649 |
| 0.1776 | 6.27 | 30000 | 0.5705 | 0.2680 |
| 0.1747 | 6.38 | 30500 | 0.5587 | 0.2667 |
| 0.1761 | 6.48 | 31000 | 0.5480 | 0.2683 |
| 0.1715 | 6.59 | 31500 | 0.5547 | 0.2627 |
| 0.2424 | 6.69 | 32000 | 0.5254 | 0.2610 |
| 0.1756 | 6.8 | 32500 | 0.5301 | 0.2633 |
| 0.1761 | 6.9 | 33000 | 0.5267 | 0.2658 |
| 0.1751 | 7.01 | 33500 | 0.5611 | 0.2677 |
| 0.1653 | 7.11 | 34000 | 0.5617 | 0.2663 |
| 0.1591 | 7.22 | 34500 | 0.5435 | 0.2642 |
| 0.1559 | 7.32 | 35000 | 0.5608 | 0.2611 |
| 0.1604 | 7.43 | 35500 | 0.5477 | 0.2611 |
| 0.162 | 7.53 | 36000 | 0.5257 | 0.2559 |
| 0.1579 | 7.63 | 36500 | 0.5398 | 0.2570 |
| 0.162 | 7.74 | 37000 | 0.5566 | 0.2605 |
| 0.2351 | 7.84 | 37500 | 0.5371 | 0.2564 |
| 0.1566 | 7.95 | 38000 | 0.5507 | 0.2565 |
| 0.1515 | 8.05 | 38500 | 0.5640 | 0.2544 |
| 0.1459 | 8.16 | 39000 | 0.5739 | 0.2523 |
| 0.1463 | 8.26 | 39500 | 0.5596 | 0.2522 |
| 0.1466 | 8.37 | 40000 | 0.5522 | 0.2537 |
| 0.2372 | 8.47 | 40500 | 0.5567 | 0.2520 |
| 0.1488 | 8.58 | 41000 | 0.5546 | 0.2506 |
| 0.1492 | 8.68 | 41500 | 0.5533 | 0.2518 |
| 0.1454 | 8.78 | 42000 | 0.5488 | 0.2508 |
| 0.148 | 8.89 | 42500 | 0.5635 | 0.2526 |
| 0.1424 | 8.99 | 43000 | 0.5513 | 0.2509 |
| 0.1356 | 9.1 | 43500 | 0.5534 | 0.2527 |
| 0.1346 | 9.2 | 44000 | 0.5735 | 0.2497 |
| 0.1346 | 9.31 | 44500 | 0.5710 | 0.2489 |
| 0.1401 | 9.41 | 45000 | 0.5561 | 0.2491 |
| 0.2212 | 9.52 | 45500 | 0.5564 | 0.2482 |
| 0.1369 | 9.62 | 46000 | 0.5658 | 0.2484 |
| 0.1323 | 9.73 | 46500 | 0.5582 | 0.2495 |
| 0.1369 | 9.83 | 47000 | 0.5560 | 0.2503 |
| 0.1368 | 9.94 | 47500 | 0.5552 | 0.2489 |
| 0.1333 | 10.04 | 48000 | 0.5953 | 0.2491 |
| 0.1305 | 10.14 | 48500 | 0.5818 | 0.2520 |
| 0.1316 | 10.25 | 49000 | 0.5773 | 0.2506 |
| 0.1334 | 10.35 | 49500 | 0.5882 | 0.2485 |
| 0.1351 | 10.46 | 50000 | 0.5750 | 0.2483 |
| 0.1337 | 10.56 | 50500 | 0.5910 | 0.2486 |
| 0.2241 | 10.67 | 51000 | 0.5732 | 0.2491 |
| 0.1327 | 10.77 | 51500 | 0.5839 | 0.2493 |
| 0.1364 | 10.88 | 52000 | 0.5724 | 0.2464 |
| 0.1305 | 10.98 | 52500 | 0.5758 | 0.2468 |
| 0.128 | 11.09 | 53000 | 0.5811 | 0.2482 |
| 0.1267 | 11.19 | 53500 | 0.5903 | 0.2483 |
| 0.1262 | 11.29 | 54000 | 0.5792 | 0.2483 |
| 0.1291 | 11.4 | 54500 | 0.5735 | 0.2497 |
| 0.1228 | 11.5 | 55000 | 0.5920 | 0.2494 |
| 0.1249 | 11.61 | 55500 | 0.5907 | 0.2488 |
| 0.1266 | 11.71 | 56000 | 0.5786 | 0.2486 |
| 0.1235 | 11.82 | 56500 | 0.5790 | 0.2473 |
| 0.1243 | 11.92 | 57000 | 0.5787 | 0.2470 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
webshop/il-rl-choice-bert-image_1 | 1a0f94f9ca9a153dc67b5ad617298fead3f60f67 | 2022-06-30T06:48:52.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | webshop | null | webshop/il-rl-choice-bert-image_1 | 23 | null | transformers | 7,967 | Entry not found |
alexjercan/codet5-base-masked-buggy-code-repair | 35de7413eaf2c57bd36ca0f1364b5edc51d4f8e4 | 2022-06-30T13:06:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alexjercan | null | alexjercan/codet5-base-masked-buggy-code-repair | 23 | null | transformers | 7,968 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: codet5-base-masked-buggy-code-repair
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-base-masked-buggy-code-repair
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2876
- Precision: 0.1990
- Recall: 0.3
- F1: 0.2320
- Accuracy: 0.3
## Model description
More information needed
## Intended uses & limitations
More information needed
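As a rough sketch, the checkpoint can be driven through the `text2text-generation` pipeline. The buggy snippet below and the exact input format (how buggy spans are masked) are assumptions, since the fine-tuning data format is not documented here:
```python
from transformers import pipeline

# Load the fine-tuned CodeT5 checkpoint as a sequence-to-sequence generator.
fixer = pipeline("text2text-generation", model="alexjercan/codet5-base-masked-buggy-code-repair")

# Hypothetical buggy snippet; the expected masking convention is an assumption.
buggy_code = "def add(a, b):\n    return a - b"
print(fixer(buggy_code, max_length=64)[0]["generated_text"])
```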
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
arize-ai/XLM-RoBERTa-xtreme-en-token-drift | c03c6ec259ffb6e8407b49d0d3323414eac8f7ff | 2022-07-01T01:48:49.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme_en_token_drift",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | arize-ai | null | arize-ai/XLM-RoBERTa-xtreme-en-token-drift | 23 | null | transformers | 7,969 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme_en_token_drift
metrics:
- accuracy
- f1
widget:
- text: "My name is Julia, I study at Imperial College, in London"
example_title: "Example 1"
- text: "My name is Sarah and I live in Paris"
example_title: "Example 2"
- text: "My name is Clara and I live in Berkeley, California"
example_title: "Example 3"
model-index:
- name: XLM-RoBERTa-xtreme-en-token-drift
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme_en_token_drift
type: xtreme_en_token_drift
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.908855961405927
- name: F1
type: f1
value: 0.76126567683807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-xtreme-en-token-drift
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme_en_token_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2802
- Accuracy: 0.9089
- F1: 0.7613
## Model description
More information needed
## Intended uses & limitations
More information needed
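A minimal tagging sketch with the `token-classification` pipeline, reusing one of the widget sentences above; `aggregation_strategy="simple"` merges word pieces into whole entity spans:
```python
from transformers import pipeline

# Load the fine-tuned XLM-RoBERTa checkpoint for token classification.
tagger = pipeline(
    "token-classification",
    model="arize-ai/XLM-RoBERTa-xtreme-en-token-drift",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(tagger("My name is Sarah and I live in Paris"))
```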
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6398 | 1.0 | 161 | 0.3421 | 0.8973 | 0.7111 |
| 0.3268 | 2.0 | 322 | 0.2846 | 0.9097 | 0.7611 |
| 0.2701 | 3.0 | 483 | 0.2802 | 0.9089 | 0.7613 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
duchung17/wav2vec2-base-timit-demo-google-colab | 7e40948aa7e77ed2fc8d447370e0ed6f4ed7d7f8 | 2022-07-05T15:24:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | duchung17 | null | duchung17/wav2vec2-base-timit-demo-google-colab | 23 | null | transformers | 7,970 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4049
- Wer: 0.3556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.7319 | 1.0 | 500 | 1.3558 | 0.8890 |
| 0.7826 | 2.01 | 1000 | 0.5655 | 0.5398 |
| 0.4157 | 3.01 | 1500 | 0.4692 | 0.4682 |
| 0.2722 | 4.02 | 2000 | 0.4285 | 0.4193 |
| 0.2094 | 5.02 | 2500 | 0.4170 | 0.3949 |
| 0.1682 | 6.02 | 3000 | 0.3895 | 0.3751 |
| 0.1295 | 7.03 | 3500 | 0.3943 | 0.3628 |
| 0.1064 | 8.03 | 4000 | 0.4198 | 0.3648 |
| 0.0869 | 9.04 | 4500 | 0.4049 | 0.3556 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_aeslc_4837 | 7c6aee9d1f927a99bdd79aaaa5e8165c188e3565 | 2022-07-07T15:03:46.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_4837 | 23 | null | transformers | 7,971 | Entry not found |
Yehor/wav2vec2-xls-r-300m-uk-with-news-lm | 50b53646bf1612173993b1e8f8395fe5a2f8a207 | 2022-07-30T07:00:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"transformers",
"license:cc-by-nc-sa-4.0"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-300m-uk-with-news-lm | 23 | null | transformers | 7,972 | ---
language:
- uk
license: "cc-by-nc-sa-4.0"
datasets:
- mozilla-foundation/common_voice_10_0
---
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model's vocabulary includes apostrophes and hyphens.
The bundled language model is a 3-gram model.
Attribution for the dataset used to build the language model:
- Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
Metrics:
| Dataset | CER | WER |
|-|-|-|
| CV7 (no LM) | 0.0432 | 0.2288 |
| CV7 (with LM) | 0.0251 | 0.118 |
| CV10 (no LM) | 0.0412 | 0.2206 |
| CV10 (with LM) | 0.023 | 0.1081 |
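A minimal transcription sketch; decoding with the bundled 3-gram language model assumes `pyctcdecode` and `kenlm` are installed, otherwise the pipeline falls back to plain CTC decoding. The audio path is a placeholder for 16 kHz Ukrainian speech:
```python
from transformers import pipeline

# Load the Ukrainian wav2vec2 checkpoint; the n-gram LM is used automatically
# when pyctcdecode and kenlm are available in the environment.
asr = pipeline("automatic-speech-recognition", model="Yehor/wav2vec2-xls-r-300m-uk-with-news-lm")

print(asr("audio_uk.wav")["text"])  # placeholder path to a 16 kHz recording
```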
|
xzhang/distilgpt2-finetuned-spam | 5f3e53101bb089ed6c8af7929d72594fe8e9b0b6 | 2022-07-03T19:09:37.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | xzhang | null | xzhang/distilgpt2-finetuned-spam | 23 | null | transformers | 7,973 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-spam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-spam
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1656
## Model description
More information needed
## Intended uses & limitations
More information needed
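A minimal sampling sketch with the `text-generation` pipeline; the prompt is invented for illustration and the generation parameters are arbitrary:
```python
from transformers import pipeline

# Load the fine-tuned distilgpt2 checkpoint for free-form generation.
generator = pipeline("text-generation", model="xzhang/distilgpt2-finetuned-spam")

# Hypothetical spam-style prompt; sampling settings are illustrative only.
prompt = "Congratulations! You have been selected to receive"
print(generator(prompt, max_length=40, do_sample=True)[0]["generated_text"])
```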
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 99 | 5.3140 |
| No log | 2.0 | 198 | 5.1952 |
| No log | 3.0 | 297 | 5.1656 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ArnavL/roberta-reviews-imdb-0 | c0139a78c047a496bb58b3aed9751fa215f973d3 | 2022-07-09T19:01:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ArnavL | null | ArnavL/roberta-reviews-imdb-0 | 23 | null | transformers | 7,974 | Entry not found |
dmrau/bow-bert | 5c0cce3298d0b5b84d05a5ff0186a978994ebd1a | 2022-07-12T12:50:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | dmrau | null | dmrau/bow-bert | 23 | null | transformers | 7,975 | ---
license: afl-3.0
---
<strong>Example of how to load and use BOW-BERT:</strong>
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# load model
model = AutoModelForSequenceClassification.from_pretrained('dmrau/bow-bert')
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# tokenize query and passage and concatenate them
inp = tokenizer(['this is a query','query a is this'], ['this is a passage', 'passage a is this'], return_tensors='pt')
# get estimated score
print('score', model(**inp).logits[:, 1])
### outputs identical scores for different
### word orders as the model is order invariant:
# scores: [-2.9463, -2.9463]
```
<strong>Cite us:</strong>
```
@article{rau2022role,
title={The Role of Complex NLP in Transformers for Text Ranking?},
author={Rau, David and Kamps, Jaap},
journal={arXiv preprint arXiv:2207.02522},
year={2022}
}
```
|
simecek/DNADebertaBPE30k | bb160690d3ac13a6dd4a53d2448f0c9e7561442f | 2022-07-15T06:45:23.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADebertaBPE30k | 23 | null | transformers | 7,976 | ---
tags:
- generated_from_trainer
model-index:
- name: DNADebertaBPE30k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNADebertaBPE30k
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.1519
- eval_runtime: 308.5062
- eval_samples_per_second: 337.384
- eval_steps_per_second: 21.089
- epoch: 7.22
- step: 105695
## Model description
More information needed
## Intended uses & limitations
More information needed
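A minimal masked-prediction sketch; the DNA string is made up, and the tokenizer's own mask token is used so no assumption is made about its literal form:
```python
from transformers import pipeline

# Load the DNA DeBERTa checkpoint for masked-token prediction.
unmasker = pipeline("fill-mask", model="simecek/DNADebertaBPE30k")

# Build an input around the tokenizer's mask token (example sequence is invented).
sequence = "ATGGCGTACGT" + unmasker.tokenizer.mask_token + "GATTACAGGCT"
print(unmasker(sequence)[:3])  # top-3 candidate completions
```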
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
khosseini/bert_1760_1900 | 6c8912e1c770f9d8e46aef218d42433265751678 | 2022-07-18T09:30:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | khosseini | null | khosseini/bert_1760_1900 | 23 | null | transformers | 7,977 | # Neural Language Models for Nineteenth-Century English: bert_1760_1900
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1760-1900 and comprised of ~5.1 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
khosseini/bert_1850_1875 | 0a01a6b22910cd8b12d79725b3004387c58377ea | 2022-07-18T09:33:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | khosseini | null | khosseini/bert_1850_1875 | 23 | null | transformers | 7,978 | # Neural Language Models for Nineteenth-Century English: bert_1850_1875
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1850-1875 and comprised of ~1.3 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
rosicast/hubert-large-ll60k-korean-zeroth-jamo | f71d54554bf42ebadc39d62b7cc25ba289a670c6 | 2022-07-25T19:40:51.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rosicast | null | rosicast/hubert-large-ll60k-korean-zeroth-jamo | 23 | null | transformers | 7,979 | Entry not found |
google/ddpm-ema-celebahq-256 | 8b7b4bc06bd63d536e5b50a81ed73c1c7fdb2067 | 2022-07-21T15:00:38.000Z | [
"diffusers",
"arxiv:2006.11239",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ddpm-ema-celebahq-256 | 23 | null | diffusers | 7,980 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-ema-celebahq-256"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm()["sample"]
# save image
image[0].save("ddpm_generated_image.png")
```
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) # <- TODO(PVP) add link
## Samples
1. 
2. 
3. 
4.  |
Muennighoff/bloom-tiny-random | a0289b14c88d36f6ad7c4595443c1c5f102a18e5 | 2022-07-21T08:44:10.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"eng",
"transformers",
"integration",
"text-generation"
] | text-generation | false | Muennighoff | null | Muennighoff/bloom-tiny-random | 23 | null | transformers | 7,981 | ---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests |
zhenglianchi/unAPI-train-model | 243d11f03b21be5928260bc631f28b867a5acf3d | 2022-07-22T07:09:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | zhenglianchi | null | zhenglianchi/unAPI-train-model | 23 | null | transformers | 7,982 | Entry not found |
tattle-admin/july22-xlmtwtroberta-da-multi | bafb0c6e58d99a0e23eaacd40542dd73acaba48c | 2022-07-22T08:08:52.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | tattle-admin | null | tattle-admin/july22-xlmtwtroberta-da-multi | 23 | null | transformers | 7,983 | Entry not found |
SIMAS-UN/blaming_government | ea64090a47a6b4eca351c4848619122976456d6c | 2022-07-24T03:58:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | SIMAS-UN | null | SIMAS-UN/blaming_government | 23 | null | transformers | 7,984 | Entry not found |
weijiahaha/t5-small-medicalnews-summarization | 92a45ada8f2e9d504b8dfef36755dbbb801070ac | 2022-07-27T10:00:34.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:billsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | weijiahaha | null | weijiahaha/t5-small-medicalnews-summarization | 23 | null | transformers | 7,985 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
model-index:
- name: t5-small-medicalnews-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-medicalnews-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
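A minimal summarization sketch; the article text is a placeholder, and whether a `summarize:` prefix is required depends on how the checkpoint inherits t5-small's task settings, which is an assumption here:
```python
from transformers import pipeline

# Load the fine-tuned t5-small checkpoint for abstractive summarization.
summarizer = pipeline("summarization", model="weijiahaha/t5-small-medicalnews-summarization")

article = "A new clinical trial reports improved outcomes for patients ..."  # placeholder article text
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```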
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 62 | 3.1698 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GeniusVoice/mmarco-mMiniLMv2-L4-H384-v1-distilled | becacf7b93be80205c06d637570cef17f2eb7b20 | 2022-07-27T13:42:43.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | GeniusVoice | null | GeniusVoice/mmarco-mMiniLMv2-L4-H384-v1-distilled | 23 | null | transformers | 7,986 | Entry not found |
Alaeddin/convbert-base-turkish-ner-cased | 7b931e17bb65794b696b8d761111815d38311fab | 2021-04-13T20:20:58.000Z | [
"pytorch",
"convbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Alaeddin | null | Alaeddin/convbert-base-turkish-ner-cased | 22 | null | transformers | 7,987 | |
ArBert/bert-base-uncased-finetuned-ner | 9994b81a86d4e0c1bb1f9a7c473fa1599d5261de | 2022-02-09T10:46:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/bert-base-uncased-finetuned-ner | 22 | null | transformers | 7,988 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0905
- Precision: 0.9068
- Recall: 0.9200
- F1: 0.9133
- Accuracy: 0.9787
## Model description
More information needed
## Intended uses & limitations
More information needed
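A minimal tagging sketch; the entity labels in the output follow whatever scheme the (unspecified) fine-tuning dataset used, and the sentence is invented for illustration:
```python
from transformers import pipeline

# Load the fine-tuned BERT checkpoint for named-entity recognition.
ner = pipeline(
    "token-classification",
    model="ArBert/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```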
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1266 | 1.0 | 1123 | 0.0952 | 0.8939 | 0.8869 | 0.8904 | 0.9742 |
| 0.0741 | 2.0 | 2246 | 0.0866 | 0.8936 | 0.9247 | 0.9089 | 0.9774 |
| 0.0496 | 3.0 | 3369 | 0.0905 | 0.9068 | 0.9200 | 0.9133 | 0.9787 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
BSC-TeMU/roberta-base-bne-capitel-pos | 1ec726f584ea0e8a76c61e5fa53983138e1e2956 | 2021-10-21T10:29:55.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | BSC-TeMU | null | BSC-TeMU/roberta-base-bne-capitel-pos | 22 | 3 | transformers | 7,989 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "pos"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
widget:
- text: "Festival de San Sebastián: Johnny Depp recibirá el premio Donostia en pleno rifirrafe judicial con Amber Heard"
- text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
- text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."
- text: "El Tribunal Superior de Justicia se pronunció ayer: \"Hay base legal dentro del marco jurídico actual\"."
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-pos
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
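A minimal tagging sketch with the `token-classification` pipeline, reusing one of the example sentences above:
```python
from transformers import pipeline

# Load the CAPITEL POS checkpoint for Spanish part-of-speech tagging.
pos = pipeline("token-classification", model="BSC-TeMU/roberta-base-bne-capitel-pos")

print(pos("El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."))
```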
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9846 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BlightZz/DialoGPT-medium-Kurisu | f2f0da1675ee4091bc5f31f06adbc763b28d5a8c | 2021-07-01T22:12:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BlightZz | null | BlightZz/DialoGPT-medium-Kurisu | 22 | 1 | transformers | 7,990 | ---
tags:
- conversational
---
# A new medium model based on the character Makise Kurisu from Steins;Gate
Still has some issues that were present in the previous model, for example, mixing lines from other characters.
If you have any questions, feel free to ask me on discord: BlightZz#1169
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | fc770b520d1075a7105343806d6079fdde0a8c30 | 2021-10-18T10:13:34.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | 22 | null | transformers | 7,991 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'شلونك ؟ شخبارك ؟'
---
# CAMeLBERT-CA POS-GLF Model
## Model description
**CAMeLBERT-CA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'noun', 'score': 0.99572617, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'noun', 'score': 0.9411187, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999661, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.99286526, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9983397, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9609381, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999668, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | b6554a2895d68987fdde3eaa4bc9857ad8c96293 | 2021-10-18T10:16:30.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | 22 | null | transformers | 7,992 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'شلونك ؟ شخبارك ؟'
---
# CAMeLBERT-Mix POS-GLF Model
## Model description
**CAMeLBERT-Mix POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset .
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'pron_interrog', 'score': 0.82657206, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.9771731, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999568, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9977217, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99993783, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.5309442, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999575, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CNT-UPenn/RoBERTa_for_seizureFrequency_QA | 2d3c49cfd9bb86df0140738823349d5863c500f5 | 2022-03-02T19:02:06.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CNT-UPenn | null | CNT-UPenn/RoBERTa_for_seizureFrequency_QA | 22 | null | transformers | 7,993 | RoBERTa-base with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022;, ocac018, https://doi.org/10.1093/jamia/ocac018
RoBERTa_for_seizureFrequency_QA performs extractive question answering to identify a patient's seizure freedom and/or date of last seizure using the HPI and/or Interval History paragraphs from a medical note. |
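A minimal extractive QA sketch; the question and the clinic-note snippet are invented for illustration and are not taken from the study data:
```python
from transformers import pipeline

# Load the fine-tuned RoBERTa checkpoint for extractive question answering.
qa = pipeline("question-answering", model="CNT-UPenn/RoBERTa_for_seizureFrequency_QA")

result = qa(
    question="How often does the patient have seizures?",                          # hypothetical question
    context="The patient reports roughly two seizures per month since January.",   # hypothetical note text
)
print(result["answer"], result["score"])
```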
Cameron/BERT-jigsaw-identityhate | f4e3415be9e7476886fabbf2b3f0bede3ce55e9f | 2021-05-18T17:27:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Cameron | null | Cameron/BERT-jigsaw-identityhate | 22 | null | transformers | 7,994 | Entry not found |
Davlan/xlm-roberta-base-finetuned-swahili | cef3c7fa4f9a681d2a05df92ae8167d7353fef93 | 2021-05-28T14:12:32.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"sw",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-swahili | 22 | null | transformers | 7,995 | Hugging Face's logo
---
language: sw
datasets:
---
# xlm-roberta-base-finetuned-swahili
## Model description
**xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa',
'score': 0.5077782273292542,
'token': 190096,
'token_str': 'Ufaransa'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.3657738268375397,
'token': 7270,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa',
'score': 0.01592041552066803,
'token': 176392,
'token_str': 'Gabon'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.010881908237934113,
'token': 9942,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa',
'score': 0.009554869495332241,
'token': 185918,
'token_str': 'Marseille'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | sw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.55 | 89.46
### BibTeX entry and citation info
By David Adelani
```
```
|
Geotrend/bert-base-ro-cased | 3b7606844688c0dab16012991cc71502e88d0204 | 2021-05-18T20:08:29.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ro",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-ro-cased | 22 | null | transformers | 7,996 | ---
language: ro
datasets: wikipedia
license: apache-2.0
---
# bert-base-ro-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ro-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-en-gmq | 31425dc86abe19cea6d8cca4490aed02cd0d9260 | 2021-01-18T08:08:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"da",
"nb",
"sv",
"is",
"nn",
"fo",
"gmq",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gmq | 22 | 1 | transformers | 7,997 | ---
language:
- en
- da
- nb
- sv
- is
- nn
- fo
- gmq
tags:
- translation
license: apache-2.0
---
### eng-gmq
* source group: English
* target group: North Germanic languages
* OPUS readme: [eng-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md)
* model: transformer
* source language(s): eng
* target language(s): dan fao isl nno nob nob_Hebr non_Latn swe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.eval.txt)
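A minimal translation sketch showing the required `>>id<<` target-language token (here `>>dan<<` for Danish); the English sentence is a placeholder:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-gmq"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>dan<< prefix selects Danish among the model's target languages.
src_texts = [">>dan<< This is a test sentence."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```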
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-dan.eng.dan | 57.7 | 0.724 |
| Tatoeba-test.eng-fao.eng.fao | 9.2 | 0.322 |
| Tatoeba-test.eng-isl.eng.isl | 23.8 | 0.506 |
| Tatoeba-test.eng.multi | 52.8 | 0.688 |
| Tatoeba-test.eng-non.eng.non | 0.7 | 0.196 |
| Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.678 |
| Tatoeba-test.eng-swe.eng.swe | 57.8 | 0.717 |
### System Info:
- hf_name: eng-gmq
- source_languages: eng
- target_languages: gmq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
- src_constituents: {'eng'}
- tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gmq
- short_pair: en-gmq
- chrF2_score: 0.688
- bleu: 52.8
- brevity_penalty: 0.973
- ref_len: 71881.0
- src_name: English
- tgt_name: North Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gmq
- prefer_old: False
- long_pair: eng-gmq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-et | 86b01b0a61ac4372314010e98391893b33ac8445 | 2021-09-09T21:42:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"et",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-et | 22 | null | transformers | 7,998 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-et
* source languages: es
* target languages: et
* OPUS readme: [es-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-et/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-et/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-et/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-et/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.et | 20.7 | 0.466 |
|
Helsinki-NLP/opus-mt-fj-en | 5fa8cce1063eac808d065d4cf26349ba1f145073 | 2021-09-09T21:52:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fj",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fj-en | 22 | null | transformers | 7,999 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fj-en
* source languages: fj
* target languages: en
* OPUS readme: [fj-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fj.en | 31.0 | 0.471 |
| Tatoeba.fj.en | 79.7 | 0.835 |
|