modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ayameRushia/wav2vec2-large-xls-r-300m-ia | d2cddb054cb1b8f530ccaff34e0360ccc1274cf8 | 2022-03-23T18:29:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ia",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ayameRushia | null | ayameRushia/wav2vec2-large-xls-r-300m-ia | 4 | null | transformers | 18,400 | ---
language:
- ia
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-ia
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ia
metrics:
- name: Test WER using LM
type: wer
value: 8.6074
- name: Test CER using LM
type: cer
value: 2.4147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- Wer: 0.1253
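The card does not include inference code; below is a minimal usage sketch with the `transformers` ASR pipeline (the audio file name is a placeholder, and this plain pipeline call decodes greedily without the language model discussed below):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="ayameRushia/wav2vec2-large-xls-r-300m-ia")

# Transcribe a local recording (the model expects 16 kHz speech input).
result = asr("sample_interlingua.wav")
print(result["text"])
```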
## Training Procedure
Training was conducted in Google Colab; the training notebook is provided in the repo.
## Training and evaluation data
The language model was built from the processed sentences in the train + validation splits of the dataset (Common Voice 8.0 for Interlingua).
Evaluation is conducted in the notebook "notebook_evaluation_wav2vec2_ia.ipynb", available in the repo.
Test results without LM:
- WER = 20.1776 %
- CER = 4.7205 %
Test results with LM:
- WER = 8.6074 %
- CER = 2.4147 %
Evaluation using `eval.py`:
```bash
huggingface-cli login #login to huggingface for getting auth token to access the common voice v8
#running with LM
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-ia --dataset mozilla-foundation/common_voice_8_0 --config ia --split test
# running without LM
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-ia --dataset mozilla-foundation/common_voice_8_0 --config ia --split test --greedy
```
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map to `TrainingArguments` is shown after the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
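A sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is a placeholder; the authoritative script is the training notebook in the repo):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-ia",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_steps=400,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed-precision training
)
```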
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.432 | 1.87 | 400 | 2.9636 | 1.0 |
| 2.6922 | 3.74 | 800 | 2.2111 | 0.9977 |
| 1.2581 | 5.61 | 1200 | 0.4864 | 0.4028 |
| 0.6232 | 7.48 | 1600 | 0.2807 | 0.2413 |
| 0.4479 | 9.35 | 2000 | 0.2219 | 0.1885 |
| 0.3654 | 11.21 | 2400 | 0.1886 | 0.1606 |
| 0.323 | 13.08 | 2800 | 0.1716 | 0.1444 |
| 0.2935 | 14.95 | 3200 | 0.1687 | 0.1443 |
| 0.2707 | 16.82 | 3600 | 0.1632 | 0.1382 |
| 0.2559 | 18.69 | 4000 | 0.1507 | 0.1337 |
| 0.2433 | 20.56 | 4400 | 0.1572 | 0.1358 |
| 0.2338 | 22.43 | 4800 | 0.1489 | 0.1305 |
| 0.2258 | 24.3 | 5200 | 0.1485 | 0.1278 |
| 0.2218 | 26.17 | 5600 | 0.1470 | 0.1272 |
| 0.2169 | 28.04 | 6000 | 0.1470 | 0.1270 |
| 0.2117 | 29.91 | 6400 | 0.1452 | 0.1253 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
azunre/wav2vec2large-xlsr-akan | b806137e3f6f25c3a61172d1bb9576f0cce8cc2b | 2021-07-05T22:35:12.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tw",
"dataset:common_voice",
"transformers",
"speech",
"audio"
] | automatic-speech-recognition | false | azunre | null | azunre/wav2vec2large-xlsr-akan | 4 | null | transformers | 18,401 | ---
language: tw
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
---
|
azuur/wav2vec2-base-gn-demo | 9b519d86652fc08af7eff2d7c355ea8a0db042f7 | 2022-03-24T11:57:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gn",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | azuur | null | azuur/wav2vec2-base-gn-demo | 4 | null | transformers | 18,402 | ---
license: apache-2.0
language:
- gn
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-base-gn-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-gn-demo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7426
- Wer: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 50
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 4.0 | 100 | 0.7045 | 0.7409 |
| No log | 8.0 | 200 | 0.7200 | 0.75 |
| No log | 12.0 | 300 | 0.7400 | 0.7439 |
| No log | 16.0 | 400 | 0.7677 | 0.7515 |
| 0.0846 | 20.0 | 500 | 0.7765 | 0.7271 |
| 0.0846 | 24.0 | 600 | 0.7821 | 0.7287 |
| 0.0846 | 28.0 | 700 | 0.7671 | 0.7180 |
| 0.0846 | 32.0 | 800 | 0.7594 | 0.7180 |
| 0.0846 | 36.0 | 900 | 0.7500 | 0.7165 |
| 0.0713 | 40.0 | 1000 | 0.7351 | 0.7287 |
| 0.0713 | 44.0 | 1100 | 0.7361 | 0.7241 |
| 0.0713 | 48.0 | 1200 | 0.7389 | 0.7378 |
| 0.0713 | 52.0 | 1300 | 0.7424 | 0.7210 |
| 0.0713 | 56.0 | 1400 | 0.7425 | 0.7256 |
| 0.0669 | 60.0 | 1500 | 0.7426 | 0.7256 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
baffo32/genji-python-6B-split | 85b14e26f7946fe5d892834783cba343760000ba | 2021-08-21T13:33:22.000Z | [
"gpt_neo",
"text-generation",
"en",
"dataset:the Pile",
"arxiv:2104.09864",
"transformers",
"pytorch",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | baffo32 | null | baffo32/genji-python-6B-split | 4 | null | transformers | 18,403 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# Genji-python 6B
For example usage or to easily use the model you can check our colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Model Description
Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size.
The split model has its checkpoint split into multiple parts, which uses less system RAM while loading and makes loading faster.
This model needs more effort to set up as you need to install git-lfs and pull the repo.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on the [Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After pre-training, it was finetuned on the Python code taken from the Pile.
## Training procedure
Genji-python-6B was trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.
## Intended Use
This model is trained to assist with writing Python code and for having fun trying weird stuff with it.
### How to use
This model is only usable with our fork because GPT-J is not merged into the main transformers repo yet. Once it is merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)
to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```
**git-lfs** also needs to be installed, on ubuntu:
```bash
apt install git-lfs
```
after it's installed, initialize git-lfs:
```bash
git lfs install
```
then clone this repo:
```bash
git clone https://huggingface.co/NovelAI/genji-python-6B-split
```
Now we can load the model.
We recommend using the model in FP16. That way, it fits on 16 GB VRAM cards.
How to use:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GPTNeoForCausalLM,
)
model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
text = '''def print_customer_name'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```
When run, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
"""Print the name of a customer."""
if not self.is_valid():
return
print("Customer: {}".format(customer))
```
For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Eval results
TBD
## Acknowledgements
This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B.
Thanks to everyone who contributed to this project:
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz) |
baihaisheng/bert_finetuning_test | 1b41e044c3c2d5442cc99f01715c95f1a093999c | 2021-05-19T12:07:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | baihaisheng | null | baihaisheng/bert_finetuning_test | 4 | null | transformers | 18,404 | Entry not found |
banri/distilbert-base-uncased-finetuned-cola | c39594959dd1ee8951951e5cb44217978db0895f | 2021-11-13T09:52:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | banri | null | banri/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 18,405 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5258663312307151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7523
- Matthews Correlation: 0.5259
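The card does not show inference code; a minimal sketch with the text-classification pipeline follows (the label-to-meaning mapping is not stated in the card, so the `LABEL_0`/`LABEL_1` interpretation below is an assumption):
```python
from transformers import pipeline

# Load the fine-tuned CoLA (grammatical acceptability) classifier.
classifier = pipeline("text-classification", model="banri/distilbert-base-uncased-finetuned-cola")

# CoLA fine-tunes usually map LABEL_0 -> unacceptable and LABEL_1 -> acceptable (assumption).
print(classifier("The book was written by the author."))
```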
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.533 | 1.0 | 535 | 0.5318 | 0.3887 |
| 0.3562 | 2.0 | 1070 | 0.5145 | 0.5100 |
| 0.2429 | 3.0 | 1605 | 0.6558 | 0.4888 |
| 0.1831 | 4.0 | 2140 | 0.7523 | 0.5259 |
| 0.1352 | 5.0 | 2675 | 0.8406 | 0.5182 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
bchan007/fnctech | 8280dcd1b6066960e276b1c6c9f6d6fd3e524637 | 2022-02-17T05:25:26.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | bchan007 | null | bchan007/fnctech | 4 | null | sentence-transformers | 18,406 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# bchan007/fnctech
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bchan007/fnctech')
embeddings = model.encode(sentences)
print(embeddings)
```
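As a short follow-up for the clustering / semantic-search use case mentioned above, a sketch that scores the two example embeddings against each other (it assumes the `embeddings` computed in the previous block; `util.cos_sim` is part of sentence-transformers):
```python
from sentence_transformers import util

# Cosine similarity between the two example sentence embeddings.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity)
```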
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bchan007/fnctech')
model = AutoModel.from_pretrained('bchan007/fnctech')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bchan007/fnctech)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
bella/bert_finetuning_test | b5400fb34c9a6869ace19376f700fe4fa8a194c4 | 2021-05-19T12:27:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | bella | null | bella/bert_finetuning_test | 4 | null | transformers | 18,407 | Entry not found |
benbeshara/vic_presser_bot | 6a243b264a8d358c0a43d733721bb573c8066f74 | 2021-09-13T13:06:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | benbeshara | null | benbeshara/vic_presser_bot | 4 | null | transformers | 18,408 | Entry not found |
benjamin/roberta-base-wechsel-swahili | df2c5234d2986a55545ba3c13add477a9960b76e | 2022-07-13T23:44:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"sw",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | benjamin | null | benjamin/roberta-base-wechsel-swahili | 4 | null | transformers | 18,409 | ---
language: sw
license: mit
---
# roberta-base-wechsel-swahili
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
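The card does not include usage code; a minimal sketch with the fill-mask pipeline (the Swahili example sentence is an assumption, not taken from the card; `<mask>` is RoBERTa's mask token):
```python
from transformers import pipeline

# Load the Swahili RoBERTa model as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-swahili")

# Example sentence (assumption): "Nairobi is the capital city of <mask>."
print(fill_mask("Nairobi ni mji mkuu wa <mask>."))
```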
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
benjaminbeilharz/distilbert-base-uncased-next-turn-classifier | b6bb858994454eff91ca261316bd4a751f71123d | 2022-02-22T15:10:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | benjaminbeilharz | null | benjaminbeilharz/distilbert-base-uncased-next-turn-classifier | 4 | null | transformers | 18,410 | Entry not found |
beomi/kcgpt2-dev | f4a830b8d173df81805dbff6f569b52b4c67409f | 2021-05-21T14:11:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | beomi | null | beomi/kcgpt2-dev | 4 | null | transformers | 18,411 | Entry not found |
bergurth/XLMR-ENIS-finetuned-ner | c09cd7b36dd7aad2977490b84abb18b8419e0a9f | 2021-10-05T21:52:34.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | bergurth | null | bergurth/XLMR-ENIS-finetuned-ner | 4 | null | transformers | 18,412 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Bónus feðgarnir Jóhannes Jónsson og Jón Ásgeir Jóhannesson opnuðu fyrstu Bónusbúðina í 400 fermetra húsnæði við Skútuvog laugardaginn 8. apríl 1989
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.861851332398317
- name: Recall
type: recall
value: 0.8384309266628767
- name: F1
type: f1
value: 0.849979828251974
- name: Accuracy
type: accuracy
value: 0.9830620929487668
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0938
- Precision: 0.8619
- Recall: 0.8384
- F1: 0.8500
- Accuracy: 0.9831
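The card does not include inference code; a minimal sketch using the token-classification pipeline with the example sentence from the card's widget (the aggregation strategy is a choice made here, not something the card specifies):
```python
from transformers import pipeline

ner = pipeline("ner", model="bergurth/XLMR-ENIS-finetuned-ner", aggregation_strategy="simple")

# Example sentence from the widget above.
text = ("Bónus feðgarnir Jóhannes Jónsson og Jón Ásgeir Jóhannesson opnuðu fyrstu "
        "Bónusbúðina í 400 fermetra húsnæði við Skútuvog laugardaginn 8. apríl 1989")
print(ner(text))
```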
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0574 | 1.0 | 2904 | 0.0983 | 0.8374 | 0.8061 | 0.8215 | 0.9795 |
| 0.0321 | 2.0 | 5808 | 0.0991 | 0.8525 | 0.8235 | 0.8378 | 0.9811 |
| 0.0179 | 3.0 | 8712 | 0.0938 | 0.8619 | 0.8384 | 0.8500 | 0.9831 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
berkergurcay/1k-fineutuned-bert-model | bd51e6e04225b80111b647d1d9aed178cb0cf506 | 2021-05-23T14:40:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | berkergurcay | null | berkergurcay/1k-fineutuned-bert-model | 4 | null | transformers | 18,413 | Entry not found |
berkergurcay/finetuned-roberta | 1c1dbc039e7548c114e0749add428c549fe9aef4 | 2021-06-14T12:12:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | berkergurcay | null | berkergurcay/finetuned-roberta | 4 | null | transformers | 18,414 | Entry not found |
bharat-raghunathan/Tamil-Wav2Vec-xls-r-300m-Tamil-colab | 7228019408ebbebe11d446cb57c9f1c242728c7f | 2022-02-11T04:43:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"ta",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bharat-raghunathan | null | bharat-raghunathan/Tamil-Wav2Vec-xls-r-300m-Tamil-colab | 4 | null | transformers | 18,415 | ---
license: apache-2.0
tags:
- generated_from_trainer
- ta
- robust-speech-event
datasets:
- common_voice
model-index:
- name: Tamil-Wav2Vec-xls-r-300m-Tamil-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tamil-Wav2Vec-xls-r-300m-Tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
bierus/distilbert_bookreviews | aae48b527f79b1337e6f91c0fb22d492ea26596a | 2022-01-11T23:45:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | bierus | null | bierus/distilbert_bookreviews | 4 | null | transformers | 18,416 | Entry not found |
bigscience/T0_single_prompt | 180c5ff79cfb97fcd25f178578d85ec2d9a6698f | 2022-06-21T01:27:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | bigscience | null | bigscience/T0_single_prompt | 4 | null | transformers | 18,417 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
**How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
**Official repository**: [bigscience-workshop/t-zero](https://github.com/bigscience-workshop/t-zero)
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son's of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.**
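Following that note, a minimal sketch of loading the checkpoint with bf16 weights via `torch_dtype` (this loading flag is an assumption about how to apply the recommendation, and it requires hardware with bf16 support):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Load in bf16 rather than fp16/fp32, matching the training precision.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", torch_dtype=torch.bfloat16)
```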
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
# Training data
We trained different variants T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original tasks templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text.
# Bias and fairness
Even though we deliberately excluded datasets with potentially harmful content from the fine-tuning data, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
- Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex`
- Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault`
- Input: `what is something everyone hates, but you like?` - Prediction: `sex`
- Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex`
- Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut`
- Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
<table>
<tr>
<td>Dataset</td>
<td>Model</td>
<td>Average (Acc.)</td>
<td>Median (Acc.)</td>
</tr>
<tr>
<td rowspan="10">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
</tr>
<td>T0p</td><td>57.6</td><td>83.8</td>
<tr>
</tr>
<td>T0pp</td><td>62.7</td><td>64.4</td>
<tr>
</tr>
<td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
<tr>
</tr>
<td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
<tr>
</tr>
<td>T0_3B</td><td>56.9</td><td>82.6</td>
</tr>
<tr>
<td rowspan="10">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
</tr>
<td>T0p</td><td>80.1</td><td>80.6</td>
<tr>
</tr>
<td>T0pp</td><td>89.2</td><td>90.0</td>
<tr>
</tr>
<td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
<tr>
</tr>
<td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
<tr>
</tr>
<td>T0_3B</td><td>69.7</td><td>69.4</td>
</tr>
</table>
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2">Subset</td>
<td colspan="3">Average (Acc.)</td>
<td colspan="3">Median (Acc.)</td>
</tr>
<tr>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
</tr>
<tr>
<td rowspan="2">T0</td><td>Type 1</td>
<td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
</tr>
<td>Type 2</td>
<td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
</tr>
</tr>
<td rowspan="2">T0p</td>
<td>Type 1</td>
<td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
</tr>
</tr>
<td>Type 2</td>
<td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
</tr>
</tr>
<td rowspan="2">T0pp</td>
<td>Type 1</td>
<td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
</tr>
</tr>
<td>Type 2</td>
<td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
</tr>
</tr>
<td rowspan="2">T0_single_prompt</td>
<td>Type 1</td>
<td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
</tr>
</tr>
<td>Type 2</td>
<td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
</tr>
</tr>
<td rowspan="2">T0_original_task_only</td>
<td>Type 1</td>
<td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
</tr>
</tr>
<td> Type 2</td>
<td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
</tr>
</tr>
<td rowspan="2">T0_3B</td>
<td>Type 1</td>
<td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
</tr>
</tr>
<td> Type 2</td>
<td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td>
</tr>
</table>
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
binwang/bert-large-nli-stsb | 515e64346aaec25cecc5b9f813e96754bbd8a17d | 2021-05-19T12:45:07.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | binwang | null | binwang/bert-large-nli-stsb | 4 | null | transformers | 18,418 | Entry not found |
binwang/bert-large-nli | 5fb13275756a8e49c59b72234c4350fc10ec63e1 | 2021-05-19T12:47:28.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | binwang | null | binwang/bert-large-nli | 4 | null | transformers | 18,419 | Entry not found |
birgermoell/ner-swedish-wikiann | b91f5df794b983a6a536ffec62ac5ea20f0daacf | 2021-08-17T15:28:47.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wikiann",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | birgermoell | null | birgermoell/ner-swedish-wikiann | 4 | null | transformers | 18,420 | ---
license: apache-2.0
tags:
- token-classification
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-swedish-wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
metrics:
- name: Precision
type: precision
value: 0.8331921416757433
- name: Recall
type: recall
value: 0.84243586083126
- name: F1
type: f1
value: 0.8377885044416501
- name: Accuracy
type: accuracy
value: 0.91930707459758
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-swedish-wikiann
This model is a fine-tuned version of [nordic-roberta-wiki](https://huggingface.co/flax-community/nordic-roberta-wiki) trained for NER on the wikiann dataset.
eval F1-Score: **83.78**
test F1-Score: **83.76**
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("birgermoell/ner-swedish-wikiann")
model = AutoModelForTokenClassification.from_pretrained("birgermoell/ner-swedish-wikiann")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jag heter Per och jag jobbar på KTH"
nlp(example)
```
<!--
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.3156
- Precision: 0.8332
- F1: 0.8378
- Accuracy: 0.9193
It achieves the following results on the test set:
- Loss: 0.3023
- Precision: 0.8301
- Recall: 0.8452
- F1: 0.8376
- Accuracy: 0.92
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2
-->
|
birgermoell/wav2vec2-swedish-common-voice | 8c5d63a36537a16d79880579f6a5481e0c227523 | 2021-07-05T23:29:12.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/wav2vec2-swedish-common-voice | 4 | 1 | transformers | 18,421 | ---
language: sv
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Swedish by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 36.91
---
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. The training data amounts to 402 MB.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.91 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1KkD4PeZwnIwxxxOP1bUE7XTZMK7-SzRj?usp=sharing)
|
bitmorse/autonlp-ks-530615016 | 7738f51f039dccbd9f152f170305d47e758ac48a | 2022-01-26T11:40:24.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:bitmorse/autonlp-data-ks",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | bitmorse | null | bitmorse/autonlp-ks-530615016 | 4 | null | transformers | 18,422 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bitmorse/autonlp-data-ks
co2_eq_emissions: 2.2247356264808964
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 530615016
- CO2 Emissions (in grams): 2.2247356264808964
## Validation Metrics
- Loss: 0.7859578132629395
- Accuracy: 0.676854818831649
- Macro F1: 0.3297126297995653
- Micro F1: 0.676854818831649
- Weighted F1: 0.6429522696884535
- Macro Precision: 0.33152557743856437
- Micro Precision: 0.676854818831649
- Weighted Precision: 0.6276125515413322
- Macro Recall: 0.33784302289888885
- Micro Recall: 0.676854818831649
- Weighted Recall: 0.676854818831649
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bitmorse/autonlp-ks-530615016
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
bitsanlp/distilbert-base-uncased-finetuned-emotion | 84433818953c27d879703e9db16b08832600ea8b | 2022-02-08T17:57:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | bitsanlp | null | bitsanlp/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 18,423 | Entry not found |
biu-nlp/cdlm | c1058695d788d1e76eeddd3c6d434d110cb6164b | 2021-10-17T12:24:59.000Z | [
"pytorch",
"longformer",
"fill-mask",
"en",
"arxiv:2101.00406",
"transformers",
"cdlm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | biu-nlp | null | biu-nlp/cdlm | 4 | null | transformers | 18,424 | ---
language: en
tags:
- longformer
- cdlm
license: apache-2.0
inference: false
---
# Cross-Document Language Modeling
CDLM: Cross-Document Language Modeling.
Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. [PDF](https://arxiv.org/pdf/2101.00406.pdf)
Please note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are `<doc-s>`, `</doc-s>` (the last two tokens in the vocabulary), and `<s>`, `</s>`, respectively.
```python
from transformers import AutoTokenizer, AutoModel
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm')
model = AutoModel.from_pretrained('biu-nlp/cdlm')
```
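A minimal sketch of feeding cross-document input wrapped with these separators (the two documents below are made-up placeholders, and the exact formatting expected by a given downstream task may differ):
```python
# Hypothetical example: two short documents wrapped with the document and
# sentence separator tokens described above, encoded with the loaded tokenizer.
doc1 = "<doc-s> <s> The first sentence of document one. </s> <s> A second sentence. </s> </doc-s>"
doc2 = "<doc-s> <s> The only sentence of document two. </s> </doc-s>"

inputs = tokenizer(doc1 + " " + doc2, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```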
The original repo is [here](https://github.com/aviclu/CDLM).
If you find our work useful, please cite the paper as:
```bibtex
@article{caciularu2021cross,
title={Cross-Document Language Modeling},
author={Caciularu, Avi and Cohan, Arman and Beltagy, Iz and Peters, Matthew E and Cattan, Arie and Dagan, Ido},
journal={Findings of the Association for Computational Linguistics: EMNLP 2021},
year={2021}
}
``` |
boronbrown48/topic_generalFromOther_v1 | 8a90053ab7e4ecb4ad7b8471f594016ecf35521b | 2021-11-24T17:04:05.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | boronbrown48 | null | boronbrown48/topic_generalFromOther_v1 | 4 | null | transformers | 18,425 | Entry not found |
boronbrown48/wangchanberta-sentiment-504-v4 | 5e0db6fcbde4b893be86d4f2c3ce94c79c3e160c | 2021-11-25T04:33:20.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | boronbrown48 | null | boronbrown48/wangchanberta-sentiment-504-v4 | 4 | null | transformers | 18,426 | Entry not found |
boronbrown48/wangchanberta-sentiment-v2 | e6fe50e0166d90e3bffea7387746ca19de82b7af | 2021-11-24T03:15:21.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | boronbrown48 | null | boronbrown48/wangchanberta-sentiment-v2 | 4 | null | transformers | 18,427 | Entry not found |
boychaboy/MNLI_bert-base-cased | 69f46a848055e8c3fa771e0aa6a837bab0f39663 | 2021-05-19T13:10:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/MNLI_bert-base-cased | 4 | null | transformers | 18,428 | Entry not found |
boychaboy/MNLI_bert-large-uncased | c233b35535b5beb57b6589e8dab8fec106cc3173 | 2021-05-19T13:22:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/MNLI_bert-large-uncased | 4 | null | transformers | 18,429 | Entry not found |
boychaboy/MNLI_distilbert-base-cased | 4ab1d9c72f5dc83a58d95cda2c5f08f71a8c7cbf | 2021-05-10T17:20:24.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/MNLI_distilbert-base-cased | 4 | null | transformers | 18,430 | Entry not found |
boychaboy/MNLI_distilbert-base-cased_2 | 5998fa63837d7e4a9fd965ccaf6633f655a0ae67 | 2021-05-13T16:35:57.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/MNLI_distilbert-base-cased_2 | 4 | null | transformers | 18,431 | Entry not found |
boychaboy/MNLI_distilbert-base-uncased | 0515fe0ec1bbbfbc7ce9d3f1a6cefc6da3dbf292 | 2021-05-15T06:47:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/MNLI_distilbert-base-uncased | 4 | null | transformers | 18,432 | Entry not found |
boychaboy/SNLI_bert-large-cased | de4a97b902406e8db1d125bdf22e21d37ce34231 | 2021-05-19T13:27:09.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/SNLI_bert-large-cased | 4 | null | transformers | 18,433 | Entry not found |
boychaboy/kobias_klue-roberta-small | 1f3abdc67cfafe377229dcbf7885eac02ccfe5f6 | 2021-07-07T05:33:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | boychaboy | null | boychaboy/kobias_klue-roberta-small | 4 | null | transformers | 18,434 | Entry not found |
briverse/vi-electra-large-cased | 61732577d8283a2d4370e9972d40800228c6df97 | 2021-02-04T15:27:17.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-large-cased | 4 | null | transformers | 18,435 | Entry not found |
briverse/vi-electra-large-uncased | 645b2efc1849d2fa92bd6b58d95668024ab4bd1d | 2021-02-04T15:23:18.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-large-uncased | 4 | null | transformers | 18,436 | Entry not found |
bstad/a-different-bert-model | d92111a3b75af9ea69905426a6080399234a6f30 | 2021-12-28T01:58:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | bstad | null | bstad/a-different-bert-model | 4 | null | transformers | 18,437 | Entry not found |
bullmount/xlm-roberta-base-finetuned-panx-it | 4e381b55fb918194fabf9ee7a66bfbea575e8242 | 2022-02-27T08:04:14.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | bullmount | null | bullmount/xlm-roberta-base-finetuned-panx-it | 4 | null | transformers | 18,438 | ---
license: mit
widget:
- text: "Luigi è nato a Roma."
- text: "Antonio ha chiesto ad Alessia di recarsi alla sede INAIL."
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.9097618003799502
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1417
- F1: 0.9098
## Model description
More information needed
## Intended uses & limitations
More information needed
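A minimal usage sketch with one of the widget examples above (the entity label names come from the PAN-X.it tag set used for fine-tuning):
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint;
# aggregation_strategy merges sub-word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="bullmount/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)

print(ner("Luigi è nato a Roma."))
```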
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2754 | 1.0 | 834 | 0.1683 | 0.8717 |
| 0.1366 | 2.0 | 1668 | 0.1449 | 0.8921 |
| 0.0863 | 3.0 | 2502 | 0.1417 | 0.9098 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
byeongal/kobart | 7f0f2c8f5adcad7b0c448eda302d6dc917bd4903 | 2021-06-22T08:29:48.000Z | [
"pytorch",
"bart",
"feature-extraction",
"ko",
"transformers",
"license:mit"
] | feature-extraction | false | byeongal | null | byeongal/kobart | 4 | null | transformers | 18,439 | ---
license: mit
language: ko
tags:
- bart
---
# kobart model for Teachable NLP
- This model was forked from [kobart](https://huggingface.co/hyunwoongko/kobart) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
|
cahya/wav2vec2-large-xlsr-indonesian | fe66c9f1114e958d0c08de5dcc7e82bb8001d4a1 | 2021-07-05T23:55:41.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-indonesian | 4 | null | transformers | 18,440 | ---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 25.86
---
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.86 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
caioamb/bert-base-uncased-finetuned-md | 0e6fa53e3900c0c4e430f74cc43d655c61637927 | 2021-12-28T01:22:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | caioamb | null | caioamb/bert-base-uncased-finetuned-md | 4 | null | transformers | 18,441 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-md
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-md
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2415 | 1.0 | 1044 | 0.2084 |
| 0.1244 | 2.0 | 2088 | 0.2903 |
| 0.0427 | 3.0 | 3132 | 0.3329 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
caixin1998/chinese-poetry-gpt2 | a685e7fef12296ff97cced7ef812f693f8e5372c | 2021-05-21T14:43:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | caixin1998 | null | caixin1998/chinese-poetry-gpt2 | 4 | null | transformers | 18,442 | Entry not found |
camille/bert-base-pruned-voc-esw0.5-40000-en-de-cased | c559af042394daa9499a0d2fab19e190152b9195 | 2021-05-19T13:52:49.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.5-40000-en-de-cased | 4 | null | transformers | 18,443 | Entry not found |
camille/bert-base-pruned-voc-esw0.9-40000-en-fr-cased | 0da9a3c4c57e5b2eb11a4aecd18684ce2118f050 | 2021-05-19T13:57:46.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.9-40000-en-fr-cased | 4 | null | transformers | 18,444 | Entry not found |
canwenxu/ssr-base | f679f3818597c99ea5e46515084c2770d5ed1677 | 2021-11-17T05:03:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | canwenxu | null | canwenxu/ssr-base | 4 | null | transformers | 18,445 | Entry not found |
caps1994/DialoGPT-small-harrypotter-caps1994 | adccfbeefa00c4eb642d628fcde2050ed26974dd | 2021-09-03T05:04:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | caps1994 | null | caps1994/DialoGPT-small-harrypotter-caps1994 | 4 | null | transformers | 18,446 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
carlosaguayo/pegasus-samsum | 0dd089bbc5eb492b8c945ba55c2cc2f3147836cd | 2022-01-27T06:14:31.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | carlosaguayo | null | carlosaguayo/pegasus-samsum | 4 | null | transformers | 18,447 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4842
## Model description
More information needed
## Intended uses & limitations
More information needed
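A minimal usage sketch (SAMSum is a dialogue-summarization dataset, so the input below is a made-up chat transcript):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="carlosaguayo/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```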
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7197 | 0.54 | 500 | 1.4842 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
cartyparty/DialoGPT-small-harrypotter | 044b77b472aaf7b578f798d60f44c8611fcf2577 | 2021-08-30T03:22:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cartyparty | null | cartyparty/DialoGPT-small-harrypotter | 4 | null | transformers | 18,448 | ---
tags:
- conversational
---
# Harry Potter Bot |
cataremix15/distilbert-tiln-proj | 8e2dff8fbb117033c9e5deb9279559f374fc97bc | 2021-05-17T19:13:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | cataremix15 | null | cataremix15/distilbert-tiln-proj | 4 | null | transformers | 18,449 | Entry not found |
catpotat/vinagpt2-alpha | ed341d5ed56368af11a3946049a0b6f25e85b9a6 | 2021-05-21T14:46:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | catpotat | null | catpotat/vinagpt2-alpha | 4 | null | transformers | 18,450 | Entry not found |
ceyda/wav2vec2-base-760 | cce2bc550fc117377ac89e136d6c92848fcff95b | 2021-07-06T00:16:35.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"transformers"
] | feature-extraction | false | ceyda | null | ceyda/wav2vec2-base-760 | 4 | null | transformers | 18,451 | Pretrained on 720h~ of Turkish speech data
TBA |
chaitanya97/wav2vec2-large-xls-r-3 | 208981428a882b42c9f78621f9174f8c438660e8 | 2022-02-16T16:03:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chaitanya97 | null | chaitanya97/wav2vec2-large-xls-r-3 | 4 | null | transformers | 18,452 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
chaitanya97/wav2vec2-large-xls-r-300m-hindi-colab | d1bae4ec831c618079e5b4e3157415ea81d9804d | 2022-02-16T11:24:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chaitanya97 | null | chaitanya97/wav2vec2-large-xls-r-300m-hindi-colab | 4 | null | transformers | 18,453 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2810
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 23.4144 | 0.8 | 4 | 29.5895 | 1.0 |
| 19.1336 | 1.6 | 8 | 18.3354 | 1.0 |
| 12.1562 | 2.4 | 12 | 11.2065 | 1.0 |
| 8.1523 | 3.2 | 16 | 8.8674 | 1.0 |
| 6.807 | 4.0 | 20 | 7.8106 | 1.0 |
| 6.1583 | 4.8 | 24 | 7.2810 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
charsiu/en_w2v2_fs_32k | 7ce02c89bf5f97034f42d2faae48e8a3b7543dc7 | 2021-10-04T15:19:14.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/en_w2v2_fs_32k | 4 | null | transformers | 18,454 | Entry not found |
chinhon/pegasus-multi_news-summarizer_01 | 3e49962b87c1b13d0a87a683ed889170058799af | 2021-11-06T21:31:47.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/pegasus-multi_news-summarizer_01 | 4 | null | transformers | 18,455 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-multi_news-summarizer_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-summarizer_01
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2794
- Rouge1: 52.1693
- Rouge2: 34.8989
- Rougel: 41.2385
- Rougelsum: 48.4365
- Gen Len: 98.6433
## Model description
More information needed
## Intended uses & limitations
More information needed
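A minimal usage sketch (the base model targets news summarization; replace the placeholder string with the article text to summarize):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="chinhon/pegasus-multi_news-summarizer_01"
)

# Placeholder input: substitute the full article text here.
article = "Full text of the news article(s) to summarize goes here."
summary = summarizer(article, max_length=128, min_length=30)[0]["summary_text"]
print(summary)
```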
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.3936 | 1.0 | 16113 | 1.2972 | 51.5747 | 34.2062 | 40.7279 | 47.7783 | 95.0004 |
| 1.3664 | 2.0 | 32226 | 1.2817 | 52.1077 | 34.8189 | 41.1614 | 48.3894 | 100.3265 |
| 1.3002 | 3.0 | 48339 | 1.2794 | 52.1693 | 34.8989 | 41.2385 | 48.4365 | 98.6433 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
chisadi/nice-distilbert | 7e24ea6d29ee533411c863ae8f84579bee0c58ee | 2021-11-01T17:53:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | chisadi | null | chisadi/nice-distilbert | 4 | null | transformers | 18,456 | Entry not found |
chmanoj/xls-r-2B-te | 36072e6d18929d54c954b8f559ec75120e9574fe | 2022-03-24T11:55:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"te",
"dataset:openslr",
"dataset:SLR66",
"transformers",
"openslr_SLR66",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chmanoj | null | chmanoj/xls-r-2B-te | 4 | null | transformers | 18,457 | ---
language:
- te
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr_SLR66
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- openslr
- SLR66
metrics:
- wer
model-index:
- name: xls-r-1B-te
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: openslr
name: Open SLR
args: SLR66
metrics:
- type: wer
value: 0.51
name: Test WER
- type: cer
value: 0.097
name: Test CER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- Wer: 0.5109
### Evaluation metrics
| Metric | Split | Decode with LM | Value |
|:------:|:------:|:--------------:|:---------:|
| WER | Train | No | |
| CER | Train | No | |
| WER | Test | No | |
| CER | Test | No | |
| WER | Train | Yes | |
| CER | Train | Yes | |
| WER | Test | Yes | |
| CER | Test | Yes | |
## Model description
More information needed
## Intended uses & limitations
More information needed
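A minimal transcription sketch (the audio path is a placeholder for a 16 kHz mono Telugu recording; note this is a 2B-parameter checkpoint, so a GPU is strongly recommended):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="chmanoj/xls-r-2B-te")

# "telugu_sample.wav" is a placeholder path; decoding a local file requires ffmpeg.
print(asr("telugu_sample.wav")["text"])
```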
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- learning_rate: 3e-6
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 150.0
- hidden_dropout: 0.15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
chrommium/bert-base-multilingual-cased-finetuned-news-headlines | 6f673c897ccecffdffb9c235e115e6d9adb40981 | 2021-08-17T15:46:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | chrommium | null | chrommium/bert-base-multilingual-cased-finetuned-news-headlines | 4 | null | transformers | 18,458 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model_index:
- name: bert-base-multilingual-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Accuracy
type: accuracy
value: 0.9755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1729
- Accuracy: 0.9755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5119 | 1.0 | 625 | 0.2386 | 0.922 |
| 0.2536 | 2.0 | 1250 | 0.2055 | 0.949 |
| 0.1718 | 3.0 | 1875 | 0.1733 | 0.969 |
| 0.0562 | 4.0 | 2500 | 0.1661 | 0.974 |
| 0.0265 | 5.0 | 3125 | 0.1729 | 0.9755 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
chrommium/sbert_large-finetuned-sent_in_news_sents_3lab | 1bacf24521a9546ff5d205a3248cd86c746dee57 | 2021-10-11T13:29:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | chrommium | null | chrommium/sbert_large-finetuned-sent_in_news_sents_3lab | 4 | null | transformers | 18,459 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents_3lab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents_3lab
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9443
- Accuracy: 0.8580
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
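A minimal usage sketch (inputs are Russian news sentences; the mapping of the three class ids to concrete sentiment labels is defined by the training data and is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chrommium/sbert_large-finetuned-sent_in_news_sents_3lab",
)

# Made-up Russian news sentence used purely as an illustration.
print(classifier("Акции компании выросли после публикации отчёта."))
```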
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 |
| 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 |
| 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 |
| 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 |
| 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 |
| 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 |
| 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 |
| 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 |
| 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 |
| 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 |
| 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
chujiezheng/blenderbot_small-90M-ESC | cf25a85f9841fa9e6e5af48f4dbabcdde87e40ef | 2021-08-09T02:13:58.000Z | [
"pytorch",
"blenderbot-small",
"text2text-generation",
"arxiv:2106.01144",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | chujiezheng | null | chujiezheng/blenderbot_small-90M-ESC | 4 | null | transformers | 18,460 | [blenderbot_small-90M](https://huggingface.co/facebook/blenderbot_small-90M) fine-tuned on [Emotional Support Conversation](https://arxiv.org/pdf/2106.01144.pdf) dataset |
clairesb/kindness_bot | 4779fb04017237bceccf7d86b5ba346bf0776415 | 2021-10-26T00:09:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | clairesb | null | clairesb/kindness_bot | 4 | null | transformers | 18,461 | ---
tags:
- conversational
---
# A somewhat positive chatbot |
conversify/response-score | 06f46a3b79d9f8ed28c60d80d186f8aef18bfff2 | 2021-05-19T14:25:00.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | conversify | null | conversify/response-score | 4 | null | transformers | 18,462 | hello
|
cstorm125/wangchanberta-base-wiki-20210520-news-spm-finetune-qa | dcb64bbfb720e13b1a8618ccd39d7080c54757ba | 2021-07-14T07:35:27.000Z | [
"pytorch",
"camembert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | cstorm125 | null | cstorm125/wangchanberta-base-wiki-20210520-news-spm-finetune-qa | 4 | null | transformers | 18,463 | ---
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# wangchanberta-base-wiki-20210520-news-spm-finetune-qa
Finetuning `airesearchth/wangchanberta-base-wiki-20210520-news-spm` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16
```
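A minimal inference sketch using the widget example above (the context is abbreviated here for readability):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="cstorm125/wangchanberta-base-wiki-20210520-news-spm-finetune-qa",
)

question = "สวนกุหลาบเป็นโรงเรียนอะไร"
context = (
    "โรงเรียนสวนกุหลาบวิทยาลัย เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ "
    "โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
)
print(qa(question=question, context=context))
```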
|
cstorm125/wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa | 4d2b4a3eb184924dc5c6483e032f4516487beee8 | 2021-07-14T07:41:41.000Z | [
"pytorch",
"camembert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | cstorm125 | null | cstorm125/wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa | 4 | null | transformers | 18,464 | ---
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa
Finetuning `airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16 \
--use_auth_token
``` |
cyl/adapter_t5-3b_qqp | f99c7c898659f1545399a8de456a738d1bc3b3ef | 2022-02-15T08:49:43.000Z | [
"pytorch",
"transformers"
] | null | false | cyl | null | cyl/adapter_t5-3b_qqp | 4 | null | transformers | 18,465 | Entry not found |
d4niel92/xlm-roberta-base-finetuned-marc-en | 6b51060bb3d54ad6e11a7510c461d3c47c57bd5c | 2021-10-22T12:58:11.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | d4niel92 | null | d4niel92/xlm-roberta-base-finetuned-marc-en | 4 | null | transformers | 18,466 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
- Mae: 0.4268
## Model description
More information needed
## Intended uses & limitations
More information needed
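A minimal usage sketch (in the amazon_reviews_multi setup the predicted class usually corresponds to a star rating; the review text below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="d4niel92/xlm-roberta-base-finetuned-marc-en",
)

print(classifier("The product arrived quickly and works exactly as described."))
```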
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.092 | 1.0 | 235 | 0.9514 | 0.5122 |
| 0.9509 | 2.0 | 470 | 0.8976 | 0.4268 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
d8oss/giw-medium | c33f2a38def59edb0cbb560389062409543d8c95 | 2021-09-14T11:04:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | d8oss | null | d8oss/giw-medium | 4 | null | transformers | 18,467 | Entry not found |
damien-ir/kosentelectra-discriminator-v2 | 4f78e95e9d1df545b19140b4cff86fa628eb5686 | 2020-09-15T09:10:42.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | damien-ir | null | damien-ir/kosentelectra-discriminator-v2 | 4 | null | transformers | 18,468 | Entry not found |
damien-ir/kosentelectra-discriminator-v5 | 9f207f08f1f5ca601c0c20a46c5ebb900ca5dc10 | 2020-09-29T08:00:43.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | damien-ir | null | damien-ir/kosentelectra-discriminator-v5 | 4 | null | transformers | 18,469 | Entry not found |
damien-ir/kosentelectra-generator-v3 | d859ccc36fbed4754f4aa4faba5a890864dd0005 | 2020-09-29T07:45:16.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | damien-ir | null | damien-ir/kosentelectra-generator-v3 | 4 | null | transformers | 18,470 | Entry not found |
damlab/HIV_PR_resist | cc42f68559f96798b37e2df38684a40a681d3588 | 2022-02-24T20:28:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | damlab | null | damlab/HIV_PR_resist | 4 | null | transformers | 18,471 |
---
license: mit
---
# HIV_PR_resist model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Protease-Resistance model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the [Stanford HIV Genotype-Phenotype Database](https://hivdb.stanford.edu/pages/genotype-phenotype.html), allowing even more precise prediction of protease inhibitor resistance than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, though with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.
## Intended Uses & Limitations
This tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.
## How to use
*Prediction example of protease sequences*
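A minimal sketch, assuming the checkpoint loads as a standard sequence-classification model (the protease sequence below is an illustrative placeholder, and the drug names attached to each output position come from the model's config, which is not reproduced here):
```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("damlab/HIV_PR_resist")
model = AutoModelForSequenceClassification.from_pretrained("damlab/HIV_PR_resist")

# Placeholder protease sequence in single-letter amino-acid codes.
sequence = "PQITLWQRPLVTIKIGGQLKEALLDTGADDTVLEEMNLPGRWKPKMIGG"

# Mirror the preprocessing described below: map rare residues to X and
# put a space between every amino acid.
sequence = re.sub(r"[UZOB]", "X", sequence)
spaced = " ".join(sequence)

inputs = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: one sigmoid probability per protease inhibitor.
probs = torch.sigmoid(logits).squeeze().tolist()
print(probs)
```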
## Training Data
This model was trained using the [damlab/HIV-PI dataset](https://huggingface.co/datasets/damlab/HIV_PI) using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an `AutoModelForSequenceClassification` model. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be resistant to multiple drugs), the loss was calculated as the binary cross-entropy for each category. The BCE was weighted by the inverse of the class ratio to compensate for class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed] |
danasone/testpush | 208ef916a250fc367a778cc74a50e8ee5dffa5e8 | 2022-01-01T20:37:59.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | danasone | null | danasone/testpush | 4 | null | transformers | 18,472 | Entry not found |
danurahul/alex-gpt2000 | 2ac6d247343f1ca408ead1e8cc17fdcd805cd769 | 2021-05-21T15:17:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/alex-gpt2000 | 4 | null | transformers | 18,473 | Entry not found |
danyaljj/opengpt2_pytorch_forward | 76cc4809c41dd44fbcbf2f57ee7a65fe32f07b18 | 2021-06-16T20:30:01.000Z | [
"pytorch",
"transformers"
] | null | false | danyaljj | null | danyaljj/opengpt2_pytorch_forward | 4 | null | transformers | 18,474 | West et al.'s model from their "reflective decoding" paper.
Sample usage:
```python
import torch
from modeling_opengpt2 import OpenGPT2LMHeadModel
from padded_encoder import Encoder
path_to_forward = 'danyaljj/opengpt2_pytorch_forward'
encoder = Encoder()
model_forward = OpenGPT2LMHeadModel.from_pretrained(path_to_forward)
input_text = "She tried to win but"
input_ids = encoder.encode(input_text)
input_ids = torch.tensor([input_ids], dtype=torch.long)  # generate() expects int64 token ids
print(input_ids)
output = model_forward.generate(input_ids)
output_text = encoder.decode(output.tolist()[0])
print(output_text)
```
Download the additional files from here: https://github.com/peterwestuw/GPT2ForwardBackward
|
darkzara/results | 674575a7f6720d418a20651d1b83aca26c917144 | 2022-01-18T14:32:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | darkzara | null | darkzara/results | 4 | null | transformers | 18,475 | Entry not found |
darubramha/hi-LyricsGPT2 | 42d1364b1c0ac5da2306f25334ecdc29d9931ea8 | 2021-06-05T21:48:55.000Z | [
"pytorch"
] | null | false | darubramha | null | darubramha/hi-LyricsGPT2 | 4 | null | null | 18,476 | Hi
|
daveripper0020/essaygpt2 | 6566589d48924e1815242793c0bc8e2ff704159c | 2021-10-13T17:23:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | daveripper0020 | null | daveripper0020/essaygpt2 | 4 | null | transformers | 18,477 | Entry not found |
dbernsohn/roberta-go | 807ad5e844c18b972ece57c3126eaff0995ba4b5 | 2021-05-20T15:53:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"Go",
"dataset:code_search_net",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbernsohn | null | dbernsohn/roberta-go | 4 | null | transformers | 18,478 | # roberta-go
---
language: Go
datasets:
- code_search_net
---
This is a [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Golang** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-go")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-go")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in Go code.
```python
code = """
package main
import (
"fmt"
"runtime"
)
func main() {
fmt.Print("Go runs on ")
switch os := runtime.<mask>; os {
case "darwin":
fmt.Println("OS X.")
case "linux":
fmt.Println("Linux.")
default:
// freebsd, openbsd,
// plan9, windows...
fmt.Printf("%s.\n", os)
}
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
[('GOOS', 0.11810332536697388),
('FileInfo', 0.04276798665523529),
('Stdout', 0.03572738170623779),
('Getenv', 0.025064032524824142),
('FileMode', 0.01462600938975811)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/) |
dbmdz/bert-mini-historic-multilingual-cased | 5062c99aca557f52a95f107f588dbb151c4c0ec3 | 2021-12-06T14:24:48.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-mini-historic-multilingual-cased | 4 | null | transformers | 18,479 | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
**Notice**: We have previously released language models for Historic German and French trained on noisier data - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
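A minimal sketch of how these two statistics can be computed for one corpus (one plausible definition: fertility is subwords per whitespace-separated word, and the unknown portion is the share of `[UNK]` subwords; the file path is a placeholder):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")

total_words, total_subwords, total_unks = 0, 0, 0

# "ner_corpus.txt" is a placeholder: one sentence per line, tokens separated by spaces.
with open("ner_corpus.txt", encoding="utf-8") as f:
    for line in f:
        for word in line.split():
            subwords = tokenizer.tokenize(word)
            total_words += 1
            total_subwords += len(subwords)
            total_unks += subwords.count(tokenizer.unk_token)

print("subword fertility:", total_subwords / total_words)
print("unknown portion:", total_unks / total_subwords)
```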
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/electra-base-turkish-cased-v0-discriminator | 6604b2635a2cadb5e83025612d444bd62667855c | 2020-04-24T15:57:20.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | dbmdz | null | dbmdz/electra-base-turkish-cased-v0-discriminator | 4 | null | transformers | 18,480 | Entry not found |
debatelab/cript-large | d4a616673f7d7c3fe9528b3597f64ada9eefb7cf | 2021-05-21T15:31:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"arxiv:2009.07185",
"transformers"
] | text-generation | false | debatelab | null | debatelab/cript-large | 4 | null | transformers | 18,481 | ---
language: en
tags:
- gpt2
---
# CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer)
Large version of the trained model (`SYL01-2020-10-24-72K/gpt2-large-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html)
* [GitHub repo](https://github.com/debatelab/aacorpus)
* [paper](https://arxiv.org/pdf/2009.07185) |
debatelab/cript | aad306c8713386aeaba3fe2ccb499132fdafd423 | 2021-05-21T15:40:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"arxiv:2009.07185",
"transformers"
] | text-generation | false | debatelab | null | debatelab/cript | 4 | null | transformers | 18,482 | ---
language: en
tags:
- gpt2
---
# CRiPT Model (Critical Thinking Intermediarily Pretrained Transformer)
Small version of the trained model (`SYL01-2020-10-24-72K/gpt2-small-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html)
* [GitHub repo](https://github.com/debatelab/aacorpus)
* [paper](https://arxiv.org/pdf/2009.07185)
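A minimal generation sketch using the standard `transformers` causal-LM API (the prompt and decoding settings are illustrative assumptions, not taken from the paper):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("debatelab/cript")
model = AutoModelForCausalLM.from_pretrained("debatelab/cript")

# Illustrative prompt; the model was intermediarily pre-trained on argumentative text.
prompt = "All philosophers are mortal. Socrates is a philosopher. Therefore,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```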
|
dee4hf/autonlp-shajBERT-38639804 | 2787062e6d09b96c51c546d94ffa3e6bcf18eac6 | 2021-12-04T18:53:26.000Z | [
"pytorch",
"albert",
"text-classification",
"unk",
"dataset:dee4hf/autonlp-data-shajBERT",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | dee4hf | null | dee4hf/autonlp-shajBERT-38639804 | 4 | 1 | transformers | 18,483 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- dee4hf/autonlp-data-shajBERT
co2_eq_emissions: 11.98841452241473
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38639804
- CO2 Emissions (in grams): 11.98841452241473
## Validation Metrics
- Loss: 0.421400249004364
- Accuracy: 0.86783988957902
- Macro F1: 0.8669477050676501
- Micro F1: 0.86783988957902
- Weighted F1: 0.86694770506765
- Macro Precision: 0.867606300132228
- Micro Precision: 0.86783988957902
- Weighted Precision: 0.8676063001322278
- Macro Recall: 0.86783988957902
- Micro Recall: 0.86783988957902
- Weighted Recall: 0.86783988957902
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dee4hf/autonlp-shajBERT-38639804
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
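# Not part of the original card: a hedged sketch for turning the logits into a label.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
print(model.config.id2label[int(probs.argmax())], float(probs.max()))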
``` |
deepdml/wav2vec2-base-timit-demo-colab | e508b33e9c9be7bf8b7202f4ad8d40dcdbfdd8bd | 2022-01-03T15:04:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | deepdml | null | deepdml/wav2vec2-base-timit-demo-colab | 4 | null | transformers | 18,484 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset (presumably TIMIT, given the model name).
It achieves the following results on the evaluation set:
- Loss: 0.4798
- Wer: 0.3474
## Model description
More information needed
## Intended uses & limitations
More information needed
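Pending fuller documentation, a minimal inference sketch (not from the original card; the audio path is a placeholder and 16 kHz mono audio is assumed):

```python
from transformers import pipeline

# Hedged sketch; replace the path with a real 16 kHz recording.
asr = pipeline("automatic-speech-recognition", model="deepdml/wav2vec2-base-timit-demo-colab")
print(asr("path/to/audio.wav")["text"])
```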
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5229 | 4.0 | 500 | 1.6557 | 1.0422 |
| 0.6618 | 8.0 | 1000 | 0.4420 | 0.4469 |
| 0.2211 | 12.0 | 1500 | 0.4705 | 0.4002 |
| 0.1281 | 16.0 | 2000 | 0.4347 | 0.3688 |
| 0.0868 | 20.0 | 2500 | 0.4653 | 0.3590 |
| 0.062 | 24.0 | 3000 | 0.4747 | 0.3519 |
| 0.0472 | 28.0 | 3500 | 0.4798 | 0.3474 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
dehio/german-qg-t5-drink600 | aa192bef8376c1d72889bd6789d0e8585fe3a553 | 2022-01-19T16:38:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"de",
"dataset:deepset/germanquad",
"transformers",
"question generation",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | dehio | null | dehio/german-qg-t5-drink600 | 4 | null | transformers | 18,485 | ---
license: mit
widget:
- text: "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
language:
- de
tags:
- question generation
datasets:
- deepset/germanquad
model-index:
- name: german-qg-t5-drink600
results: []
---
# german-qg-t5-drink600
This model is fine-tuned for question generation in German. The expected answer must be highlighted with a <hl> token. It is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad) and further pre-trained on drink-related questions.
## Task example
#### Input
generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung,
die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.
#### Expected Question
Zu welchen Gelegenheiten passt der Monk Sour gut?
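## Usage (sketch)
A minimal generation sketch using the standard `transformers` seq2seq API (not part of the original card; decoding settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("dehio/german-qg-t5-drink600")
model = AutoModelForSeq2SeqLM.from_pretrained("dehio/german-qg-t5-drink600")

# The input reuses the task example from above; the answer span is wrapped in <hl> tokens.
text = (
    "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, "
    "die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```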
## Model description
The model is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad), which was pre-trained on [GermanQuAD](https://www.deepset.ai/germanquad). We further pre-trained it on questions annotated on drink recipes from [Mixology](https://mixology.eu/) ("drink600").
We have not yet open-sourced the dataset, since we do not own the copyright on the source material.
## Training and evaluation data
The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).
## Evaluation
It achieves a **BLEU-4 score of 29.80** on the drink600 test set (n=120) and **11.30** on the GermanQuAD test set.
Thus, fine-tuning on drink600 did not affect performance on GermanQuAD.
In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of **10.76** on the drink600 test set.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
demdecuong/stroke_sup_simcse | 28e6817e96f657bda6f8f9e31db2e0d31b9cf55e | 2021-06-01T17:17:14.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"transformers"
] | feature-extraction | false | demdecuong | null | demdecuong/stroke_sup_simcse | 4 | null | transformers | 18,486 | This is finetune version of [SimCSE: Simple Contrastive Learning of Sentence Embeddings](https://arxiv.org/abs/2104.08821)
- Train supervised on 100K triplet samples samples related to stroke domain from : stroke books, quora medical, quora's stroke, quora's general and human annotates.
- Positive sentences are generated by paraphrasing and back-translate.
- Negative sentences are randomly selected in general domain.
### Extract sentence representation
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("demdecuong/stroke_sup_simcse")
model = AutoModel.from_pretrained("demdecuong/stroke_sup_simcse")
text = "What are disease related to red stroke's causes?"
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)[1]
```
### Build up embedding for database
```
import torch

# Candidate sentences to pre-embed
database = [
'What is the daily checklist for stroke returning home',
'What are some tips for stroke adapt new life',
'What should I consider when using nursing-home care'
]
embedding = torch.zeros((len(database),768))
for i in range(len(database)):
inputs = tokenizer(database[i], return_tensors="pt")
outputs = model(**inputs)[1]
embedding[i] = outputs
print(embedding.shape)
```
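### Query the database (sketch)
Continuing from the snippets above, a minimal retrieval sketch (assumption: cosine similarity over the pooled embeddings; the query text is illustrative):
```
import torch.nn.functional as F

query = "What should I prepare when a stroke patient returns home?"
inputs = tokenizer(query, return_tensors="pt")
query_embedding = model(**inputs)[1]
# Rank database entries by cosine similarity to the query embedding
scores = F.cosine_similarity(query_embedding, embedding)
best = int(scores.argmax())
print(database[best], float(scores[best]))
```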
### Result
On our company's PoC project, the test set contains human-generated positive/negative pairs of matching questions related to stroke.
- SimCSE supervised + 100k: trained on 100K triplet samples from the medical, stroke and general domains
- SimCSE supervised + 42k: trained on 42K triplet samples from the medical and stroke domains
| Model | Top-1 Accuracy |
| ------------- | ------------- |
| SimCSE supervised (author) | 75.83 |
| SimCSE unsupervised (ours) | 76.66 |
| SimCSE supervised + 100k (ours) | 73.33 |
| SimCSE supervised + 42k (ours) | 75.83 | |
devkushal75/medtextclassifier | 09a0a765177974fbf48ab5fe18988595b77d29c4 | 2021-09-26T10:26:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | devkushal75 | null | devkushal75/medtextclassifier | 4 | null | transformers | 18,487 | Entry not found |
devtrent/dummy-model | 34342b68fcc7760d09cbd5b98a92d22f6b07e882 | 2021-07-07T05:58:51.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | devtrent | null | devtrent/dummy-model | 4 | null | transformers | 18,488 | # Dummy Model
This be a dummmmmy |
diegozs97/finetuned-chemprot-seed-0-0k | 670789e2309381e3694790e9e7baa8e6262a78fe | 2021-12-07T05:07:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-0k | 4 | null | transformers | 18,489 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-100k | de6b8794bf9939a39bed4c51efd3f4f2600f4e52 | 2021-12-07T05:10:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-100k | 4 | null | transformers | 18,490 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-1800k | b86f0051a28babea2682752f53f631a3d8c4ee68 | 2021-12-07T05:15:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-1800k | 4 | null | transformers | 18,491 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-200k | 60a001f6b0c621b393091cf1c99813a5ba4a4edb | 2021-12-07T05:11:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-200k | 4 | null | transformers | 18,492 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-20k | 681f726b8de9e91b98c5d103de81c51f0034fcdf | 2021-12-07T05:08:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-20k | 4 | null | transformers | 18,493 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-400k | 5481235f53e73cb1153621a785902e4ddd0a4ceb | 2021-12-07T05:12:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-400k | 4 | null | transformers | 18,494 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-60k | 846859dfc21ce2dd0631ebc704843aab21756ea0 | 2021-12-07T05:09:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-60k | 4 | null | transformers | 18,495 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-700k | 6fdbd39de344d14e27c4330053cda334a4309b47 | 2021-12-07T05:13:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-700k | 4 | null | transformers | 18,496 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-1500k | edf0c62140b32de1cde47d96c6009df846b8689a | 2021-12-07T05:24:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-1500k | 4 | null | transformers | 18,497 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-1800k | d77f00ee12b3087acd2a60d636ac8c5232fcb767 | 2021-12-07T05:25:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-1800k | 4 | null | transformers | 18,498 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-200k | ddd61a1812234b610a7fc89bef81f63996988bd5 | 2021-12-07T05:21:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-200k | 4 | null | transformers | 18,499 | Entry not found |