| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
BigSalmon/Points4 | BigSalmon | 2022-04-02T03:04:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-02T02:57:31Z | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points4")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Points4")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
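A minimal generation sketch (not from the original card): it reuses the tokenizer and model loaded above and follows the prompt format just shown; the sampling settings are illustrative assumptions.
```python
# Hypothetical usage: build a points-to-text prompt and sample a completion.
prompt = """- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text:"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```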
It should also be able to do everything that this model can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
It converts keywords into a sentence or sentences. |
TheJarmanitor/fatima-fellowship-model | TheJarmanitor | 2022-04-02T03:03:42Z | 0 | 0 | null | [
"region:us"
] | null | 2022-04-02T03:01:06Z | Model and notebook for the Fatima Fellowship 2022 Coding Challenge
|
vicl/canine-s-finetuned-stsb | vicl | 2022-04-01T23:25:04Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"canine",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-01T19:47:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: canine-s-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8397182061195433
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-stsb
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7223
- Pearson: 0.8397
- Spearmanr: 0.8397
## Model description
More information needed
## Intended uses & limitations
More information needed
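Since the card gives no usage example, here is a minimal inference sketch (an assumption, not from the card). STS-B is a regression task, so the classification head emits a single similarity score:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("vicl/canine-s-finetuned-stsb")
model = AutoModelForSequenceClassification.from_pretrained("vicl/canine-s-finetuned-stsb")

inputs = tokenizer("A man is playing a guitar.",
                   "Someone is playing an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    # Single regression logit: higher means more semantically similar (STS-B scale is 0-5)
    score = model(**inputs).logits.squeeze().item()
print(score)
```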
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.7938 | 0.8083 | 0.8077 |
| 1.278 | 2.0 | 720 | 0.7349 | 0.8322 | 0.8305 |
| 0.6765 | 3.0 | 1080 | 0.7075 | 0.8374 | 0.8366 |
| 0.6765 | 4.0 | 1440 | 0.7586 | 0.8360 | 0.8376 |
| 0.4629 | 5.0 | 1800 | 0.7223 | 0.8397 | 0.8397 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/chapocheck | huggingtweets | 2022-04-01T22:07:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-01T22:06:55Z | ---
language: en
thumbnail: http://www.huggingtweets.com/chapocheck/1648850858747/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1191821996759404547/HY5C5aOW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cum Town (mostly Nick Mullen) quotes</div>
<div style="text-align: center; font-size: 14px;">@chapocheck</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cum Town (mostly Nick Mullen) quotes.
| Data | Cum Town (mostly Nick Mullen) quotes |
| --- | --- |
| Tweets downloaded | 1264 |
| Retweets | 90 |
| Short tweets | 75 |
| Tweets kept | 1099 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/x77h239f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chapocheck's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18r1isa5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18r1isa5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chapocheck')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lgris/bp400-xlsr | lgris | 2022-04-01T20:31:02Z | 91 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"hf-asr-leaderboard",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"arxiv:2107.11414",
"arxiv:2012.03411",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp400-xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 14.0
license: apache-2.0
---
# bp400-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
**Paper:** https://arxiv.org/abs/2107.11414
This is a demonstration of a fine-tuned Wav2vec 2.0 model for Brazilian Portuguese, trained on the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a large open dataset in many different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, for a total of 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. MLS is based on public-domain audiobook recordings such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The Portuguese set [used in this work](http://www.openslr.org/94/) (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) aged 17 to 59, with metadata such as place of birth, age, gender, education, and occupation.
- [VoxForge](http://www.voxforge.org/): a project that aims to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and test respectively. We also built test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 93.9h | -- | 5.4h |
| Common Voice | 37.6h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h |
| SID | 5.0h | -- | 1.0h |
| VoxForge | 2.8h | -- | 0.1h |
| Total | 437.2h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1eRUExXRF2XK8JxUjIzbLBkLa5wuR3nig?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_400 (demonstration below) | 0.052 | 0.140 | 0.074 | 0.117 | 0.121 | 0.245 | 0.118 | 0.124 |
| bp\_400 + 3-gram | 0.033 | 0.095 | 0.046 | 0.123 | 0.112 | 0.212 | 0.123 | 0.106 |
| bp\_400 + 4-gram (demonstration below) | **0.030** | 0.096 | 0.043 | **0.106** | 0.118 | 0.229 | **0.117** | **0.105** |
| bp\_400 + 5-gram | 0.033 | 0.094 | 0.043 | 0.123 | **0.111** | **0.210** | 0.123 | **0.105** |
| bp\_400 + Transf. | 0.032 | **0.092** | **0.036** | 0.130 | 0.115 | 0.215 | 0.125 | 0.106 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|alguém sabe a que horas começa o jantar | alguém sabe a que horas **começo** jantar |
|lila covas ainda não sabe o que vai fazer no fundo|**lilacovas** ainda não sabe o que vai fazer no fundo|
|que tal um pouco desse bom spaghetti|**quetá** um pouco **deste** bom **ispaguete**|
|hong kong em cantonês significa porto perfumado|**rongkong** **en** **cantones** significa porto perfumado|
|vamos hackear esse problema|vamos **rackar** esse problema|
|apenas a poucos metros há uma estação de ônibus|apenas **ha** poucos metros **á** uma estação de ônibus|
|relâmpago e trovão sempre andam juntos|**relampagotrevão** sempre andam juntos|
## Demonstration
```python
MODEL_NAME = "lgris/bp400-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
    # Load the audio file and flatten it to a 1-D numpy array
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = speech.squeeze(0).numpy()
    batch["sampling_rate"] = 16_000
    # Normalize the transcription: strip punctuation, lowercase, fix apostrophes
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    batch["target"] = batch["sentence"]
    return batch
```
```python
def calc_metrics(truths, hypos):
    wers = []
    mers = []
    wils = []
    for t, h in zip(truths, hypos):
        try:
            wers.append(jiwer.wer(t, h))
            mers.append(jiwer.mer(t, h))
            wils.append(jiwer.wil(t, h))
        except ValueError:  # jiwer raises on empty strings
            pass
    wer = sum(wers) / len(wers)
    mer = sum(mers) / len(mers)
    wil = sum(wils) / len(wils)
    return wer, mer, wil
```
```python
def load_data(dataset):
    data_files = {'test': f'{dataset}/test.csv'}
    dataset = load_dataset('csv', data_files=data_files)["test"]
    return dataset.map(map_to_array)
```
### Model
```python
class STT:
    def __init__(self,
                 model_name,
                 device='cuda' if torch.cuda.is_available() else 'cpu',
                 lm=None):
        self.model_name = model_name
        self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
        self.processor = Wav2Vec2Processor.from_pretrained(model_name)
        self.vocab_dict = self.processor.tokenizer.get_vocab()
        # Sort the vocabulary by token id so it lines up with the CTC output layer
        self.sorted_dict = {
            k.lower(): v for k, v in sorted(self.vocab_dict.items(),
                                            key=lambda item: item[1])
        }
        self.device = device
        self.lm = lm
        if self.lm:
            self.lm_decoder = build_ctcdecoder(
                list(self.sorted_dict.keys()),
                self.lm
            )

    def batch_predict(self, batch):
        features = self.processor(batch["speech"],
                                  sampling_rate=batch["sampling_rate"][0],
                                  padding=True,
                                  return_tensors="pt")
        input_values = features.input_values.to(self.device)
        attention_mask = features.attention_mask.to(self.device)
        with torch.no_grad():
            logits = self.model(input_values, attention_mask=attention_mask).logits
        if self.lm:
            # Beam-search decoding with the external language model
            logits = logits.cpu().numpy()
            batch["predicted"] = []
            for sample_logits in logits:
                batch["predicted"].append(self.lm_decoder.decode(sample_logits))
        else:
            # Plain greedy CTC decoding
            pred_ids = torch.argmax(logits, dim=-1)
            batch["predicted"] = self.processor.batch_decode(pred_ids)
        return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05159104708285062
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.14031426198658084
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07432133838383838
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.11678793514817509
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.12152357273433984
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24666815906766504
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11873106060606062
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.030266462438593742
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09577710237417715
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.043617424242424235
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.10642133314350002
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.11839021001747055
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.22929952467810416
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11716314935064935
|
birgermoell/psst-libri960_big | birgermoell | 2022-04-01T20:17:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-01T19:05:31Z | `pssteval` ASR metrics for the `valid` split: FER 9.8%, PER 20.9% |
juaner/distilbert-base-uncased-finetuned-cola | juaner | 2022-04-01T18:20:42Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-01T17:59:52Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: juaner/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juaner/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1909
- Validation Loss: 0.5553
- Train Matthews Correlation: 0.5279
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
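As a hedged usage sketch (not part of the generated card), the checkpoint can be exercised through the `text-classification` pipeline; the label names in the output depend on the exported config:
```python
from transformers import pipeline

# The checkpoint is TensorFlow-based; the pipeline picks the framework automatically.
classifier = pipeline("text-classification",
                      model="juaner/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the student."))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```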
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5191 | 0.4491 | 0.4718 | 0 |
| 0.3270 | 0.4571 | 0.5196 | 1 |
| 0.1909 | 0.5553 | 0.5279 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
FrankCorrigan/results | FrankCorrigan | 2022-04-01T18:15:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-01T01:41:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [linydub/bart-large-samsum](https://huggingface.co/linydub/bart-large-samsum) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0158
## Model description
More information needed
## Intended uses & limitations
More information needed
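A hedged usage sketch, assumed rather than taken from the card: since this is a BART checkpoint fine-tuned on SamSum dialogue summarization, it can be driven through the `summarization` pipeline.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="FrankCorrigan/results")

# Illustrative dialogue in the SamSum style
dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes! Noon at the usual place?
Anna: Perfect, see you there."""
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```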
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 0.9563 |
| No log | 2.0 | 2 | 0.9877 |
| No log | 3.0 | 3 | 1.0158 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
FrankCorrigan/test-model | FrankCorrigan | 2022-04-01T17:54:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-04-01T01:46:45Z | ---
license: apache-2.0
---
|
McGill-NLP/bart-qg-nq-checkpoint | McGill-NLP | 2022-04-01T17:35:04Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.13461",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-04-01T16:32:49Z | ---
license: cc-by-4.0
---
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation** by treating the long answer as input and the question as output.
## Details of BART
The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract:
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
## Details of the downstream task (QG) - Dataset 📚 🧐
Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| NaturalQuestions | train | 97650 |
| NaturalQuestions | valid | 10850 |
## Model fine-tuning 🏋️
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py)
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, BartTokenizer

# Load the tokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')

# Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")
```
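A hedged generation example to complement the snippet above; the passage and generation settings are illustrative assumptions, not from the original repo:
```python
passage = "The Eiffel Tower was completed in 1889 and remains one of the most visited monuments in the world."
inputs = tokenizer(passage, return_tensors="pt")

# Beam search tends to work well for short question generation (assumed settings)
outputs = model.generate(inputs["input_ids"], max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```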
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection | bitsanlp | 2022-04-01T17:17:55Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-01T16:12:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-distilbert-fakenews-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilbert-fakenews-detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
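No usage section is provided, so here is a minimal inference sketch (assumed); the card does not document which label id means fake and which means real:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Scientists discover water on the moon.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # label id -> class mapping is not documented in the card
```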
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0125 | 1.0 | 978 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 2.0 | 1956 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 3.0 | 2934 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ahmedzaky91/Fatima-Fake_news_calssifier | ahmedzaky91 | 2022-04-01T16:54:24Z | 0 | 0 | null | [
"region:us"
] | null | 2022-04-01T00:00:39Z | ## This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on Fake and real dataset on kaggle
## The following hyperparameters were used during training:
learning_rate: 5e-05
train_batch_size: 8
num_epochs: 2
|
ydshieh/bert-base-uncased-yelp-polarity | ydshieh | 2022-04-01T15:20:05Z | 103 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-01T15:17:35Z | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 16, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9699473684210527, as measured by the
eval set accuracy, found after 4 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
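A minimal inference sketch (assumed usage, not from the card); verify the label order against the model config before relying on it:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity")
model = TFAutoModelForSequenceClassification.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity")

inputs = tokenizer("The food was great and the staff were friendly.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # assumed order for yelp_polarity: [negative, positive]
```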
|
avialfont/ner-dummy-model | avialfont | 2022-04-01T14:59:22Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-04-01T10:59:27Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ner-dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ner-dummy-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
somosnlp-hackathon-2022/es_tweets_laboral | somosnlp-hackathon-2022 | 2022-04-01T14:50:40Z | 1 | 1 | spacy | [
"spacy",
"text-classification",
"es",
"region:us"
] | text-classification | 2022-04-01T13:48:09Z | ---
tags:
- spacy
- text-classification
language: es
widget:
- text: "todos merecemos un salario justo"
---
## es_tweets_laboral ##
Model created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
|
eren23/pneumonia_test_attempt | eren23 | 2022-04-01T14:41:01Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-19T16:31:28Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia_test_attempt
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9783163070678711
---
# pneumonia-bielefeld-dl-course
This repository contains a model for pneumonia prediction and was prepared as homework for the Bielefeld University Deep Learning course.
The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning models with Hugging Face and PyTorch Lightning on another dataset. |
notexist/ttt | notexist | 2022-04-01T13:16:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-01T12:45:30Z | ---
license: apache-2.0
---
|
bmichele/poetry-generation-firstline-mbart-ws-fi-sorted | bmichele | 2022-04-01T13:03:49Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2022-04-01T12:58:00Z | TODO: This is still a demo model, the file does not match with the model card!!!
# poetry-generation-firstline-mbart-ws-fi-sorted
* `firstline`: generates the first poem line from keywords
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `ws`: trained on Wikisource data
* `fi`: Finnish language
* `sorted`: the order of input keywords matters when generating candidates |
bharatR/up_down | bharatR | 2022-04-01T12:38:05Z | 0 | 0 | null | [
"classification",
"en",
"dataset:cifar10-custom",
"region:us"
] | null | 2022-04-01T12:19:00Z | ---
language: en
tags:
- classification
datasets:
- cifar10-custom
metrics:
- accuracy
---
# Up-Down Classification
This repo has the weights of a ResNet-18 model trained on custom CIFAR-10 data in which some images are flipped upside down; the goal is to predict the orientation of each image (a 0/1 classification task). |
birgermoell/psst-base-rep | birgermoell | 2022-04-01T12:02:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-01T07:58:20Z | The model is a reproduction of the baseline trained with Wav2vec2-small on PSST
pssteval INFO: ASR metrics for split `valid` FER: 10.4% PER: 23.1% |
bmichele/poetry-generation-nextline-mbart-ws-fi-single | bmichele | 2022-04-01T11:51:32Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2022-04-01T11:35:07Z | # poetry-generation-nextline-mbart-ws-fi-single
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `ws`: trained on Wikisource data
* `fi`: Finnish language
* `single`: uses only the last poem line as input for generation |
z5ying/distilgpt2-finetuned-wikitext2 | z5ying | 2022-04-01T10:47:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-01T07:10:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [z5ying/distilgpt2-finetuned-wikitext2](https://huggingface.co/z5ying/distilgpt2-finetuned-wikitext2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 118 | 3.0306 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
blacktree/distilbert-base-uncased-finetuned-cola | blacktree | 2022-04-01T09:00:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-31T15:48:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5285676961321106
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4883
- Matthews Correlation: 0.5286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5269 | 1.0 | 535 | 0.5197 | 0.4187 |
| 0.3477 | 2.0 | 1070 | 0.4883 | 0.5286 |
| 0.2333 | 3.0 | 1605 | 0.6530 | 0.5079 |
| 0.17 | 4.0 | 2140 | 0.7567 | 0.5272 |
| 0.1271 | 5.0 | 2675 | 0.8887 | 0.5259 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
Basedino/GPT-RO | Basedino | 2022-04-01T07:47:41Z | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | null | 2022-03-31T08:19:30Z | ---
license: gpl-3.0
---
I made this model because I had nothing to do. It's GPT-2 (124M) fine-tuned on a bunch of Italian recipes.
I made it using aitextgen, so you can use that to play with the model easily.
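A hedged usage sketch, assuming the aitextgen loading path mentioned above works for this repository; the prompt is illustrative:
```python
from aitextgen import aitextgen

# Load the checkpoint from the Hugging Face Hub (assumed to be supported for this repo)
ai = aitextgen(model="Basedino/GPT-RO")

# Generate one sample from an illustrative Italian prompt
ai.generate(n=1, prompt="Ricetta per la pasta", max_length=100)
```
|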
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10 | yy642 | 2022-04-01T06:04:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-31T23:51:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-rte-wnli-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-rte-wnli-10
This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5876
- Accuracy: 0.9206
## Model description
More information needed
## Intended uses & limitations
More information needed
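No usage example is given, so here is a minimal inference sketch (assumed); the number and meaning of the labels are not documented in the card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is sleeping on a couch.",
                   "A person is resting.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # label meanings are not documented in the card
```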
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0641 | 1.0 | 16558 | 0.4528 | 0.9138 |
| 0.0479 | 2.0 | 33116 | 0.5116 | 0.9153 |
| 0.0363 | 3.0 | 49674 | 0.5660 | 0.9138 |
| 0.0244 | 4.0 | 66232 | 0.5876 | 0.9206 |
| 0.0145 | 5.0 | 82790 | 0.6156 | 0.9192 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
|
z5ying/mbart-large-cc25-finetuned-source-to-target | z5ying | 2022-04-01T03:43:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-07T18:25:31Z | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-cc25-finetuned-source-to-target
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-source-to-target
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
dchung117/distilbert-base-uncased-finetuned-squad-d5716d28 | dchung117 | 2022-04-01T02:02:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-04-01T01:51:41Z | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
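The card has no usage section, so here is a hedged inference sketch (assumed) for extractive question answering with this checkpoint:
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="dchung117/distilbert-base-uncased-finetuned-squad-d5716d28")
result = qa(question="What architecture does the student use?",
            context="The student model is a DistilBERT fine-tuned on SQuAD v1.1 with a BERT teacher.")
print(result["answer"])
```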
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Mr-Wick/xlnet-base-cased | Mr-Wick | 2022-04-01T01:31:59Z | 3 | 0 | transformers | [
"transformers",
"tf",
"xlnet",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-26T12:52:07Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: xlnet-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16530, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.12.0
|
emre/distilgpt2-pretrained-tr-10e | emre | 2022-03-31T22:10:43Z | 4 | 0 | transformers | [
"transformers",
"jax",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-31T21:59:43Z | ---
license: apache-2.0
---
|
arjundd/dosma-models | arjundd | 2022-03-31T21:39:54Z | 0 | 0 | null | [
"mri",
"knee",
"segmentation",
"en",
"region:us"
] | null | 2022-03-31T18:30:03Z | ---
language: en
tags:
- mri
- knee
- segmentation
---
# DOSMA models
These models are those that are made publicly available in the [DOSMA](https://github.com/ad12/DOSMA).
More information on these models can be found in the [documentation](https://dosma.readthedocs.io/en/latest/models.html).
## Citation
If you use any of these models, please cite the corresponding reference for the model in addition to the DOSMA reference below:
```
@inproceedings{desai2019dosma,
title={DOSMA: A deep-learning, open-source framework for musculoskeletal MRI analysis},
author={Desai, Arjun D and Barbieri, Marco and Mazzoli, Valentina and Rubin, Elka and Black, Marianne S and Watkins, Lauren E and Gold, Garry E and Hargreaves, Brian A and Chaudhari, Akshay S},
booktitle={Proc 27th Annual Meeting ISMRM, Montreal},
pages={1135},
year={2019}
}
``` |
israel/fake-news-classification | israel | 2022-03-31T21:03:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-31T16:35:48Z | ---
license: mit
---
# Fake and real news classification task
Model: [DistilRoBERTa base model](https://huggingface.co/distilroberta-base)
Dataset: [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
|
magitz/distilbert-base-uncased-finetuned-emotion | magitz | 2022-03-31T20:48:43Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-31T20:41:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9267965474109292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.9265
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 |
| 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
WENGSYX/Deberta-Chinese-Large | WENGSYX | 2022-03-31T20:08:59Z | 56 | 16 | transformers | [
"transformers",
"pytorch",
"deberta",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # Deberta-Chinese
This project pre-trains Microsoft's open-source DeBERTa model on Chinese text; we release it to give the community more pre-trained language models to choose from.
The model was pre-trained on the WuDaoCorpora corpus, a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research on the "WuDao" large-model project.
Pre-training used whole-word masking (WWM), n-gram MLM, and related objectives.
| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Time | Optimizer |
| --------------------- | ------ | --------- | ------ | ------ | ---- | ------ |
| Deberta-Chinese-Large | 1e-5 | 512 | 2×3090 | 200 GB | 14 days | AdamW |
### Loading and usage
Built on huggingface-transformers:
```
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")
```
#### Note: use BertTokenizer to load the Chinese vocabulary.
|
novarac23/distilbert-base-uncased-finetuned-emotion | novarac23 | 2022-03-31T19:39:15Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-31T19:05:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251919899321654
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2234
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8213 | 1.0 | 250 | 0.3210 | 0.9025 | 0.8989 |
| 0.2463 | 2.0 | 500 | 0.2234 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
deepspeechvision/wav2vec2_hindi_asr | deepspeechvision | 2022-03-31T18:03:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-31T17:22:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_hindi_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_hindi_asr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
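A hedged usage sketch (assumed, not from the card): transcribing a 16 kHz Hindi recording through the ASR pipeline; the file path is hypothetical.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="deepspeechvision/wav2vec2_hindi_asr")
print(asr("sample_hindi.wav"))  # path to a 16 kHz mono WAV file (hypothetical)
```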
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
israfelsr/UpsideDownClassifier | israfelsr | 2022-03-31T17:06:27Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-31T15:41:33Z | # UpsideDownClassifier
This classifier was trained on the [auto-cats-and-dogs](https://huggingface.co/datasets/nateraw/auto-cats-and-dogs) dataset for 5 epochs using a pretrained ResNet-18.
The configuration for the model was:
```
config = {
    "batch_size": 64,
    "num_epochs": 5,
    "betas": (0.9, 0.999),
    "eps": 1e-6,
    "lr": 8e-3,  # the original listing repeated "lr" (0.005, then 8e-3); the later value wins in a Python dict
    "do_eval": True,
}
```
## Training Plots
The figures below show the training curves for accuracy and loss on both the training and validation sets.
### Accuracy Plot

### Loss Plot

## Some Results
Evaluating on the Test Set, we obtain:
- Accuracy = 0.9696
A batch with some misclassifications can be seen in the picture below.
 |
eren23/pneumonia-bielefeld-dl-course | eren23 | 2022-03-31T15:55:27Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-27T12:17:21Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia-bielefeld-dl-course
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8456632494926453
---
# pneumonia-bielefeld-dl-course
This repository contains a model for pneumonia prediction, prepared as homework for the Bielefeld University Deep Learning course.
The code used for this implementation comes largely from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning models with Hugging Face and PyTorch Lightning that was originally built for other datasets. A minimal inference sketch is shown below.
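A hedged inference sketch, assuming the checkpoint behaves like a standard ViT image classifier on the Hub (the image file name is a placeholder):
```python
from transformers import pipeline

# hypothetical usage; the repo id is taken from this card's name
classifier = pipeline("image-classification", model="eren23/pneumonia-bielefeld-dl-course")
print(classifier("chest_xray_example.png"))  # accepts a file path, URL or PIL.Image
```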
|
huggingtweets/timdingmanlive | huggingtweets | 2022-03-31T14:30:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-31T14:26:57Z | ---
language: en
thumbnail: http://www.huggingtweets.com/timdingmanlive/1648736999131/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2844974270/7bb6450b90b65f8712d9433b8d5e1971_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tim Dingman</div>
<div style="text-align: center; font-size: 14px;">@timdingmanlive</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tim Dingman.
| Data | Tim Dingman |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 555 |
| Short tweets | 138 |
| Tweets kept | 2547 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7yvdv2z7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timdingmanlive's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/timdingmanlive')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/youtube | huggingtweets | 2022-03-31T14:06:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-31T14:05:50Z | ---
language: en
thumbnail: http://www.huggingtweets.com/youtube/1648735587597/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1427292844612595720/RC1YSvuT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">YouTube</div>
<div style="text-align: center; font-size: 14px;">@youtube</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from YouTube.
| Data | YouTube |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 23 |
| Short tweets | 104 |
| Tweets kept | 3123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dx34obn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youtube's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p527w5q3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/youtube')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Edresson/wav2vec2-large-xlsr-coraa-portuguese | Edresson | 2022-03-31T13:28:43Z | 632 | 15 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"hf-asr-leaderboard",
"PyTorch",
"dataset:CORAA",
"arxiv:2110.15731",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language: pt
datasets:
- CORAA
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- hf-asr-leaderboard
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: CORAA
type: CORAA
args: pt
metrics:
- name: Test CORAA WER
type: wer
value: 25.26
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER on Common Voice 7
type: wer
value: 20.08
---
# Wav2vec 2.0 trained with CORAA Portuguese Dataset
This is a demonstration of a Wav2vec 2.0 model fine-tuned for Portuguese on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```
# Results
For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)
# Example test with Common Voice Dataset
```python
import re

import torchaudio
from datasets import load_dataset

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    # load the audio, resample to 16 kHz and normalise the transcription
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    # chars_to_ignore_regex (punctuation to strip) is assumed to be defined beforehand
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
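The evaluation loop below also relies on a `map_to_pred` function and a `wer` metric that the card does not define. A hedged sketch of what they might look like (the use of `Wav2Vec2Processor` for this checkpoint and the exact batching are assumptions):
```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

# hypothetical helpers, not part of the original card
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")

def map_to_pred(batch):
    # "model" is the Wav2Vec2ForCTC instance loaded in the "Use this model" section above
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```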
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
Visual-Attention-Network/van-large | Visual-Attention-Network | 2022-03-31T12:45:46Z | 122 | 1 | transformers | [
"transformers",
"pytorch",
"van",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-09T18:03:37Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-large")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-large")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van). |
Visual-Attention-Network/van-base | Visual-Attention-Network | 2022-03-31T12:45:44Z | 185 | 1 | transformers | [
"transformers",
"pytorch",
"van",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-16T15:06:37Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van). |
mustapha/flipped-image-ViT | mustapha | 2022-03-31T12:30:19Z | 61 | 2 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-30T21:57:42Z | Hello world,
This model has been created in the context of the `Fatima Fellowship Programme`. The model was trained on the CIFAR-10 dataset and reaches a good final accuracy of around 98%.
This model determines whether an image is flipped or not. A minimal inference sketch is shown below.
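The sketch assumes the checkpoint is a standard ViT image classifier; the image path is a placeholder:
```python
from transformers import pipeline

# hypothetical usage based on the description above
detector = pipeline("image-classification", model="mustapha/flipped-image-ViT")
print(detector("example.jpg"))  # the returned labels indicate whether the image is flipped
```
 |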
Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test | Khalsuu | 2022-03-31T12:09:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-31T08:45:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 2nd-wav2vec2-l-xls-r-300m-turkish-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2nd-wav2vec2-l-xls-r-300m-turkish-test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6019
- Wer: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0522 | 3.67 | 400 | 0.7773 | 0.7296 |
| 0.5369 | 7.34 | 800 | 0.6282 | 0.5888 |
| 0.276 | 11.01 | 1200 | 0.5998 | 0.5330 |
| 0.1725 | 14.68 | 1600 | 0.5859 | 0.4908 |
| 0.1177 | 18.35 | 2000 | 0.6019 | 0.4444 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Neulvo/bert-finetuned-squad | Neulvo | 2022-03-31T12:08:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-31T10:54:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
YiTian/wav2vec2-common_voice-tr-demo | YiTian | 2022-03-31T11:40:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-31T09:39:08Z | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9841
- Wer: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 7.14 | 100 | 3.6689 | 1.0 |
| No log | 14.29 | 200 | 3.0280 | 0.9999 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.11.6
|
frtna/jwt300_mt-Italian-to-Spanish_transformers | frtna | 2022-03-31T11:18:09Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:new_dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-29T09:49:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- new_dataset
metrics:
- sacrebleu
model-index:
- name: jwt300_mt-Italian-to-Spanish_transformers
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: new_dataset
type: new_dataset
args: jwt300_mt
metrics:
- name: Sacrebleu
type: sacrebleu
value: 0.9057
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jwt300_mt-Italian-to-Spanish_transformers
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the new_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4425
- Sacrebleu: 0.9057
- Gen Len: 18.1276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 2.7545 | 1.0 | 2229 | 2.4425 | 0.9057 | 18.1276 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nikhil6041/wav2vec2-commonvoice-tamil | nikhil6041 | 2022-03-31T09:24:01Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-31T04:00:23Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-commonvoice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-commonvoice-tamil
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3415
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.384 | 1.69 | 200 | 3.3400 | 1.0 |
| 3.3085 | 3.39 | 400 | 3.3609 | 1.0 |
| 3.3008 | 5.08 | 600 | 3.3331 | 1.0 |
| 3.2852 | 6.78 | 800 | 3.3492 | 1.0 |
| 3.2908 | 8.47 | 1000 | 3.3318 | 1.0 |
| 3.2865 | 10.17 | 1200 | 3.3501 | 1.0 |
| 3.2826 | 11.86 | 1400 | 3.3403 | 1.0 |
| 3.2875 | 13.56 | 1600 | 3.3335 | 1.0 |
| 3.2899 | 15.25 | 1800 | 3.3311 | 1.0 |
| 3.2755 | 16.95 | 2000 | 3.3617 | 1.0 |
| 3.2877 | 18.64 | 2200 | 3.3317 | 1.0 |
| 3.2854 | 20.34 | 2400 | 3.3560 | 1.0 |
| 3.2878 | 22.03 | 2600 | 3.3332 | 1.0 |
| 3.2766 | 23.73 | 2800 | 3.3317 | 1.0 |
| 3.2943 | 25.42 | 3000 | 3.3737 | 1.0 |
| 3.2845 | 27.12 | 3200 | 3.3347 | 1.0 |
| 3.2765 | 28.81 | 3400 | 3.3415 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
emiyasstar/ch-w2v-conformer | emiyasstar | 2022-03-31T08:48:13Z | 0 | 2 | null | [
"region:us"
] | null | 2022-03-29T15:44:56Z | The ch-w2v-conformer model uses following datasets to pretrain:
ISML datasets (6 languages,70k hours): internal dataset contains 40k hours Chinese, Cantonese, Tibetan, Inner Mongolian, Inner Kazakh, Uighur.
Babel datasets (17 languages, 2k hours): Assamese, Bengali, Cantonese, Cebuano, Georgian, Haitian, Kazakh, Kurmanji, Lao, Pashto, Swahili, Tagalog, Tamil, Tok, Turkish, Vietnamese, Zulu
After pretraining, we build an ASR system based on a CTC-Attention structure. In very low-resource tasks, we find that constructing too many randomly initialized network structures on top of the pretrained conformer encoder destroys the transfer performance of the pretrained model, so we only build a single-layer transformer decoder for joint training. A schematic sketch of this objective is shown below.
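A schematic sketch of the joint CTC/attention objective described above; layer sizes, vocabulary size, the interpolation weight, and the tensor shapes are illustrative assumptions, not the project's actual code:
```python
import torch
from torch import nn

class JointCTCAttentionHead(nn.Module):
    """Sketch: CTC head plus a single-layer transformer decoder on top of a pretrained conformer encoder."""

    def __init__(self, d_model=512, vocab_size=5000, ctc_weight=0.5):
        super().__init__()
        self.ctc_proj = nn.Linear(d_model, vocab_size)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)  # single decoder layer, as in the card
        self.out_proj = nn.Linear(d_model, vocab_size)
        self.ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
        self.att_loss = nn.CrossEntropyLoss(ignore_index=-1)
        self.ctc_weight = ctc_weight

    def forward(self, enc_out, enc_lens, tgt_emb, tgt_ids, tgt_lens):
        # enc_out: (B, T, d_model) conformer outputs; tgt_emb: embedded target tokens (B, U, d_model)
        log_probs = self.ctc_proj(enc_out).log_softmax(-1).transpose(0, 1)  # (T, B, V) for CTCLoss
        loss_ctc = self.ctc_loss(log_probs, tgt_ids, enc_lens, tgt_lens)
        dec_out = self.decoder(tgt_emb, enc_out)                            # attention branch
        loss_att = self.att_loss(self.out_proj(dec_out).flatten(0, 1), tgt_ids.flatten())
        return self.ctc_weight * loss_ctc + (1.0 - self.ctc_weight) * loss_att
```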
pretrained model link:
## constrained-plus Task Performance
* Languages: Cantonese,mongolian,kazakh
* config: conf/train_conformer_large_10h.yaml
* Feature info: using mfcc feature, with dither 1.0, without cmvn
* Training info: lr 0.001, batch size 10, 4 gpus on V100, acc_grad 1, 80 epochs
* Decoding info: ctc_weight 0.5, average_num 35
Dev set results when training with only the 10-hour training set:
## w2v-Conformer
| decoding_method | Cantonese(CER) | mongolian(WER) |
|:-------------------:|:----:|:----:|
| ctc_greedy_search | 31.46 | 53.64 |
| ctc_prefix_search | 31.47 | 53.50 |
| attention_rescoring | 31.45 | 52.96 |
## Conformer (trained from scratch)
| decoding_method | Cantonese(CER) | mongolian(WER) |
|:-------------------:|----:|:----:|
| ctc_greedy_search | 61.43 | 89.38 |
| ctc_prefix_search | 61.37 | 89.53|
| attention_rescoring | 60.61 | 89.60| |
DrishtiSharma/poem-gen-spanish-t5-small-test | DrishtiSharma | 2022-03-31T03:53:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-30T19:55:28Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-test
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8391 | 0.73 | 30000 | 2.9486 |
| 2.6782 | 1.46 | 60000 | 2.8990 |
| 2.5323 | 2.19 | 90000 | 2.9193 |
| 2.5191 | 2.93 | 120000 | 2.8982 |
| 2.4007 | 3.66 | 150000 | 2.9241 |
| 2.2909 | 4.39 | 180000 | 2.9418 |
| 2.1741 | 5.12 | 210000 | 2.9783 |
| 2.1973 | 5.85 | 240000 | 2.9671 |
| 2.0969 | 6.58 | 270000 | 3.0179 |
| 1.9818 | 7.31 | 300000 | 3.0582 |
| 1.8639 | 8.05 | 330000 | 3.0918 |
| 1.8824 | 8.78 | 360000 | 3.1095 |
| 1.7929 | 9.51 | 390000 | 3.1502 |
| 1.7247 | 10.24 | 420000 | 3.1855 |
| 1.7039 | 10.97 | 450000 | 3.1953 |
| 1.6475 | 11.7 | 480000 | 3.2180 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lazyturtl/roomclassifier | lazyturtl | 2022-03-31T01:09:57Z | 2,692 | 16 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-31T01:09:48Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomclassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9402984976768494
---
# roomclassifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### Laundry room

#### Livingroom
 |
michiyasunaga/BioLinkBERT-large | michiyasunaga | 2022-03-31T00:54:57Z | 4,470 | 33 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"biolinkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:pubmed",
"arxiv:2203.15827",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-08T06:20:38Z | ---
license: apache-2.0
language: en
datasets:
- pubmed
tags:
- bert
- exbert
- linkbert
- biolinkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
widget:
- text: "Sunitinib is a tyrosine kinase inhibitor"
---
## BioLinkBERT-large
BioLinkBERT-large model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-large')
model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-large')
inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art.
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE |
| ---------------------- | -------- | -------- | ------- | -------- |
| PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 |
| **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** |
| **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** |
| | MMLU-professional medicine |
| ---------------------- | -------- |
| GPT-3 (175 params) | 38.7 |
| UnifiedQA (11B params) | 43.2 |
| **BioLinkBERT-large (340M params)** | **50.7** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
michiyasunaga/LinkBERT-large | michiyasunaga | 2022-03-31T00:27:01Z | 1,297 | 11 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2203.15827",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-08T01:42:14Z | ---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
tags:
- bert
- exbert
- linkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
---
## LinkBERT-large
LinkBERT-large model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-large')
model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-large')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):**
| | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE |
| ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- |
| | F1 | F1 | F1 | F1 | F1 | F1 | Avg score |
| BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 |
| **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** |
| BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 |
| **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
GleamEyeBeast/ascend_with_english | GleamEyeBeast | 2022-03-30T23:35:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-30T22:09:15Z | ---
tags:
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: ascend_with_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend_with_english
This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on the timit_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3049
- Wer: 0.2251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.3524 | 0.3016 |
| 0.4246 | 2.0 | 578 | 0.3132 | 0.2607 |
| 0.4246 | 3.0 | 867 | 0.3044 | 0.2373 |
| 0.2008 | 4.0 | 1156 | 0.3075 | 0.2302 |
| 0.2008 | 5.0 | 1445 | 0.3049 | 0.2251 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mrm8488/legalectra-small-spanish | mrm8488 | 2022-03-30T21:06:31Z | 41 | 3 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"Spanish",
"Electra",
"Legal",
"es",
"dataset:Spanish-legal-corpora",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: es
tags:
- Spanish
- Electra
- Legal
datasets:
- Spanish-legal-corpora
---
## LEGALECTRA ⚖️
**LEGALECTRA** (small) is an ELECTRA-like model (a discriminator, in this case) trained on [A collection of corpora of Spanish legal domain](https://zenodo.org/record/5495529#.YZItp3vMLJw).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
The model was trained using the Electra base code for 3 days on 1 Tesla V100 16GB.
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.955|
|Precision| 0.790|
|AUC | 0.971|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
TBA
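In the meantime, a hedged sketch following the generic ELECTRA discriminator usage in `transformers` (the tokenizer class and the example sentence are assumptions, not an official example for this checkpoint):
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("mrm8488/legalectra-small-spanish")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/legalectra-small-spanish")

# placeholder legal-domain sentence with one token replaced ("abonar" -> "cantar")
fake_sentence = "El arrendatario deberá cantar la renta mensualmente"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# 1.0 marks positions the discriminator flags as replaced (special tokens are stripped)
print(list(zip(fake_tokens, predictions.squeeze().tolist()[1:-1])))
```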
## Acknowledgments
TBA
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022legalectra,
title={Spanish Legal Electra (small)},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/legalectra-small-spanish}},
year={2022}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
vlsb/autotrain-security-texts-classification-roberta-688020754 | vlsb | 2022-03-30T20:55:42Z | 15 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:vlsb/autotrain-data-security-texts-classification-roberta",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-30T20:52:41Z | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-texts-classification-roberta
co2_eq_emissions: 3.1151249696839685
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688020754
- CO2 Emissions (in grams): 3.1151249696839685
## Validation Metrics
- Loss: 0.2810373902320862
- Accuracy: 0.8928571428571429
- Precision: 0.9272727272727272
- Recall: 0.8869565217391304
- AUC: 0.9500805152979066
- F1: 0.9066666666666666
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-texts-classification-roberta-688020754
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-texts-classification-roberta-688020754", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-texts-classification-roberta-688020754", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
mrm8488/electricidad-base-discriminator | mrm8488 | 2022-03-30T20:42:47Z | 74 | 4 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"Spanish",
"Electra",
"es",
"dataset:-large_spanish_corpus",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: es
thumbnail: https://i.imgur.com/uxAvBfh.png
tags:
- Spanish
- Electra
datasets:
- large_spanish_corpus
---
## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh)
**Electricidad-base-discriminator** (uncased) is a ```base``` ELECTRA-like model (a discriminator, in this case) trained on a [Large Spanish Corpus](https://github.com/josecannete/spanish-corpora) (aka BETO's corpus)
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Model details ⚙
|Name| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 768 |
|Params| 110M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.985|
|Precision| 0.726|
|AUC | 0.922|
## Fast example of usage 🚀
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-base-discriminator")
sentence = "El rápido zorro marrón salta sobre el perro perezoso"
fake_sentence = "El rápido zorro marrón amar sobre el perro perezoso"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % prediction, end="") for prediction in predictions.tolist()]
# Output:
'''
el rapido zorro marro ##n amar sobre el perro pere ##zoso 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0[None, None, None, None, None, None, None, None, None, None, None, None, None
'''
```
As you can see there are **1s** in the places where the model detected a fake token. So, it works! 🎉
### Some models fine-tuned on a downstream task 🛠️
[Question Answering](https://huggingface.co/mrm8488/electricidad-base-finetuned-squadv1-es)
[POS](https://huggingface.co/mrm8488/electricidad-base-finetuned-pos)
[NER](https://huggingface.co/mrm8488/electricidad-base-finetuned-ner)
### Spanish LM model comparison 📊
| Dataset | Metric | RoBERTa-b | RoBERTa-l | BETO | mBERT | BERTIN | Electricidad-b |
|-------------|----------|-----------|-----------|--------|--------|--------|---------|
| UD-POS | F1 | 0.9907 | 0.9901 | 0.9900 | 0.9886 | 0.9904 | 0.9818 |
| Conll-NER | F1 | 0.8851 | 0.8772 | 0.8759 | 0.8691 | 0.8627 | 0.7954 |
| Capitel-POS | F1 | 0.9846 | 0.9851 | 0.9836 | 0.9839 | 0.9826 | 0.9816 |
| Capitel-NER | F1 | 0.8959 | 0.8998 | 0.8771 | 0.8810 | 0.8741 | 0.8035 |
| STS | Combined | 0.8423 | 0.8420 | 0.8216 | 0.8249 | 0.7822 | 0.8065 |
| MLDoc | Accuracy | 0.9595 | 0.9600 | 0.9650 | 0.9560 | 0.9673 | 0.9490 |
| PAWS-X | F1 | 0.9035 | 0.9000 | 0.8915 | 0.9020 | 0.8820 | **0.9045** |
| XNLI | Accuracy | 0.8016 | 0.7958 | 0.8130 | 0.7876 | 0.7864 | 0.7878 |
## Acknowledgments
I thank [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (specially to [Julien Chaumond](https://twitter.com/julien_c)).
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2020electricidad-base-discriminator,
title={Spanish Electra by Manuel Romero},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/electricidad-base-discriminator/}},
year={2020}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/longformer-base-4096-spanish | mrm8488 | 2022-03-30T20:36:36Z | 49 | 16 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"Long documents",
"longformer",
"bertin",
"spanish",
"es",
"dataset:spanish_large_corpus",
"arxiv:2004.05150",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- es
license: mit
widget:
- text: "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos."
tags:
- Long documents
- longformer
- bertin
- spanish
datasets:
- spanish_large_corpus
---
# longformer-base-4096-spanish
## [Longformer](https://arxiv.org/abs/2004.05150) is a Transformer model for long documents.
`longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to 4,096!
**Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
This model was made following the research done by [Iz Beltagy and Matthew E. Peters and Arman Cohan](https://arxiv.org/abs/2004.05150).
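A hedged usage sketch via the fill-mask pipeline, reusing the widget text above; it assumes the checkpoint exposes a standard RoBERTa-style masked-language-modelling head:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mrm8488/longformer-base-4096-spanish")
print(fill_mask(
    "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos."
))
```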
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022longformer-base-4096-spanish,
title={Spanish LongFormer by Manuel Romero},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/longformer-base-4096-spanish}},
year={2022}
}
``` |
horsbug98/Part_2_XLM_Model_E1 | horsbug98 | 2022-03-30T18:29:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-16T17:32:47Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_xlm_task2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_xlm_task2_1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
waboucay/camembert-base-finetuned-xnli_fr | waboucay | 2022-03-30T17:47:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-11T08:54:07Z | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 89.2 | 87.6 |
| test | 88.9 | 87.4 |
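A hedged inference sketch for NLI pairs (the example sentences are placeholders, and the order of the entailment/neutral/contradiction labels depends on the model's label mapping):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("waboucay/camembert-base-finetuned-xnli_fr")
model = AutoModelForSequenceClassification.from_pretrained("waboucay/camembert-base-finetuned-xnli_fr")

premise = "Le chat dort sur le canapé."
hypothesis = "Un animal se repose."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```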
|
abdusah/aradia-ctc-v1 | abdusah | 2022-03-30T13:48:41Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-23T10:58:05Z | ---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-v1
This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7171
- Wer: 0.3336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.22 | 100 | 5.1889 | 1.0 |
| No log | 0.43 | 200 | 3.1129 | 1.0 |
| No log | 0.65 | 300 | 3.0503 | 1.0 |
| No log | 0.87 | 400 | 3.0279 | 1.0 |
| 6.2756 | 1.09 | 500 | 2.9965 | 1.0 |
| 6.2756 | 1.3 | 600 | 2.3618 | 0.9993 |
| 6.2756 | 1.52 | 700 | 1.2715 | 0.8758 |
| 6.2756 | 1.74 | 800 | 0.9971 | 0.7156 |
| 6.2756 | 1.96 | 900 | 0.8927 | 0.6382 |
| 1.712 | 2.17 | 1000 | 0.8252 | 0.5926 |
| 1.712 | 2.39 | 1100 | 0.7794 | 0.5434 |
| 1.712 | 2.61 | 1200 | 0.7557 | 0.5092 |
| 1.712 | 2.83 | 1300 | 0.7347 | 0.5203 |
| 1.712 | 3.04 | 1400 | 0.7189 | 0.4929 |
| 0.9305 | 3.26 | 1500 | 0.6820 | 0.4595 |
| 0.9305 | 3.48 | 1600 | 0.6792 | 0.4504 |
| 0.9305 | 3.69 | 1700 | 0.6596 | 0.4442 |
| 0.9305 | 3.91 | 1800 | 0.6756 | 0.4432 |
| 0.9305 | 4.13 | 1900 | 0.6663 | 0.4392 |
| 0.737 | 4.35 | 2000 | 0.6479 | 0.4372 |
| 0.737 | 4.56 | 2100 | 0.6353 | 0.4203 |
| 0.737 | 4.78 | 2200 | 0.6251 | 0.4088 |
| 0.737 | 5.0 | 2300 | 0.6209 | 0.4177 |
| 0.737 | 5.22 | 2400 | 0.6639 | 0.4094 |
| 0.6247 | 5.43 | 2500 | 0.6408 | 0.3970 |
| 0.6247 | 5.65 | 2600 | 0.6373 | 0.3932 |
| 0.6247 | 5.87 | 2700 | 0.6411 | 0.3928 |
| 0.6247 | 6.09 | 2800 | 0.6378 | 0.3897 |
| 0.6247 | 6.3 | 2900 | 0.6396 | 0.3929 |
| 0.5443 | 6.52 | 3000 | 0.6544 | 0.3864 |
| 0.5443 | 6.74 | 3100 | 0.6218 | 0.3786 |
| 0.5443 | 6.96 | 3200 | 0.6200 | 0.3784 |
| 0.5443 | 7.17 | 3300 | 0.6157 | 0.3791 |
| 0.5443 | 7.39 | 3400 | 0.6317 | 0.3798 |
| 0.4845 | 7.61 | 3500 | 0.6540 | 0.3771 |
| 0.4845 | 7.83 | 3600 | 0.6436 | 0.3670 |
| 0.4845 | 8.04 | 3700 | 0.6335 | 0.3695 |
| 0.4845 | 8.26 | 3800 | 0.6579 | 0.3610 |
| 0.4845 | 8.48 | 3900 | 0.6170 | 0.3613 |
| 0.4279 | 8.69 | 4000 | 0.6523 | 0.3617 |
| 0.4279 | 8.91 | 4100 | 0.6349 | 0.3577 |
| 0.4279 | 9.13 | 4200 | 0.6344 | 0.3673 |
| 0.4279 | 9.35 | 4300 | 0.6215 | 0.3641 |
| 0.4279 | 9.56 | 4400 | 0.6513 | 0.3608 |
| 0.3825 | 9.78 | 4500 | 0.6386 | 0.3605 |
| 0.3825 | 10.0 | 4600 | 0.6724 | 0.3549 |
| 0.3825 | 10.22 | 4700 | 0.6776 | 0.3602 |
| 0.3825 | 10.43 | 4800 | 0.6739 | 0.3544 |
| 0.3825 | 10.65 | 4900 | 0.6688 | 0.3557 |
| 0.3477 | 10.87 | 5000 | 0.6674 | 0.3564 |
| 0.3477 | 11.09 | 5100 | 0.6786 | 0.3476 |
| 0.3477 | 11.3 | 5200 | 0.6818 | 0.3478 |
| 0.3477 | 11.52 | 5300 | 0.6874 | 0.3470 |
| 0.3477 | 11.74 | 5400 | 0.6993 | 0.3424 |
| 0.3101 | 11.96 | 5500 | 0.6950 | 0.3404 |
| 0.3101 | 12.17 | 5600 | 0.6872 | 0.3406 |
| 0.3101 | 12.39 | 5700 | 0.6846 | 0.3424 |
| 0.3101 | 12.61 | 5800 | 0.7051 | 0.3405 |
| 0.3101 | 12.83 | 5900 | 0.7051 | 0.3378 |
| 0.2859 | 13.04 | 6000 | 0.6955 | 0.3403 |
| 0.2859 | 13.26 | 6100 | 0.7115 | 0.3390 |
| 0.2859 | 13.48 | 6200 | 0.7074 | 0.3384 |
| 0.2859 | 13.69 | 6300 | 0.7002 | 0.3376 |
| 0.2859 | 13.91 | 6400 | 0.7171 | 0.3360 |
| 0.2714 | 14.13 | 6500 | 0.7193 | 0.3341 |
| 0.2714 | 14.35 | 6600 | 0.7132 | 0.3347 |
| 0.2714 | 14.56 | 6700 | 0.7184 | 0.3353 |
| 0.2714 | 14.78 | 6800 | 0.7171 | 0.3331 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/cnn | huggingtweets | 2022-03-30T13:44:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cnn/1648647871411/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CNN</div>
<div style="text-align: center; font-size: 14px;">@cnn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CNN.
| Data | CNN |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 16 |
| Short tweets | 5 |
| Tweets kept | 3229 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/q0qwmbzx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cnn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ozw5h8lm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ozw5h8lm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cnn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
javilonso/classificationEsp1_Attraction | javilonso | 2022-03-30T13:25:38Z | 6 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-23T15:27:21Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: classificationEsp1_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# classificationEsp1_Attraction
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
shalpin87/dialoGPT-homer-simpson | shalpin87 | 2022-03-30T13:06:45Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-29T20:28:40Z | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## dialogGPT-homer-simpson
This model has been fine-tuned on the complete scripts of Homer Simpson from the T.V. show The Simpsons.
In single-turn conversation it gives answers that seem to come straight from Homer's brain in the Simpsons universe, letting you chat with Homer Simpson.
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Who are you? |
| HomerBot | Homer Simpson .|
|User | What is your favorite Restaurant ? |
| HomerBot | Moes Tavern. |
|User | Have you ever been in a band?! |
| HomerBot | no. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use Multi-Turn
#### NOTE: Multi-Turn chat seems to be broken; after a few exchanges the output is mostly exclamation marks.
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson")
model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
print("DialoG-PT-HomerBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
### How to use Single Turn
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson")
model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson")
questions = [
"What is your name?",
"Who are you?",
"Where do you work?",
"Who really killed Mr Burns?",
"Have you ever stolen from the Kwik-E-Mart?",
"Did you kill Frank Grimes?",
"Who was the worst member of the Be Sharps?",
"Hey where did Barney go?",
"What is your favorite bar to have a beer?",
"What is the best beer in Springfield?",
"Is Bart working for the Mob?",
"I think there was an incident in sector 7 G",
"Is Ned Flanders house okay?",
"Oh my god it's Sideshow Bob",
"What is a Flaming Moe?",
"What is happening to Apu?",
"Who quit the band?"
]
# Let's chat
for step in range(len(questions)):
print("***************************************")
print("Q. {}".format(questions[step]))
new_user_input_ids = tokenizer.encode(questions[step] + tokenizer.eos_token, return_tensors='pt')
# new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
bot_input_ids = new_user_input_ids
num_return_seqs=1
chat_history_ids = model.generate(
bot_input_ids,
max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=200,
top_p=0.55,
temperature = 0.85,
num_return_sequences=num_return_seqs
)
botname = "HomerBot"
for i in range(0,num_return_seqs):
print("{}:{}: {}".format(i, botname, tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][i], skip_special_tokens=True)))
chat_history_ids = []
```
Sample Output
```
***************************************
Q. What is your name?
0:HomerBot: Homer Simpson.
***************************************
Q. Who are you?
0:HomerBot: Homer Simpson.
***************************************
Q. Where do you work?
0:HomerBot: I work at a nuclear power plant.
***************************************
Q. Who really killed Mr Burns?
0:HomerBot: I dunno. Some guy named Ot-to.
***************************************
Q. Have you ever stolen from the Kwik-E-Mart?
0:HomerBot: No.
***************************************
Q. Did you kill Frank Grimes?
0:HomerBot: No.
***************************************
Q. Who was the worst member of the Be Sharps?
0:HomerBot: Homer Simpson, sir.
***************************************
Q. Hey where did Barney go?
0:HomerBot: He's been eating that thing for a week.
***************************************
Q. What is your favorite bar to have a beer?
0:HomerBot: The Flanders' Club.
***************************************
Q. What is the best beer in Springfield?
0:HomerBot: Pepi!
***************************************
Q. Is Bart working for the Mob?
0:HomerBot: I think he's trying to tell us something.
***************************************
Q. I think there was an incident in sector 7 G
0:HomerBot: Uh, let's not jump to any conclusions. The dog didn't run away.
``` |
yinde/dummy-model | yinde | 2022-03-30T11:59:15Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-30T11:37:44Z | Fake news classifier
This is a text-classification model for detecting fake news articles.
It fine-tunes the distilbert-base-uncased-finetuned-sst-2-english pretrained model on the
fake and real news dataset from Kaggle (https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset).
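A minimal inference sketch, assuming the checkpoint keeps a standard two-label sequence-classification head; the example text and the printed label names are placeholders:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="yinde/dummy-model")

# Score an article snippet; the predicted label indicates fake vs. real news
result = classifier("Scientists confirm the moon is made entirely of cheese, officials say.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```
|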
Peltarion/xlm-roberta-longformer-base-4096 | Peltarion | 2022-03-30T09:23:58Z | 75 | 8 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"longformer",
"multilingual",
"dataset:wikitext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
tags:
- longformer
language: multilingual
license: apache-2.0
datasets:
- wikitext
---
## XLM-R Longformer Model
XLM-R Longformer is a XLM-R model, that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.
The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respecitve language. The trained model came as a result of a master thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual quesion-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r).
Since both XLM-R model and Longformer models are large models, it it recommended to run the models with NVIDIA Apex (16bit precision), large GPU and several gradient accumulation steps.
## How to Use
The model can be used as expected to fine-tune on a downstream task.
For instance for QA.
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
MAX_SEQUENCE_LENGTH = 4096
MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
padding="max_length",
truncation=True,
)
model = AutoModelForQuestionAnswering.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
)
```
## Training Procedure
The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations, which took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information.
```sh
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
unzip wikitext-103-raw-v1.zip
export DATA_DIR=./wikitext-103-raw
scripts/run_long_lm.py \
--model_name_or_path xlm-roberta-base \
--model_name xlm-roberta-to-longformer \
--output_dir ./output \
--logging_dir ./logs \
--val_file_path $DATA_DIR/wiki.valid.raw \
--train_file_path $DATA_DIR/wiki.train.raw \
--seed 42 \
--max_pos 4096 \
--adam_epsilon 1e-8 \
--warmup_steps 500 \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--max_steps 6000 \
--evaluate_during_training \
--logging_steps 50 \
--eval_steps 50 \
--save_steps 6000 \
--max_grad_norm 1.0 \
--per_device_eval_batch_size 2 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 64 \
--overwrite_output_dir \
--fp16 \
--do_train \
--do_eval
```
|
Aureliano/electra-if | Aureliano | 2022-03-30T09:07:27Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"feature-extraction",
"en",
"arxiv:1406.2661",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-11T15:40:21Z | ---
language: en
license: apache-2.0
---
## ELECTRA for IF
**ELECTRA** is a method for self-supervised language representation learning. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf).
For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
This repository contains a small ELECTRA discriminator fine-tuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in each sentence. The original dataset was collected from the lists of actions in the walkthroughs of the games included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora.
## How to use the discriminator in `transformers`
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used:
# you should fine-tune the model on your own corpus of commands, bigger than this one
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/electra-if",
label2id=label2id,
id2label=id2label)
tokenizer = AutoTokenizer.from_pretrained("Aureliano/electra-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 25
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=5e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
|
javilonso/classificationPolEsp1 | javilonso | 2022-03-30T09:02:50Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-30T07:49:20Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationPolEsp1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationPolEsp1
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3728
- Validation Loss: 0.6217
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6282 | 0.6017 | 0 |
| 0.5129 | 0.6177 | 1 |
| 0.3728 | 0.6217 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
neibla/distilbert-base-uncased-finetuned-emotion | neibla | 2022-03-30T08:56:26Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-30T08:22:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254917237562972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.855 | 1.0 | 250 | 0.3211 | 0.905 | 0.9017 |
| 0.2561 | 2.0 | 500 | 0.2187 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
loulou/distilbert-base-uncased-finetuned-emotion | loulou | 2022-03-30T04:57:58Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-22T04:55:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9221931901873676
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8366 | 1.0 | 250 | 0.3212 | 0.9025 | 0.8990 |
| 0.2588 | 2.0 | 500 | 0.2285 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lazyturtl/roomidentifier | lazyturtl | 2022-03-30T04:10:41Z | 89 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-30T04:10:32Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomidentifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# roomidentifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
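A rough usage sketch, assuming the checkpoint is loaded straight from the Hub; the image path is a placeholder and the class names follow the example images below:
```python
from transformers import pipeline

# ViT image classifier fine-tuned on room categories
room_classifier = pipeline("image-classification", model="lazyturtl/roomidentifier")

# Classify a local image (or pass a URL); returns the top room categories with scores
predictions = room_classifier("my_room_photo.jpg")
print(predictions)
```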
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### LivingRoom
 |
samayash/finetuning-financial-news-sentiment | samayash | 2022-03-30T03:36:40Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-30T03:27:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-financial-news-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-financial-news-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8751
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ntt123/hifigan_ljs_22k | ntt123 | 2022-03-30T01:47:26Z | 0 | 0 | null | [
"tensorboard",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-03-29T02:20:52Z | ---
license: cc-by-nc-sa-4.0
---
|
aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 | aaraki | 2022-03-30T01:41:47Z | 8,239 | 10 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-30T00:18:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- Accuracy: 0.9788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cammiemw/bert-marco-hdct | cammiemw | 2022-03-30T01:21:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-30T01:09:55Z | ---
license: cc-by-nc-4.0
---
|
DrishtiSharma/poem-gen-spanish-t5-small-v6 | DrishtiSharma | 2022-03-29T23:45:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-29T18:58:46Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v6
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8831
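The card does not include a usage snippet, so a minimal generation sketch follows. The prompt format is an assumption, since the exact conditioning used by the base hackathon-pln-es poem model is not documented here:
```python
from transformers import pipeline

# Spanish poem generation with the fine-tuned T5 model
generator = pipeline("text2text-generation", model="DrishtiSharma/poem-gen-spanish-t5-small-v6")

# Hypothetical prompt; adjust to match the conditioning format of the base model
prompt = "poema: amor y mar"
print(generator(prompt, max_length=64, do_sample=True, top_p=0.9))
```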
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8551 | 0.73 | 30000 | 2.9296 |
| 2.6961 | 1.46 | 60000 | 2.9005 |
| 2.5756 | 2.19 | 90000 | 2.8786 |
| 2.5095 | 2.93 | 120000 | 2.8621 |
| 2.4061 | 3.66 | 150000 | 2.8830 |
| 2.3161 | 4.39 | 180000 | 2.8865 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/poem-gen-spanish-t5-small-v5 | DrishtiSharma | 2022-03-29T23:25:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-29T18:54:38Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v5
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000125
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.9366 | 0.73 | 30000 | 2.9656 |
| 2.7518 | 1.46 | 60000 | 2.9120 |
| 2.6018 | 2.19 | 90000 | 2.8870 |
| 2.5262 | 2.93 | 120000 | 2.8646 |
| 2.3886 | 3.66 | 150000 | 2.8816 |
| 2.2758 | 4.39 | 180000 | 2.8900 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
efederici/sentence-it5-base | efederici | 2022-03-29T23:09:01Z | 35 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-29T19:57:59Z | ---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base')
model = AutoModel.from_pretrained('efederici/sentence-IT5-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
espnet/bur_openslr80_hubert | espnet | 2022-03-29T22:19:50Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-28T22:04:54Z | <!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 21 22:59:35 UTC 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `7ae4efd81778436a98b822483e8123adba6aa430`
- Commit date: `Tue Mar 15 20:11:18 2022 -0400`
## asr_train_asr_hubert_transformer_adam_specaug_raw_bpe150
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|4227|39.1|50.4|10.5|6.1|67.0|99.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|33345|82.2|7.6|10.1|3.6|21.4|99.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|18237|70.7|17.7|11.6|2.5|31.8|99.8|
|
BigSalmon/PointsOneSent | BigSalmon | 2022-03-29T21:26:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-29T21:19:54Z | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
-
```
It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27 |
Chikashi/t5-small-finetuned-cnndm_3epoch | Chikashi | 2022-03-29T19:28:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-29T00:14:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm_3epoch
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_3epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6622
- Rouge1: 24.5435
- Rouge2: 11.7919
- Rougel: 20.2929
- Rougelsum: 23.1661
- Gen Len: 18.9996
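As a brief usage sketch (the article string is a placeholder; the summarization pipeline applies T5's task prefix from the model config, assuming it was preserved during fine-tuning):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm_3epoch")

article = (
    "The city council approved a new transit plan on Tuesday, promising more "
    "frequent bus service and a dedicated bike lane network by next year."
)  # placeholder news text
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```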
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9113 | 0.14 | 5000 | 1.7162 | 24.4374 | 11.6932 | 20.1741 | 23.0427 | 18.9997 |
| 1.8772 | 0.28 | 10000 | 1.7008 | 24.3715 | 11.6699 | 20.1387 | 22.9772 | 18.9997 |
| 1.8609 | 0.42 | 15000 | 1.6911 | 24.4174 | 11.6986 | 20.1756 | 23.0205 | 18.9997 |
| 1.8564 | 0.56 | 20000 | 1.6871 | 24.4374 | 11.6801 | 20.1663 | 23.0366 | 18.9995 |
| 1.8495 | 0.7 | 25000 | 1.6796 | 24.4019 | 11.6901 | 20.177 | 23.034 | 18.999 |
| 1.8448 | 0.84 | 30000 | 1.6787 | 24.4813 | 11.7227 | 20.1985 | 23.0847 | 18.999 |
| 1.8427 | 0.98 | 35000 | 1.6762 | 24.4905 | 11.7591 | 20.2548 | 23.1006 | 18.9993 |
| 1.8341 | 1.11 | 40000 | 1.6747 | 24.4743 | 11.7124 | 20.1782 | 23.0726 | 18.9996 |
| 1.822 | 1.25 | 45000 | 1.6753 | 24.4797 | 11.7292 | 20.2319 | 23.0816 | 18.9993 |
| 1.8262 | 1.39 | 50000 | 1.6713 | 24.4865 | 11.7079 | 20.2214 | 23.0919 | 18.9986 |
| 1.8281 | 1.53 | 55000 | 1.6702 | 24.5095 | 11.7364 | 20.2534 | 23.1264 | 18.9991 |
| 1.8228 | 1.67 | 60000 | 1.6678 | 24.5153 | 11.7595 | 20.2544 | 23.1138 | 18.9993 |
| 1.824 | 1.81 | 65000 | 1.6662 | 24.5324 | 11.7804 | 20.2671 | 23.1498 | 18.9997 |
| 1.8265 | 1.95 | 70000 | 1.6648 | 24.5795 | 11.7917 | 20.2935 | 23.1855 | 18.9992 |
| 1.8179 | 2.09 | 75000 | 1.6658 | 24.5426 | 11.804 | 20.2861 | 23.1586 | 18.9996 |
| 1.8147 | 2.23 | 80000 | 1.6646 | 24.5429 | 11.7914 | 20.2889 | 23.1542 | 18.9993 |
| 1.8026 | 2.37 | 85000 | 1.6632 | 24.5451 | 11.8045 | 20.2781 | 23.1555 | 18.9996 |
| 1.8141 | 2.51 | 90000 | 1.6643 | 24.5078 | 11.7781 | 20.2631 | 23.121 | 18.9996 |
| 1.8124 | 2.65 | 95000 | 1.6628 | 24.5728 | 11.7958 | 20.2875 | 23.178 | 18.9996 |
| 1.8098 | 2.79 | 100000 | 1.6635 | 24.5534 | 11.7998 | 20.2979 | 23.169 | 18.9996 |
| 1.8153 | 2.93 | 105000 | 1.6622 | 24.5435 | 11.7919 | 20.2929 | 23.1661 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
efederici/sentence-it5-small | efederici | 2022-03-29T17:29:14Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-27T15:19:10Z | ---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-small)) small model trained for asymmetric semantic search, where the query is a keyword and the paragraph is a short news article.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-small')
model = AutoModel.from_pretrained('efederici/sentence-IT5-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
GleamEyeBeast/ascend | GleamEyeBeast | 2022-03-29T16:49:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-29T01:37:59Z | ---
tags:
- generated_from_trainer
model-index:
- name: ascend
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend
This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3718
- Wer: 0.6412
- Cer: 0.2428
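For reference, a minimal transcription sketch (the audio path is a placeholder; the checkpoint is a wav2vec2 CTC model, so the standard ASR pipeline is assumed to apply):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="GleamEyeBeast/ascend")

# Transcribe a 16 kHz mono audio file (path is a placeholder)
print(asr("sample.wav"))
```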
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.5769 | 1.0 | 688 | 1.1864 | 0.7716 | 0.3159 |
| 0.5215 | 2.0 | 1376 | 1.1613 | 0.7504 | 0.2965 |
| 0.4188 | 3.0 | 2064 | 1.1644 | 0.7389 | 0.2950 |
| 0.3695 | 4.0 | 2752 | 1.1937 | 0.7184 | 0.2815 |
| 0.3404 | 5.0 | 3440 | 1.1947 | 0.7083 | 0.2719 |
| 0.2885 | 6.0 | 4128 | 1.2314 | 0.7108 | 0.2685 |
| 0.2727 | 7.0 | 4816 | 1.2243 | 0.6850 | 0.2616 |
| 0.2417 | 8.0 | 5504 | 1.2506 | 0.6767 | 0.2608 |
| 0.2207 | 9.0 | 6192 | 1.2804 | 0.6922 | 0.2595 |
| 0.2195 | 10.0 | 6880 | 1.2582 | 0.6818 | 0.2575 |
| 0.1896 | 11.0 | 7568 | 1.3101 | 0.6814 | 0.2545 |
| 0.1961 | 12.0 | 8256 | 1.2793 | 0.6706 | 0.2526 |
| 0.1752 | 13.0 | 8944 | 1.2643 | 0.6584 | 0.2509 |
| 0.1638 | 14.0 | 9632 | 1.3152 | 0.6588 | 0.2482 |
| 0.1522 | 15.0 | 10320 | 1.3098 | 0.6433 | 0.2439 |
| 0.1351 | 16.0 | 11008 | 1.3253 | 0.6537 | 0.2447 |
| 0.1266 | 17.0 | 11696 | 1.3394 | 0.6365 | 0.2418 |
| 0.1289 | 18.0 | 12384 | 1.3718 | 0.6412 | 0.2443 |
| 0.1204 | 19.0 | 13072 | 1.3708 | 0.6433 | 0.2433 |
| 0.1189 | 20.0 | 13760 | 1.3718 | 0.6412 | 0.2428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tbosse/bert-base-german-cased-finetuned-subj_v1 | tbosse | 2022-03-29T15:59:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-29T14:22:30Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v1
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Precision: 0.1875
- Recall: 0.0077
- F1: 0.0147
- Accuracy: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 |
| No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 |
| No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sayef/fsner-bert-base-uncased | sayef | 2022-03-29T14:20:35Z | 9 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2008.10570",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | # FSNER
Implemented by [sayef](https://huggingface.co/sayef).
# Overview
The FSNER model was proposed in [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) by Morteza
Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen. To identify entity spans in a new domain, it uses a
train-free few-shot learning approach inspired by question-answering.
## Abstract
> We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples.
## Model Training Details
| identifier | epochs | datasets |
| ---------- |:------:|:-----------------------------------------------------------------------------------------------:|
| [sayef/fsner-bert-base-uncased](https://huggingface.co/sayef/fsner-bert-base-uncased) | 25 | ontonotes5, conll2003, wnut2017, mit_movie_trivia, mit_restaurant and fin (Alvarado et al.). |
## Installation and Example Usage
You can use the FSNER model in 3 ways:
1. Install directly from PyPI: `pip install fsner` and import the model as shown in the code example below
or
2. Install from source: `pip install .` and import the model as shown in the code example below
or
3. Clone [repo](https://github.com/sayef/fsner) and add absolute path of `fsner/src` directory to your PYTHONPATH and
import the model as shown in the code example below
```python
import json
from fsner import FSNERModel, FSNERTokenizerUtils, pretty_embed
query_texts = [
"Does Luke's serve lunch?",
"Chang does not speak Taiwanese very well.",
"I like Berlin."
]
# Each list in supports contains the examples of one entity type
# Wrap entities around with [E] and [/E] in the examples.
# Each sentence should have only one pair of [E] ... [/E]
support_texts = {
"Restaurant": [
"What time does [E] Subway [/E] open for breakfast?",
"Is there a [E] China Garden [/E] restaurant in newark?",
"Does [E] Le Cirque [/E] have valet parking?",
"Is there a [E] McDonalds [/E] on main street?",
"Does [E] Mike's Diner [/E] offer huge portions and outdoor dining?"
],
"Language": [
"Although I understood no [E] French [/E] in those days , I was prepared to spend the whole day with Chien - chien .",
"like what the hell 's that called in [E] English [/E] ? I have to register to be here like since I 'm a foreigner .",
"So , I 'm also working on an [E] English [/E] degree because that 's my real interest .",
"Al - Jazeera TV station , established in November 1996 in Qatar , is an [E] Arabic - language [/E] news TV station broadcasting global news and reports nonstop around the clock .",
"They think it 's far better for their children to be here improving their [E] English [/E] than sitting at home in front of a TV . \"",
"The only solution seemed to be to have her learn [E] French [/E] .",
"I have to read sixty pages of [E] Russian [/E] today ."
]
}
device = 'cpu'
tokenizer = FSNERTokenizerUtils("sayef/fsner-bert-base-uncased")
queries = tokenizer.tokenize(query_texts).to(device)
supports = tokenizer.tokenize(list(support_texts.values())).to(device)
model = FSNERModel("sayef/fsner-bert-base-uncased")
model.to(device)
p_starts, p_ends = model.predict(queries, supports)
# One can prepare supports once and reuse multiple times with different queries
# ------------------------------------------------------------------------------
# start_token_embeddings, end_token_embeddings = model.prepare_supports(supports)
# p_starts, p_ends = model.predict(queries, start_token_embeddings=start_token_embeddings,
# end_token_embeddings=end_token_embeddings)
output = tokenizer.extract_entity_from_scores(query_texts, queries, p_starts, p_ends,
entity_keys=list(support_texts.keys()), thresh=0.50)
print(json.dumps(output, indent=2))
# install displacy for pretty embed
pretty_embed(query_texts, output, list(support_texts.keys()))
```
<!DOCTYPE html>
<html lang="en">
<head>
<title>displaCy</title>
</head>
<body style="font-size: 16px; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; padding: 4rem 2rem; direction: ltr">
<figure style="margin-bottom: 6rem">
<div class="entities" style="line-height: 2.5; direction: ltr">
<div class="entities" style="line-height: 2.5; direction: ltr">Does
<mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Luke's
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Restaurant</span>
</mark>
serve lunch?</div>
<div class="entities" style="line-height: 2.5; direction: ltr">Chang does not speak
<mark class="entity" style="background: #bfeeb7; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Taiwanese
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">Language</span>
</mark>
very well.</div>
<div class="entities" style="line-height: 2.5; direction: ltr">I like Berlin.</div>
</div>
</figure>
</body>
</html>
## Datasets preparation
1. Convert your dataset into the following format. Suppose we have a dataset file `train.json` like the one below (a sketch of producing it programmatically follows the example).
   - Each list contains the examples of one entity type.
   - Wrap each entity with [E] and [/E] in the examples.
   - Each example should have only one pair of [E] ... [/E].
```json
{
"CARDINAL_NUMBER": [
"Washington , cloudy , [E] 2 [/E] to 6 degrees .",
"New Dehli , sunny , [E] 6 [/E] to 19 degrees .",
"Well this is number [E] two [/E] .",
"....."
],
"LANGUAGE": [
"They do n't have the Quicken [E] Dutch [/E] version ?",
"they learned a lot of [E] German [/E] .",
"and then [E] Dutch [/E] it 's Mifrau",
"...."
],
"MONEY": [
"Per capita personal income ranged from $ [E] 11,116 [/E] in Mississippi to $ 23,059 in Connecticut ... .",
"The trade surplus was [E] 582 million US dollars [/E] .",
"It settled with a loss of 4.95 cents at $ [E] 1.3210 [/E] a pound .",
"...."
]
}
```
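If your annotations are in token/IOB form, producing this format can be scripted. The helper below is a hypothetical sketch (it is not part of the `fsner` package) and assumes each sentence contains exactly one annotated span:
```python
import json
from collections import defaultdict

def iob_to_support(tokens, tags):
    """Return (entity_type, sentence) with the single annotated span wrapped in [E] ... [/E]."""
    words, ent_type, inside = [], None, False
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            ent_type, inside = tag[2:], True
            words.append("[E]")
        elif inside and not tag.startswith("I-"):
            words.append("[/E]")
            inside = False
        words.append(tok)
    if inside:
        words.append("[/E]")
    return ent_type, " ".join(words)

# Illustrative annotated sentence; replace with your own data loading
annotated = [
    (["Washington", ",", "cloudy", ",", "2", "to", "6", "degrees", "."],
     ["O", "O", "O", "O", "B-CARDINAL_NUMBER", "O", "O", "O", "O"]),
]

supports = defaultdict(list)
for tokens, tags in annotated:
    ent_type, sentence = iob_to_support(tokens, tags)
    if ent_type:
        supports[ent_type].append(sentence)

with open("train.json", "w") as f:
    json.dump(supports, f, ensure_ascii=False, indent=2)
```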
2. The converted ontonotes5 dataset can be found here:
1. [train](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.train.json)
2. [dev](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.dev.json)
3. Then the trainer script can be used to train/evaluate your FSNER model.
```bash
fsner trainer --pretrained-model bert-base-uncased --mode train --train-data train.json --val-data val.json \
--train-batch-size 6 --val-batch-size 6 --n-examples-per-entity 10 --neg-example-batch-ratio 1/3 --max-epochs 25 --device gpu \
--gpus -1 --strategy ddp
``` |
Daryaflp/roberta-retrained_ru_covid_papers | Daryaflp | 2022-03-29T13:30:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-29T07:12:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained_ru_covid_papers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained_ru_covid_papers
This model is a fine-tuned version of [Daryaflp/roberta-retrained_ru_covid](https://huggingface.co/Daryaflp/roberta-retrained_ru_covid) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ArtemChistyakov-2/f | ArtemChistyakov-2 | 2022-03-29T12:21:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-03-29T12:21:18Z | ---
license: apache-2.0
---
|
gayanin/bart-med-term-conditional-masking-0 | gayanin | 2022-03-29T12:03:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-28T22:12:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5041
- Rouge2 Precision: 0.7497
- Rouge2 Recall: 0.5246
- Rouge2 Fmeasure: 0.5986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6381 | 1.0 | 13915 | 0.5595 | 0.734 | 0.5152 | 0.5873 |
| 0.5429 | 2.0 | 27830 | 0.5243 | 0.7441 | 0.5225 | 0.5956 |
| 0.5002 | 3.0 | 41745 | 0.5078 | 0.7482 | 0.5238 | 0.5976 |
| 0.4607 | 4.0 | 55660 | 0.5041 | 0.7497 | 0.5246 | 0.5986 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms | scasutt | 2022-03-29T11:29:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-28T18:54:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5945
- Wer: 0.4929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4049 | 1.05 | 250 | 3.3497 | 1.0 |
| 3.0851 | 2.1 | 500 | 3.4440 | 1.0 |
| 2.3512 | 3.15 | 750 | 1.5938 | 0.9317 |
| 1.1762 | 4.2 | 1000 | 0.8481 | 0.7333 |
| 0.903 | 5.25 | 1250 | 0.7180 | 0.6484 |
| 0.6754 | 6.3 | 1500 | 0.6603 | 0.6044 |
| 0.5961 | 7.35 | 1750 | 0.6410 | 0.5778 |
| 0.5325 | 8.4 | 2000 | 0.6245 | 0.5545 |
| 0.4685 | 9.45 | 2250 | 0.5925 | 0.5359 |
| 0.4526 | 10.5 | 2500 | 0.5991 | 0.5345 |
| 0.3975 | 11.55 | 2750 | 0.5916 | 0.5228 |
| 0.3672 | 12.6 | 3000 | 0.5882 | 0.5037 |
| 0.3774 | 13.65 | 3250 | 0.5693 | 0.5028 |
| 0.3489 | 14.7 | 3500 | 0.5645 | 0.5018 |
| 0.3593 | 15.75 | 3750 | 0.5977 | 0.5043 |
| 0.3167 | 16.81 | 4000 | 0.6049 | 0.5018 |
| 0.3225 | 17.86 | 4250 | 0.6172 | 0.4921 |
| 0.2807 | 18.91 | 4500 | 0.5937 | 0.4923 |
| 0.2889 | 19.96 | 4750 | 0.5945 | 0.4929 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KeithHorgan/TweetClimateAnalysis | KeithHorgan | 2022-03-29T10:01:24Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:KeithHorgan98/autotrain-data-TweetClimateAnalysis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-29T10:16:42Z | ---
tags: autotrain
language: unk
widget:
- text: "Climate Change is a hoax"
- text: "It is freezing, where is global warming"
datasets:
- KeithHorgan98/autotrain-data-TweetClimateAnalysis
co2_eq_emissions: 133.19491276284793
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 678720226
- CO2 Emissions (in grams): 133.19491276284793
## Validation Metrics
- Loss: 0.4864234924316406
- Accuracy: 0.865424430641822
- Macro F1: 0.7665472174344069
- Micro F1: 0.8654244306418221
- Weighted F1: 0.8586375445115083
- Macro Precision: 0.8281449061702826
- Micro Precision: 0.865424430641822
- Weighted Precision: 0.8619727477790186
- Macro Recall: 0.736576343905098
- Micro Recall: 0.865424430641822
- Weighted Recall: 0.865424430641822
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KeithHorgan98/autotrain-TweetClimateAnalysis-678720226
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
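# --- Sketch: turn the logits into a label; assumes the usual AutoTrain id2label mapping in the config ---
import torch
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[0, pred_id]))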
``` |
ai4bharat/MultiIndicWikiBioUnified | ai4bharat | 2022-03-29T09:25:58Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"wikibio",
"multilingual",
"nlp",
"indicnlp",
"as",
"bn",
"hi",
"kn",
"ml",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicWikiBio",
"arxiv:2203.05437",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-16T11:35:33Z | ---
tags:
- wikibio
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicWikiBio
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
licenses:
- cc-by-nc-4.0
widget:
- <TAG> name </TAG> नवतेज भारती <TAG> image </TAG> NavtejBharati . jpg <TAG> birth name </TAG> नवतेज <TAG> birth date </TAG> 1938 <TAG> birth place </TAG> रोडे , भारतीय पंजाब , भारत । पंजाब <TAG> occupation </TAG> लेखक , कवि <TAG> nationality </TAG> कैनेडा । कैनेडियन <TAG> ethnicity </TAG> पंजाबी लोक । पंजाबी </s> <2hi>
---
# MultiIndicWikiBioUnified
MultiIndicWikiBioUnified is a multilingual, sequence-to-sequence pre-trained model, a [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint fine-tuned on the 9 languages of [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details,
see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicWikiBio to build biography generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of the MultiIndicWikiBio are:
<ul>
<li >Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for fine-tuning and decoding. </li>
<li> Fine-tuned on Indic language corpora (34,653 examples). </li>
<li> All languages have been represented in Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
You can read more about MultiIndicWikiBioUnified in this <a href="https://arxiv.org/abs/2203.05437">paper</a>.
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे।
```
# Disclaimer
Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the [Indic NLP Library](https://github.com/AI4Bharat/indic-bart/blob/main/indic_scriptmap.py).
# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
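A minimal sketch of that script round-trip, assuming the `indic-nlp-library` package is installed (the example string is illustrative only):
```python
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

bn_text = "রবীন্দ্রনাথ ঠাকুর"  # Bengali-script input
dev_text = UnicodeIndicTransliterator.transliterate(bn_text, "bn", "hi")   # map to Devanagari before building the model input
back_to_bn = UnicodeIndicTransliterator.transliterate(dev_text, "hi", "bn")  # map the model's Devanagari output back
print(dev_text, back_to_bn)
```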
## Benchmarks
Scores on the `IndicWikiBio` test sets are as follows:
Language | RougeL
---------|----------------------------
as | 56.28
bn | 57.42
hi | 67.48
kn | 40.01
ml | 38.84
or | 67.13
pa | 52.88
ta | 51.82
te | 51.43
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
# License
The model is available under the MIT License. |
Davlan/m2m100_418M-eng-yor-mt | Davlan | 2022-03-29T09:21:53Z | 820 | 1 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-eng-yor-mt
## Model description
**m2m100_418M-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
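A minimal usage sketch (assuming the checkpoint keeps the standard M2M100 tokenizer configuration; the example sentence is illustrative):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Davlan/m2m100_418M-eng-yor-mt")
tokenizer = M2M100Tokenizer.from_pretrained("Davlan/m2m100_418M-eng-yor-mt")

tokenizer.src_lang = "en"
encoded = tokenizer("Unity is strength.", return_tensors="pt")
# Force Yorùbá as the target language
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("yo"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```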
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **13.39 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/m2m100_418M-yor-eng-mt | Davlan | 2022-03-29T09:21:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-yor-eng-mt
## Model description
**m2m100_418M-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
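A minimal usage sketch for this direction (same assumptions as for standard M2M100 checkpoints; the example sentence is illustrative):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Davlan/m2m100_418M-yor-eng-mt")
tokenizer = M2M100Tokenizer.from_pretrained("Davlan/m2m100_418M-yor-eng-mt")

tokenizer.src_lang = "yo"
encoded = tokenizer("Báwo ni?", return_tensors="pt")
# Force English as the target language
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```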
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **16.76 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
|
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan | PereLluis13 | 2022-03-29T08:51:28Z | 6,942 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ca",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language: ca
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Catalan XLSR Wav2Vec Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ca
type: common_voice
args: ca
metrics:
- name: Test WER
type: wer
value: 8.11
---
# Disclaimer
This model was trained on Common Voice 6. If you need a Catalan ASR model, I recommend checking [wav2vec2-xls-r-1b-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-1b-ca-lm), a 1b-parameter model with a language model on top trained on CV8+, which performs much better, or [wav2vec2-xls-r-300m-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm), which has the same size (300m) as this model but was trained on CV8+ and uses the same language model.
# Wav2Vec2-Large-XLSR-53-ca
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the catalan test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ca", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
import jiwer
# Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```
**Test Result**: 8.11 %
## Training
The Common Voice `train` and `validation` splits were used for training. At the second epoch, training was halted due to a memory issue and resumed with a lower batch size, with gradient accumulation steps scaled so that the effective batch size remained 32 throughout training. The model was then trained for an additional 10 epochs in which half the male samples were pitched up.
The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were made in order to speed up the ordering by length during training, which can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6). Another version trained for Catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset. |
PereLluis13/wav2vec2-xls-r-1b-ca | PereLluis13 | 2022-03-29T08:44:49Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"collectivat/tv3_parla",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"projecte-aina/parlament_parla",
"robust-speech-event",
"ca",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:collectivat/tv3_parla",
"dataset:projecte-aina/parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-1b-ca
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 11.030639657300516
- name: Test CER
type: cer
value: 2.8405630530040634
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 6.483115660665961
- name: Test CER
type: cer
value: 2.0212863746191828
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 17.917773414943988
- name: Test CER
type: cer
value: 8.872589572206396
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Catalan Dev Data
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 27.126683954209097
- name: Test CER
type: cer
value: 14.213308815078726
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 18.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ca
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
## Model description
Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model.
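A minimal transcription sketch (the file name is a placeholder; the pipeline expects 16 kHz audio and needs ffmpeg to decode audio files):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="PereLluis13/wav2vec2-xls-r-1b-ca")
print(asr("speech_ca.wav")["text"])
```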
## Intended uses & limitations
As with any model trained on crowdsourced data, this model can reflect the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.
## Training and evaluation data
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
### Training results
Check the Tensorboard tab to see the training profile and evaluation results throughout training. The model was evaluated on the test splits of each of the datasets used during training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible. |
PereLluis13/wav2vec2-xls-r-300m-ca | PereLluis13 | 2022-03-29T08:43:53Z | 52 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"collectivat/tv3_parla",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"projecte-aina/parlament_parla",
"robust-speech-event",
"ca",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:collectivat/tv3_parla",
"dataset:projecte-aina/parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-300m-ca
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 13.170091241317552
- name: Test CER
type: cer
value: 3.356726205534543
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 8.048005647723261
- name: Test CER
type: cer
value: 2.240912911020065
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 23.320629787889285
- name: Test CER
type: cer
value: 10.439216202089989
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: speech-recognition-community-v2/dev_data ca
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 31.99671115046487
- name: Test CER
type: cer
value: 15.820020687277325
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 22.04
---
# wav2vec2-xls-r-300m-ca
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
It achieves the following results on the evaluation set (for the three datasets):
- Loss: 0.2472
- Wer: 0.1499
## Model description
Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) Model card. This is just a finetuned version of that model.
## Intended uses & limitations
As with any model trained on crowdsourced data, this model can reflect the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.
## Training and evaluation data
More information needed
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 18.0
- mixed_precision_training: Native AMP
### Training results
Check the Tensorboard tab to see the training profile and evaluation results throughout training. The model was evaluated on the test splits of each of the datasets used during training.
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2099 | 0.09 | 500 | 3.4125 | 1.0 |
| 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 |
| 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 |
| 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 |
| 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 |
| 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 |
| 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 |
| 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 |
| 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 |
| 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 |
| 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 |
| 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 |
| 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 |
| 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 |
| 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 |
| 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 |
| 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 |
| 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 |
| 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 |
| 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 |
| 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 |
| 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 |
| 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 |
| 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 |
| 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 |
| 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 |
| 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 |
| 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 |
| 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 |
| 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 |
| 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 |
| 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 |
| 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 |
| 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 |
| 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 |
| 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 |
| 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 |
| 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 |
| 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 |
| 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 |
| 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 |
| 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 |
| 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 |
| 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 |
| 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 |
| 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 |
| 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 |
| 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 |
| 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 |
| 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 |
| 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 |
| 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 |
| 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 |
| 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 |
| 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 |
| 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 |
| 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 |
| 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 |
| 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 |
| 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 |
| 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 |
| 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
|
PereLluis13/wav2vec2-xls-r-300m-ca-lm | PereLluis13 | 2022-03-29T08:42:55Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"collectivat/tv3_parla",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"projecte-aina/parlament_parla",
"robust-speech-event",
"ca",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:collectivat/tv3_parla",
"dataset:projecte-aina/parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-300m-ca-lm
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 6.771703090587865
- name: Test CER
type: cer
value: 2.1007777843712293
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 5.565360630662431
- name: Test CER
type: cer
value: 1.8594390167034354
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 13.53312545713516
- name: Test CER
type: cer
value: 8.684635913340556
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Catalan Dev Data
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 26.04515843400164
- name: Test CER
type: cer
value: 15.056890012642224
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 17.68
---
# wav2vec2-xls-r-300m-ca-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
It achieves the following results on the evaluation set (for the three datasets and without the LM):
- Loss: 0.2472
- Wer: 0.1499
## Model description
Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) Model card. This is just a finetuned version of that model.
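A minimal transcription sketch; installing `pyctcdecode` and `kenlm` should allow the pipeline to pick up the bundled n-gram decoder (an assumption about the repository contents), and the file name is a placeholder:
```python
# pip install pyctcdecode kenlm
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="PereLluis13/wav2vec2-xls-r-300m-ca-lm")
print(asr("speech_ca.wav")["text"])
```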
## Intended uses & limitations
As with any model trained on crowdsourced data, this model can reflect the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.
## Training and evaluation data
More information needed
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 18.0
- mixed_precision_training: Native AMP
### Training results
Check the Tensorboard tab to see the training profile and evaluation results throughout training. The model was evaluated on the test splits of each of the datasets used during training.
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2099 | 0.09 | 500 | 3.4125 | 1.0 |
| 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 |
| 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 |
| 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 |
| 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 |
| 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 |
| 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 |
| 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 |
| 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 |
| 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 |
| 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 |
| 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 |
| 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 |
| 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 |
| 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 |
| 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 |
| 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 |
| 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 |
| 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 |
| 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 |
| 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 |
| 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 |
| 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 |
| 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 |
| 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 |
| 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 |
| 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 |
| 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 |
| 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 |
| 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 |
| 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 |
| 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 |
| 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 |
| 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 |
| 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 |
| 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 |
| 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 |
| 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 |
| 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 |
| 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 |
| 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 |
| 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 |
| 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 |
| 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 |
| 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 |
| 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 |
| 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 |
| 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 |
| 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 |
| 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 |
| 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 |
| 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 |
| 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 |
| 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 |
| 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 |
| 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 |
| 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 |
| 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 |
| 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 |
| 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 |
| 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 |
| 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
|
STARBORN/MMC | STARBORN | 2022-03-29T07:14:35Z | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | 2022-03-29T07:12:26Z | ---
license: mit
---
Metamodel Card (MMC) builds on MC and DC schemas by adding system level abstraction to the data. MMC instantiations follow |
jorge-henao/gpt2-small-spanish-disco-poetry-15 | jorge-henao | 2022-03-29T05:17:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-29T04:20:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-disco-poetry-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-disco-poetry-15
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2465
## Model description
More information needed
## Intended uses & limitations
More information needed
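Even so, the model can be queried like any GPT-2 checkpoint; a minimal generation sketch (the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jorge-henao/gpt2-small-spanish-disco-poetry-15")
out = generator("La luna sobre el mar", max_length=60, do_sample=True, top_p=0.95, num_return_sequences=1)
print(out[0]["generated_text"])
```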
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
rampasek/prot_bert_bfd_rosetta204060aa | rampasek | 2022-03-29T04:35:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"protein language model",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-29T04:02:40Z | ---
language: protein
tags:
- protein language model
datasets:
- BFD
- Custom Rosetta
---
# ProtBert-BFD finetuned on Rosetta 20,40,60AA dataset
This model is finetuned to predict Rosetta fold energy using a dataset of 300k protein sequences:
100k of 20AA, 100k of 40AA, and 100k of 60AA
Current model in this repo: `prot_bert_bfd-finetuned-032822_1323`
## Performance
- 20AA sequences (1k eval set):\
Metrics: 'mae': 0.100418, 'r2': 0.989028, 'mse': 0.016266, 'rmse': 0.127537
- 40AA sequences (10k eval set):\
Metrics: 'mae': 0.173888, 'r2': 0.963361, 'mse': 0.048218, 'rmse': 0.219587
- 60AA sequences (10k eval set):\
Metrics: 'mae': 0.235238, 'r2': 0.930164, 'mse': 0.088131, 'rmse': 0.2968
## `prot_bert_bfd` from ProtTrans
The starting pretrained model is from ProtTrans, trained on 2.1 billion proteins from BFD.
It was trained on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
[this repository](https://github.com/agemagician/ProtTrans).
> Created by [Ladislav Rampasek](https://rampasek.github.io)
|