modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-14 06:27:53) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 519 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-14 06:27:45) | card (string, 11 – 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Ayham/ernie_roberta_summarization_cnn_dailymail | Ayham | 2022-03-04T01:47:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-03T18:05:21Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: ernie_roberta_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ernie_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
batterydata/batteryscibert-cased-squad-v1 | batterydata | 2022-03-03T20:29:14Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BatterySciBERT-cased for QA
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.66,
"f1": 87.43,
```
Evaluated on the battery device dataset.
```
"precision": 65.09,
"recall": 84.56,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
Kevincp560/wikihow-t5-small-finetuned-pubmed | Kevincp560 | 2022-03-03T20:22:04Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:pub_med_summarization_dataset",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-03T19:09:22Z | ---
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: wikihow-t5-small-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 8.9619
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikihow-t5-small-finetuned-pubmed
This model is a fine-tuned version of [deep-learning-analytics/wikihow-t5-small](https://huggingface.co/deep-learning-analytics/wikihow-t5-small) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2702
- Rouge1: 8.9619
- Rouge2: 3.2719
- Rougel: 8.1558
- Rougelsum: 8.5714
- Gen Len: 19.0
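The card does not yet include a usage example; a minimal sketch (assuming this checkpoint works with the standard `transformers` summarization pipeline, and using a placeholder input text) could look like:
```python
from transformers import pipeline

model_name = "Kevincp560/wikihow-t5-small-finetuned-pubmed"

# T5 checkpoints fine-tuned for summarization can be driven through the
# generic summarization pipeline; max_length roughly mirrors the reported
# generation length of 19 tokens.
summarizer = pipeline("summarization", model=model_name, tokenizer=model_name)

article = "Long PubMed article text goes here ..."  # placeholder input
print(summarizer(article, max_length=20, min_length=5))
```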
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5984 | 1.0 | 4000 | 2.3696 | 10.237 | 3.8609 | 8.9776 | 9.677 | 19.0 |
| 2.5677 | 2.0 | 8000 | 2.3132 | 9.302 | 3.4499 | 8.3816 | 8.8831 | 19.0 |
| 2.5038 | 3.0 | 12000 | 2.2884 | 9.0578 | 3.3103 | 8.23 | 8.6723 | 19.0 |
| 2.4762 | 4.0 | 16000 | 2.2758 | 9.0001 | 3.2882 | 8.1845 | 8.6084 | 19.0 |
| 2.4393 | 5.0 | 20000 | 2.2702 | 8.9619 | 3.2719 | 8.1558 | 8.5714 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus | repro-rights-amicus-briefs | 2022-03-03T20:21:45Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: legal-bert-base-uncased-finetuned-RRamicus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased-finetuned-RRamicus
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
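No usage snippet is provided; a minimal sketch (assuming the standard `transformers` fill-mask pipeline and an illustrative input sentence) might be:
```python
from transformers import pipeline

model_name = "repro-rights-amicus-briefs/legal-bert-base-uncased-finetuned-RRamicus"

# The tags mark this checkpoint as a masked language model, so BERT's
# [MASK] token can be used to query it.
unmasker = pipeline("fill-mask", model=model_name)
print(unmasker("The court held that the statute was [MASK]."))
```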
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 928
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.021 | 1.0 | 1118 | 1.3393 |
| 1.2272 | 2.0 | 2236 | 1.2612 |
| 1.2467 | 3.0 | 3354 | 1.2403 |
| 1.2149 | 4.0 | 4472 | 1.2276 |
| 1.1855 | 5.0 | 5590 | 1.2101 |
| 1.1674 | 6.0 | 6708 | 1.2020 |
| 1.1508 | 7.0 | 7826 | 1.1893 |
| 1.1386 | 8.0 | 8944 | 1.1870 |
| 1.129 | 9.0 | 10062 | 1.1794 |
| 1.1193 | 10.0 | 11180 | 1.1759 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
mcdzwil/bert-base-NER-finetuned-ner-ISU | mcdzwil | 2022-03-03T20:21:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-03T20:12:34Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-NER-finetuned-ner-ISU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner-ISU
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1090
- Precision: 0.9408
- Recall: 0.8223
- F1: 0.8776
- Accuracy: 0.9644
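No usage snippet is provided; a minimal sketch (assuming the standard `transformers` token-classification pipeline and an illustrative sentence) might be:
```python
from transformers import pipeline

model_name = "mcdzwil/bert-base-NER-finetuned-ner-ISU"

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline("token-classification", model=model_name, aggregation_strategy="simple")
print(ner("Iowa State University is located in Ames, Iowa."))
```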
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.1411 | 0.8970 | 0.7840 | 0.8367 | 0.9473 |
| No log | 2.0 | 96 | 0.1231 | 0.9453 | 0.7964 | 0.8645 | 0.9589 |
| No log | 3.0 | 144 | 0.1090 | 0.9408 | 0.8223 | 0.8776 | 0.9644 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
batterydata/bert-base-uncased-squad-v1 | batterydata | 2022-03-03T19:53:31Z | 69 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BERT-base-uncased for QA
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 80.93,
"f1": 88.20,
```
Evaluated on the battery device dataset.
```
"precision": 62.19,
"recall": 75.00,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
kaixinwang/NLP | kaixinwang | 2022-03-03T19:06:29Z | 6 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"sentiment analysis",
"STEM",
"text classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- "Python"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- "sentiment analysis"
- "STEM"
- "text classification"
---
Welcome! This model was built for sentiment analysis of STEM course reviews at UCLA.
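A minimal usage sketch (an assumption, since the card ships no example; the repository lists only TensorFlow weights, so the TF framework is requested explicitly and the input sentence is illustrative):
```python
from transformers import pipeline

model_name = "kaixinwang/NLP"

# framework="tf" loads the TensorFlow weights that this repository provides.
classifier = pipeline("text-classification", model=model_name, framework="tf")
print(classifier("The professor explained the material clearly and the labs were engaging."))
```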
- Author: Kaixin Wang
- Email: [email protected]
- Time Updated: March 2022 |
Kevincp560/t5-small-finetuned-pubmed | Kevincp560 | 2022-03-03T17:22:09Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:pub_med_summarization_dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-03T16:24:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: t5-small-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 8.8295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-pubmed
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2635
- Rouge1: 8.8295
- Rouge2: 3.2594
- Rougel: 7.9975
- Rougelsum: 8.4483
- Gen Len: 19.0
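No usage example is included; a minimal sketch (assuming the usual `summarize:` prefix convention for T5 and a placeholder input) might be:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Kevincp560/t5-small-finetuned-pubmed"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# T5-style summarization is conventionally prompted with a "summarize: " prefix.
inputs = tokenizer("summarize: " + "Long PubMed article text goes here ...",
                   return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```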
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.5892 | 1.0 | 4000 | 2.3616 | 10.1169 | 3.9666 | 8.8854 | 9.5836 | 19.0 |
| 2.559 | 2.0 | 8000 | 2.3045 | 9.4321 | 3.5398 | 8.424 | 8.984 | 19.0 |
| 2.5029 | 3.0 | 12000 | 2.2820 | 9.1658 | 3.3686 | 8.2222 | 8.7311 | 19.0 |
| 2.4673 | 4.0 | 16000 | 2.2692 | 8.8973 | 3.2617 | 8.0395 | 8.5046 | 19.0 |
| 2.4331 | 5.0 | 20000 | 2.2635 | 8.8295 | 3.2594 | 7.9975 | 8.4483 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
nateraw/keras-dummy-model-mixin-demo-w-card | nateraw | 2022-03-03T15:55:09Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
Ayham/ernie_bert_summarization_cnn_dailymail | Ayham | 2022-03-03T15:38:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-03T08:14:16Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: ernie_bert_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ernie_bert_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_wav2vec2 | espnet | 2022-03-03T15:32:39Z | 2 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:iemocap",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2022-03-03T15:29:59Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- iemocap
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_wav2vec2`
This model was trained by YushiUeda using iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout cf73065ba66cf6efb94af4415f0facaaef86abf6
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_wav2vec2
```
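For inference from Python rather than the recipe script, a sketch following the usual ESPnet model-zoo pattern (assuming `espnet_model_zoo` is installed and `speech.wav` is a 16 kHz recording) would be:
```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
# download_and_unpack fetches the checkpoint from the Hub and returns the
# config/model paths that Speech2Text expects.
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_wav2vec2")
)

speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)  # transcript together with the predicted sentiment label
```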
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Mar 3 00:09:55 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `cf73065ba66cf6efb94af4415f0facaaef86abf6`
- Commit date: `Sun Feb 27 19:56:48 2022 -0500`
## Conformer-based encoder and Transformer-based decoder with self-supervised learning features, spectral augmentation, and joint prediction of transcript and sentiment
- ASR config: [conf/tuning/train_asr_conformer_wav2vec2.yaml](conf/tuning/train_asr_conformer_wav2vec2.yaml)
- token_type: word
- Sentiment Labels: Positive, Neutral, Negative
|dataset|Snt|Intent Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|62.4|73.2|74.7|
|decode_asr_model_valid.acc.ave_10best/test|1650|61.1|64.8|66.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_wav2vec2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_wav2vec2_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sanchit-gandhi/wav2vec2-2-rnd-grid-search | sanchit-gandhi | 2022-03-03T14:51:05Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9475
- Wer: 2.0097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9006 | 1.68 | 1500 | 6.9507 | 2.0097 |
| 6.9503 | 3.36 | 3000 | 6.9475 | 2.0097 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
amtam0/timer-ner-fr | amtam0 | 2022-03-03T14:12:18Z | 10 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: fr
widget:
- text: 'génère 27 séries de 54 seconde '
- text: ' 9 cycles de 17 minute '
- text: 'initie 17 sets de 44 secondes 297 minutes entre séries'
- text: ' 13 sets de 88 secondes 225 minutes 49 entre chaque série'
- text: 'génère 39 séries de 19 minute 21 minute 45 entre séries'
- text: 'débute 47 sets de 6 heures '
- text: 'débute 1 cycle de 25 minutes 48 23 minute 32 entre chaque série'
- text: 'commence 23 séries de 18 heure et demi 25 minutes 41 entre séries'
- text: ' 13 cycles de 52 secondes '
- text: 'crée 31 série de 60 secondes '
- text: ' 7 set de 36 secondes 139 minutes 34 entre séries'
- text: 'commence 37 sets de 51 minute 25 295 minute entre chaque série'
- text: 'crée 11 cycles de 72 seconde 169 minute 15 entre chaque série'
- text: 'initie 5 série de 33 minutes 48 '
- text: 'crée 23 set de 1 minute 46 279 minutes 50 entre chaque série'
- text: 'génère 41 série de 35 minutes 55 '
- text: 'lance 11 cycles de 4 heures '
- text: 'crée 47 cycle de 28 heure moins quart 243 minutes 45 entre chaque série'
- text: 'initie 23 set de 36 secondes '
- text: 'commence 37 sets de 24 heures et quart '
---
#### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5)
A 7-class French NER model built on [Flair TransformerWordEmbeddings - camembert-base](https://github.com/flairNLP/flair/).
| **tag** | **meaning** |
|---------------------------------|-----------|
| nb_rounds | Number of rounds |
| duration_br_sd | Duration btwn rounds in seconds |
| duration_br_min | Duration btwn rounds in minutes |
| duration_br_hr | Duration btwn rounds in hours |
| duration_wt_sd | workout duration in seconds |
| duration_wt_min | workout duration in minutes |
| duration_wt_hr | workout duration in hours |
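A minimal usage sketch (an assumption — the card ships no code; it presumes the tagger loads directly from the Hugging Face Hub via Flair's `SequenceTagger`):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("amtam0/timer-ner-fr")

sentence = Sentence("génère 27 séries de 54 seconde")
tagger.predict(sentence)

# Show the tagged spans (nb_rounds, duration_wt_sd, ...).
print(sentence.to_tagged_string())
```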
---
A synthetic dataset was used for training (and could be improved). Example sentences are shown in the widget. |
sanchit-gandhi/wav2vec2-gpt2-wandb-grid-search | sanchit-gandhi | 2022-03-03T13:39:57Z | 40 | 0 | transformers | [
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jiobiala24/wav2vec2-base-1 | jiobiala24 | 2022-03-03T10:47:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:56:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9254
- Wer: 0.3216
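No usage example is included; a minimal sketch (assuming the generic `transformers` ASR pipeline and a 16 kHz mono audio file named `sample.wav`) might be:
```python
from transformers import pipeline

model_name = "jiobiala24/wav2vec2-base-1"

# CTC-based wav2vec2 checkpoints work with the generic ASR pipeline;
# the input can be a path to a (16 kHz, mono) audio file.
asr = pipeline("automatic-speech-recognition", model=model_name)
print(asr("sample.wav"))
```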
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.6597 | 2.2 | 1000 | 0.8904 | 0.5388 |
| 0.4751 | 4.41 | 2000 | 0.7009 | 0.3976 |
| 0.3307 | 6.61 | 3000 | 0.7068 | 0.3672 |
| 0.2574 | 8.81 | 4000 | 0.7320 | 0.3544 |
| 0.2096 | 11.01 | 5000 | 0.7803 | 0.3418 |
| 0.177 | 13.22 | 6000 | 0.7768 | 0.3423 |
| 0.1521 | 15.42 | 7000 | 0.8113 | 0.3375 |
| 0.1338 | 17.62 | 8000 | 0.8153 | 0.3325 |
| 0.1168 | 19.82 | 9000 | 0.8851 | 0.3306 |
| 0.104 | 22.03 | 10000 | 0.8811 | 0.3277 |
| 0.0916 | 24.23 | 11000 | 0.8722 | 0.3254 |
| 0.083 | 26.43 | 12000 | 0.9527 | 0.3265 |
| 0.0766 | 28.63 | 13000 | 0.9254 | 0.3216 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Johnson-Lsx/Shaoxiong_Lin_dns_ins20_enh_enh_train_enh_dccrn_raw | Johnson-Lsx | 2022-03-03T10:43:01Z | 0 | 0 | espnet | [
"espnet",
"audio",
"audio-to-audio",
"en",
"dataset:dns_ins20",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| audio-to-audio | 2022-03-03T08:35:14Z | ---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- dns_ins20
license: cc-by-4.0
---
## ESPnet2 ENH model
### `Johnson-Lsx/Shaoxiong_Lin_dns_ins20_enh_enh_train_enh_dccrn_raw`
This model was trained by Shaoxiong Lin using dns_ins20 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 4538462eb7dc6a6b858adcbd3a526fb8173d6f73
pip install -e .
cd egs2/dns_ins20/enh1
./run.sh --skip_data_prep false --skip_train true --download_model Johnson-Lsx/Shaoxiong_Lin_dns_ins20_enh_enh_train_enh_dccrn_raw
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Thu Feb 10 23:11:40 CST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.9.1`
- Git hash: `6f66283b9eed7b0d5e5643feb18d8f60118a4afc`
- Commit date: `Mon Dec 13 15:30:29 2021 +0800`
## enh_train_enh_dccrn_batch_size_raw
config: ./conf/tuning/train_enh_dccrn_batch_size.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_synthetic|0.98|24.69|24.69|0.00|
|enhanced_tt_synthetic_no_reverb|0.96|17.69|17.69|0.00|
|enhanced_tt_synthetic_with_reverb|0.81|10.45|10.45|0.00|
## ENH config
<details><summary>expand</summary>
```
config: ./conf/tuning/train_enh_dccrn_batch_size.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dccrn_batch_size_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 46366
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 64000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_synthetic/wav.scp
- speech_mix
- sound
- - dump/raw/tr_synthetic/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr_synthetic/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/cv_synthetic/wav.scp
- speech_mix
- sound
- - dump/raw/cv_synthetic/spk1.scp
- speech_ref1
- sound
- - dump/raw/cv_synthetic/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 1.0e-07
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.7
patience: 1
init: null
model_conf:
loss_type: si_snr
criterions:
# The first criterion
- name: si_snr
conf:
eps: 1.0e-7
# the wrapper for the current criterion
# for single-talker case, we simplely use fixed_order wrapper
wrapper: fixed_order
wrapper_conf:
weight: 1.0
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 512
win_length: 400
hop_length: 100
separator: dccrn
separator_conf: {}
decoder: stft
decoder_conf:
n_fft: 512
win_length: 400
hop_length: 100
required:
- output_dir
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
prk/roberta-base-squad2-finetuned-squad | prk | 2022-03-03T10:26:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on a custom dataset.
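A minimal usage sketch, assuming the standard extractive question-answering interface; the question and context below are illustrative only:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="prk/roberta-base-squad2-finetuned-squad")
result = qa(
    question="Who wrote the report?",
    context="The report was written by the audit team in March.",
)
print(result["answer"], result["score"])
```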
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 0.1894 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
cammy/bart-large-cnn-finetuned-new-100-pad-early | cammy | 2022-03-03T10:23:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-03T10:22:53Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-new-100-pad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-new-100-pad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9543
- Rouge1: 21.8858
- Rouge2: 8.1444
- Rougel: 16.5751
- Rougelsum: 19.163
- Gen Len: 66.8
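A minimal summarization sketch; the input text and the generation lengths below are illustrative only:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="cammy/bart-large-cnn-finetuned-new-100-pad-early")
article = (
    "The city council met on Tuesday to discuss the new transit plan, "
    "which would add three bus routes and extend service hours on weekends."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```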
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 0.8692 | 20.2714 | 6.206 | 16.3362 | 18.7117 | 66.4 |
| No log | 2.0 | 200 | 0.9543 | 21.8858 | 8.1444 | 16.5751 | 19.163 | 66.8 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
carolEileen/distilbert-base-uncased-finetuned-imdb | carolEileen | 2022-03-03T09:07:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-03T08:55:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4725
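A minimal masked-language-modelling sketch; the example sentence is illustrative only:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="carolEileen/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```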
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5756 | 2.0 | 314 | 2.4230 |
| 2.5395 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
sattaguru/game | sattaguru | 2022-03-03T05:31:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-03-03T05:30:04Z | https://sattaking-sattaking.com |
shahp7575/electricidad-base-muchocine-finetuned | shahp7575 | 2022-03-03T05:20:16Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"spanish",
"sentiment",
"es",
"dataset:muchocine",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-03T03:46:13Z | ---
language:
- es
tags:
- spanish
- sentiment
datasets:
- muchocine
widget:
- "Increíble pelicula. ¡Altamente recomendado!"
- "Extremadamente malo. Baja calidad"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-base-muchocine-finetuned
This model fine-tunes [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for sentiment classification, predicting the *star_rating*.
### How to use
The model can be used directly with the HuggingFace `pipeline`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("shahp7575/electricidad-base-muchocine-finetuned")
model = AutoModelForSequenceClassification.from_pretrained("shahp7575/electricidad-base-muchocine-finetuned")
```
### Examples
```python
from transformers import pipeline
clf = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
clf('Esta película es una joya. Todo fue perfecto: historia, casting, dirección. Me encantó el clímax.')
>>> [{'label': '5', 'score': 0.9658033847808838}]
clf("La historia y el casting fueron geniales.")
>>> [{'label': '4', 'score': 0.6666394472122192}]
clf("Me gustó pero podría ser mejor.")
>>> [{'label': '3', 'score': 0.7013391852378845}]
clf("dinero tirado en esta pelicula")
>>> [{'label': '2', 'score': 0.7564149498939514}]
clf("esta película es una película absolutamente repugnante. odio todo al respecto. gastó tanto dinero.")
>>> [{'label': '1', 'score': 0.3040296733379364}]
```
|
yoavgur/gpt2-bash-history-baseline | yoavgur | 2022-03-02T23:02:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-bash-history-baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-bash-history-baseline
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0349
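A minimal generation sketch; the prompt is illustrative only, since the card does not document the expected input format for bash-history completion:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="yoavgur/gpt2-bash-history-baseline")
print(generator("cd projects && git ", max_length=30, num_return_sequences=1)[0]["generated_text"])
```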
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 158 | 2.1038 |
| No log | 2.0 | 316 | 2.0349 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
edugp/kenlm | edugp | 2022-03-02T22:44:44Z | 0 | 51 | null | [
"kenlm",
"perplexity",
"n-gram",
"kneser-ney",
"bigscience",
"es",
"af",
"ar",
"arz",
"as",
"bn",
"fr",
"sw",
"eu",
"ca",
"zh",
"en",
"hi",
"ur",
"id",
"pt",
"vi",
"gu",
"kn",
"ml",
"mr",
"ta",
"te",
"yo",
"dataset:wikipedia",
"dataset:oscar",
"license:mit",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language:
- es
- af
- ar
- arz
- as
- bn
- fr
- sw
- eu
- ca
- zh
- en
- hi
- ur
- id
- pt
- vi
- gu
- kn
- ml
- mr
- ta
- te
- yo
tags:
- kenlm
- perplexity
- n-gram
- kneser-ney
- bigscience
license: "mit"
datasets:
- wikipedia
- oscar
---
# KenLM models
This repo contains several KenLM models trained on different tokenized datasets and languages.
KenLM models are probabilistic n-gram language models. One use case of these models is fast perplexity estimation for [filtering or sampling large datasets](https://huggingface.co/bertin-project/bertin-roberta-base-spanish). For example, one could use a KenLM model trained on French Wikipedia to run inference on a large dataset and filter out samples that are very unlikely to appear on Wikipedia (high perplexity), or very simple, non-informative sentences that appear repeatedly (low perplexity).
At the root of this repo you will find different directories named after the dataset the models were trained on (e.g. `wikipedia`, `oscar`). Within each directory, you will find several models trained on different language subsets of the dataset (e.g. `en (English)`, `es (Spanish)`, `fr (French)`). For each language you will find three different files:
* `{language}.arpa.bin`: The trained KenLM model binary
* `{language}.sp.model`: The trained SentencePiece model used for tokenization
* `{language}.sp.vocab`: The vocabulary file for the SentencePiece model
The models have been trained using some of the preprocessing steps from [cc_net](https://github.com/facebookresearch/cc_net), in particular replacing numbers with zeros and normalizing punctuation. So, it is important to keep the default values for the parameters: `lower_case`, `remove_accents`, `normalize_numbers` and `punctuation` when using the pre-trained models in order to replicate the same pre-processing steps at inference time.
# Dependencies
* KenLM: `pip install https://github.com/kpu/kenlm/archive/master.zip`
* SentencePiece: `pip install sentencepiece`
# Example:
```
from model import KenlmModel
# Load model trained on English wikipedia
model = KenlmModel.from_pretrained("wikipedia", "en")
# Get perplexity
model.get_perplexity("I am very perplexed")
# 341.3 (low perplexity, since sentence style is formal and with no grammar mistakes)
model.get_perplexity("im hella trippin")
# 46793.5 (high perplexity, since the sentence is colloquial and contains grammar mistakes)
```
In the example above we see that, since Wikipedia is a collection of encyclopedic articles, a KenLM model trained on it will naturally give lower perplexity scores to sentences with formal language and no grammar mistakes than to colloquial sentences with grammar mistakes. |
hcy11/distilbert-base-uncased-finetuned-squad | hcy11 | 2022-03-02T20:32:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2131
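A minimal extractive-QA sketch without the pipeline wrapper; the question and context are illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "hcy11/distilbert-base-uncased-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What dataset was the model fine-tuned on?"
context = "The model was fine-tuned on the SQuAD dataset for extractive question answering."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# take the most likely start/end token positions and decode the answer span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```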
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2672 | 1.0 | 5533 | 1.2131 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
datnth1709/Phobert-classifier | datnth1709 | 2022-03-02T18:29:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"arxiv:2003.00744",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | # <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
 - Two PhoBERT versions, "base" and "large", are the first public large-scale monolingual language models pre-trained for Vietnamese. The PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md), which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
**Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
### Installation <a name="install2"></a>
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
### Pre-trained models <a name="models2"></a>
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
    features = phobert(input_ids)  # model outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```
|
mcdzwil/bert-base-NER-finetuned-ner | mcdzwil | 2022-03-02T16:53:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-NER-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1670
- Precision: 0.8358
- Recall: 0.7615
- F1: 0.7969
- Accuracy: 0.9437
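A minimal NER sketch; the entity labels returned depend on the (undocumented) fine-tuning dataset, and the example sentence is illustrative only:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mcdzwil/bert-base-NER-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("John Smith flew from Boston to Berlin for the Acme Corp. meeting."))
```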
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.1892 | 0.8240 | 0.7267 | 0.7723 | 0.9341 |
| No log | 2.0 | 96 | 0.1812 | 0.8667 | 0.7458 | 0.8017 | 0.9441 |
| No log | 3.0 | 144 | 0.1670 | 0.8358 | 0.7615 | 0.7969 | 0.9437 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
mcdzwil/distilbert-base-uncased-finetuned-ner | mcdzwil | 2022-03-02T16:35:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1830
- Precision: 0.9171
- Recall: 0.7099
- F1: 0.8003
- Accuracy: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.2903 | 0.7952 | 0.7063 | 0.7481 | 0.9136 |
| No log | 2.0 | 96 | 0.2015 | 0.9154 | 0.7075 | 0.7981 | 0.9298 |
| No log | 3.0 | 144 | 0.1830 | 0.9171 | 0.7099 | 0.8003 | 0.9316 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
nlpaueb/bert-base-greek-uncased-v1 | nlpaueb | 2022-03-02T16:32:57Z | 4,038 | 35 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"fill-mask",
"el",
"arxiv:2008.12014",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: el
pipeline_tag: fill-mask
thumbnail: https://github.com/nlpaueb/GreekBERT/raw/master/greek-bert-logo.png
widget:
- text: "Σήμερα είναι μια [MASK] μέρα."
---
# GreekBERT
A Greek version of BERT pre-trained language model.
<img src="https://github.com/nlpaueb/GreekBERT/raw/master/greek-bert-logo.png" width="600"/>
## Pre-training corpora
The pre-training corpora of `bert-base-greek-uncased-v1` include:
* The Greek part of [Wikipedia](https://el.wikipedia.org/wiki/Βικιπαίδεια:Αντίγραφα_της_βάσης_δεδομένων),
* The Greek part of [European Parliament Proceedings Parallel Corpus](https://www.statmt.org/europarl/), and
* The Greek part of [OSCAR](https://traces1.inria.fr/oscar/), a cleansed version of [Common Crawl](https://commoncrawl.org).
Future releases will also include:
* The entire corpus of Greek legislation, as published by the [National Publication Office](http://www.et.gr),
* The entire corpus of EU legislation (Greek translation), as published in [Eur-Lex](https://eur-lex.europa.eu/homepage.html?locale=en).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint and vocabulary into the desired format, so that the model can be loaded in two lines of code for both PyTorch and TF2 users.
* We released a model similar to the English `bert-base-uncased` model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
\* You can still have access to the original TensorFlow checkpoints from this [Google Drive folder](https://drive.google.com/drive/folders/1ZjlaE4nvdtgqXiVBTVHCF5I9Ff8ZmztE?usp=sharing).
## Requirements
We published `bert-base-greek-uncased-v1` as part of [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) repository. So, you need to install the transformers library through pip along with PyTorch or Tensorflow 2.
```
pip install transformers
pip install (torch|tensorflow)
```
## Pre-process text (Deaccent - Lower)
**NOTICE:** Preprocessing is now natively supported by the default tokenizer. No need to include the following code.
In order to use `bert-base-greek-uncased-v1`, you have to pre-process texts to lowercase letters and remove all Greek diacritics.
```python
import unicodedata
def strip_accents_and_lowercase(s):
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn').lower()
accented_string = "Αυτή είναι η Ελληνική έκδοση του BERT."
unaccented_string = strip_accents_and_lowercase(accented_string)
print(unaccented_string) # αυτη ειναι η ελληνικη εκδοση του bert.
```
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model = AutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
```
## Use Pretrained Model as a Language Model
```python
import torch
from transformers import *
# Load model and tokenizer
tokenizer_greek = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
lm_model_greek = AutoModelWithLMHead.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
# ================ EXAMPLE 1 ================
text_1 = 'O ποιητής έγραψε ένα [MASK] .'
# EN: 'The poet wrote a [MASK].'
input_ids = tokenizer_greek.encode(text_1)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'o', 'ποιητης', 'εγραψε', 'ενα', '[MASK]', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 5].max(0)[1].item()))
# the most plausible prediction for [MASK] is "song"
# ================ EXAMPLE 2 ================
text_2 = 'Είναι ένας [MASK] άνθρωπος.'
# EN: 'He is a [MASK] person.'
input_ids = tokenizer_greek.encode(text_2)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 3].max(0)[1].item()))
# the most plausible prediction for [MASK] is "good"
# ================ EXAMPLE 3 ================
text_3 = 'Είναι ένας [MASK] άνθρωπος και κάνει συχνά [MASK].'
# EN: 'He is a [MASK] person he does frequently [MASK].'
input_ids = tokenizer_greek.encode(text_3)
print(tokenizer_greek.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'ειναι', 'ενας', '[MASK]', 'ανθρωπος', 'και', 'κανει', 'συχνα', '[MASK]', '.', '[SEP]']
outputs = lm_model_greek(torch.tensor([input_ids]))[0]
print(tokenizer_greek.convert_ids_to_tokens(outputs[0, 8].max(0)[1].item()))
# the most plausible prediction for the second [MASK] is "trips"
```
## Evaluation on downstream tasks
For detailed results read the article:
GREEK-BERT: The Greeks visiting Sesame Street. John Koutsikakis, Ilias Chalkidis, Prodromos Malakasiotis and Ion Androutsopoulos. In the Proceedings of the 11th Hellenic Conference on Artificial Intelligence (SETN 2020). Held Online. 2020. (https://arxiv.org/abs/2008.12014)
### Named Entity Recognition with Greek NER dataset
| Model name | Micro F1 |
| ------------------- | ------------------------------------ |
| BILSTM-CNN-CRF (Ma and Hovy, 2016) | 76.4 ± 2.07 |
| M-BERT-UNCASED (Devlin et al., 2019) | 81.5 ± 1.77 |
| M-BERT-CASED (Devlin et al., 2019) | 82.1 ± 1.35 |
| XLM-R (Conneau et al., 2020) | 84.8 ± 1.50 |
| GREEK-BERT (ours) | **85.7 ± 1.00** |
### Natural Language Inference with XNLI
| Model name | Accuracy |
| ------------------- | ------------------------------------ |
| DAM (Parikh et al., 2016) | 68.5 ± 1.71 |
| M-BERT-UNCASED (Devlin et al., 2019) | 73.9 ± 0.64 |
| M-BERT-CASED (Devlin et al., 2019) | 73.5 ± 0.49 |
| XLM-R (Conneau et al., 2020) | 77.3 ± 0.41 |
| GREEK-BERT (ours) | **78.6 ± 0.62** |
## Author
The model has been officially released with the article "GREEK-BERT: The Greeks visiting Sesame Street. John Koutsikakis, Ilias Chalkidis, Prodromos Malakasiotis and Ion Androutsopoulos. In the Proceedings of the 11th Hellenic Conference on Artificial Intelligence (SETN 2020). Held Online. 2020" (https://arxiv.org/abs/2008.12014).
If you use the model, please cite the following:
```
@inproceedings{greek-bert,
author = {Koutsikakis, John and Chalkidis, Ilias and Malakasiotis, Prodromos and Androutsopoulos, Ion},
title = {GREEK-BERT: The Greeks Visiting Sesame Street},
year = {2020},
isbn = {9781450388788},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411408.3411440},
booktitle = {11th Hellenic Conference on Artificial Intelligence},
pages = {110–117},
numpages = {8},
location = {Athens, Greece},
series = {SETN 2020}
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) | |
lucasmtz/distilbert-base-uncased-finetuned-ner | lucasmtz | 2022-03-02T15:56:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9252181597260577
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.9310804802134283
- name: Accuracy
type: accuracy
value: 0.9834146186474335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9252
- Recall: 0.9370
- F1: 0.9311
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.244 | 1.0 | 878 | 0.0714 | 0.9104 | 0.9181 | 0.9142 | 0.9797 |
| 0.0568 | 2.0 | 1756 | 0.0605 | 0.9183 | 0.9351 | 0.9266 | 0.9827 |
| 0.0302 | 3.0 | 2634 | 0.0610 | 0.9252 | 0.9370 | 0.9311 | 0.9834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
jiobiala24/wav2vec2-base-checkpoint-14 | jiobiala24 | 2022-03-02T15:13:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-14
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-13](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-13) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2822
- Wer: 0.4068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1996 | 1.59 | 1000 | 0.7181 | 0.4079 |
| 0.1543 | 3.17 | 2000 | 0.7735 | 0.4113 |
| 0.1171 | 4.76 | 3000 | 0.8152 | 0.4045 |
| 0.0969 | 6.35 | 4000 | 0.8575 | 0.4142 |
| 0.082 | 7.94 | 5000 | 0.9005 | 0.4124 |
| 0.074 | 9.52 | 6000 | 0.9232 | 0.4151 |
| 0.0653 | 11.11 | 7000 | 0.9680 | 0.4223 |
| 0.0587 | 12.7 | 8000 | 1.0633 | 0.4232 |
| 0.0551 | 14.29 | 9000 | 1.0875 | 0.4171 |
| 0.0498 | 15.87 | 10000 | 1.0281 | 0.4105 |
| 0.0443 | 17.46 | 11000 | 1.2164 | 0.4274 |
| 0.0421 | 19.05 | 12000 | 1.1868 | 0.4191 |
| 0.0366 | 20.63 | 13000 | 1.1678 | 0.4173 |
| 0.0366 | 22.22 | 14000 | 1.2444 | 0.4187 |
| 0.0346 | 23.81 | 15000 | 1.2042 | 0.4169 |
| 0.0316 | 25.4 | 16000 | 1.3019 | 0.4127 |
| 0.0296 | 26.98 | 17000 | 1.2001 | 0.4081 |
| 0.0281 | 28.57 | 18000 | 1.2822 | 0.4068 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jcai1/sentence_similarity_concierge | jcai1 | 2022-03-02T15:04:54Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentence_similarity_concierge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_similarity_concierge
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Accuracy: 0.9748
- F1: 0.9680
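A minimal scoring sketch; the sentence-pair input format and the meaning of the output classes are assumptions, since the training data is not documented here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jcai1/sentence_similarity_concierge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# assumed sentence-pair encoding for a similarity-style classifier
inputs = tokenizer(
    "Where is the gym?",
    "How do I get to the fitness centre?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```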
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 402 | 0.2334 | 0.9412 | 0.9263 |
| 0.2834 | 2.0 | 804 | 0.1656 | 0.9608 | 0.9493 |
| 0.1073 | 3.0 | 1206 | 0.1165 | 0.9748 | 0.9680 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
jcai1/ss_mrpc | jcai1 | 2022-03-02T14:32:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ss_mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ss_mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5960
- Accuracy: 0.8799
- F1: 0.9148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3655 | 0.8578 | 0.8990 |
| 0.524 | 2.0 | 918 | 0.6061 | 0.8260 | 0.8823 |
| 0.2971 | 3.0 | 1377 | 0.5960 | 0.8799 | 0.9148 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
annedirkson/ADR_extraction_patient_forum | annedirkson | 2022-03-02T14:00:09Z | 0 | 0 | null | [
"tf",
"region:us"
]
| null | 2022-03-02T23:29:05Z | A ktrain predictor for NER of ADRs (adverse drug reactions) in patient forum discussions. Created with ktrain 0.29 and transformers 4.10. See requirements.txt for the packages needed to run the model. |
spy24/autonlp-US_to_AUS-607117159 | spy24 | 2022-03-02T10:35:42Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:spy24/autonlp-data-US_to_AUS",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US_to_AUS
co2_eq_emissions: 1.4276876566788055
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 607117159
- CO2 Emissions (in grams): 1.4276876566788055
## Validation Metrics
- Loss: 1.5177973508834839
- Rouge1: 46.134
- Rouge2: 10.578
- RougeL: 45.8856
- RougeLsum: 46.0088
- Gen Len: 3.7283
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-US_to_AUS-607117159
``` |
huggingartists/pink-floyd | huggingartists | 2022-03-02T09:18:41Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/pink-floyd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/pink-floyd
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6b5c50912d99c3cf0eabfec5f427c452.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pink Floyd</div>
<a href="https://genius.com/artists/pink-floyd">
<div style="text-align: center; font-size: 14px;">@pink-floyd</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Pink Floyd.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/pink-floyd).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/pink-floyd")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3j9osgks/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Pink Floyd's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1wlqpngf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1wlqpngf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/pink-floyd')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/pink-floyd")
model = AutoModelWithLMHead.from_pretrained("huggingartists/pink-floyd")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
spy24/autonlp-US-to-UK2-606317091 | spy24 | 2022-03-02T09:03:19Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:spy24/autonlp-data-US-to-UK2",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US-to-UK2
co2_eq_emissions: 1.1913570653422176
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 606317091
- CO2 Emissions (in grams): 1.1913570653422176
## Validation Metrics
- Loss: 1.9264822006225586
- Rouge1: 44.2035
- Rouge2: 6.134
- RougeL: 43.9114
- RougeLsum: 44.0231
- Gen Len: 3.6134
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-US-to-UK2-606317091
``` |
Akash7897/distilbert-base-uncased-finetuned-cola | Akash7897 | 2022-03-02T08:29:47Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.522211073949747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0789
- Matthews Correlation: 0.5222
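A minimal acceptability-classification sketch; the exact label names depend on the saved config, and the sentences are illustrative only:

```python
from transformers import pipeline

cola = pipeline("text-classification", model="Akash7897/distilbert-base-uncased-finetuned-cola")
print(cola("She read the book."))          # expected: acceptable
print(cola("The book was read by she."))   # expected: unacceptable
```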
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1472 | 1.0 | 535 | 0.8407 | 0.4915 |
| 0.1365 | 2.0 | 1070 | 0.9236 | 0.4990 |
| 0.1194 | 3.0 | 1605 | 0.8753 | 0.4953 |
| 0.1313 | 4.0 | 2140 | 0.9684 | 0.5013 |
| 0.0895 | 5.0 | 2675 | 1.0789 | 0.5222 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Theivaprakasham/layoutlmv2-finetuned-sroie | Theivaprakasham | 2022-03-02T08:12:26Z | 21 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:sroie",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- sroie
model-index:
- name: layoutlmv2-finetuned-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-sroie
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0291
- Address Precision: 0.9341
- Address Recall: 0.9395
- Address F1: 0.9368
- Address Number: 347
- Company Precision: 0.9570
- Company Recall: 0.9625
- Company F1: 0.9598
- Company Number: 347
- Date Precision: 0.9885
- Date Recall: 0.9885
- Date F1: 0.9885
- Date Number: 347
- Total Precision: 0.9253
- Total Recall: 0.9280
- Total F1: 0.9266
- Total Number: 347
- Overall Precision: 0.9512
- Overall Recall: 0.9546
- Overall F1: 0.9529
- Overall Accuracy: 0.9961
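A minimal inference sketch; it assumes the base `microsoft/layoutlmv2-base-uncased` processor (which requires detectron2 and pytesseract) and uses a placeholder receipt image path:

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# the base processor is an assumption; OCR is applied automatically via pytesseract
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("Theivaprakasham/layoutlmv2-finetuned-sroie")

image = Image.open("receipt.jpg").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits

predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```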
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address Precision | Address Recall | Address F1 | Address Number | Company Precision | Company Recall | Company F1 | Company Number | Date Precision | Date Recall | Date F1 | Date Number | Total Precision | Total Recall | Total F1 | Total Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 0.05 | 157 | 0.8162 | 0.3670 | 0.7233 | 0.4869 | 347 | 0.0617 | 0.0144 | 0.0234 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.3346 | 0.1844 | 0.2378 | 0.9342 |
| No log | 1.05 | 314 | 0.3490 | 0.8564 | 0.8934 | 0.8745 | 347 | 0.8610 | 0.9280 | 0.8932 | 347 | 0.7297 | 0.8559 | 0.7878 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.8128 | 0.6693 | 0.7341 | 0.9826 |
| No log | 2.05 | 471 | 0.1845 | 0.7970 | 0.9049 | 0.8475 | 347 | 0.9211 | 0.9424 | 0.9316 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.8978 | 0.7089 | 0.7923 | 0.9835 |
| 0.7027 | 3.05 | 628 | 0.1194 | 0.9040 | 0.9222 | 0.9130 | 347 | 0.8880 | 0.9135 | 0.9006 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.9263 | 0.7061 | 0.8013 | 0.9853 |
| 0.7027 | 4.05 | 785 | 0.0762 | 0.9397 | 0.9424 | 0.9410 | 347 | 0.8889 | 0.9222 | 0.9052 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.7740 | 0.9078 | 0.8355 | 347 | 0.8926 | 0.9402 | 0.9158 | 0.9928 |
| 0.7027 | 5.05 | 942 | 0.0564 | 0.9282 | 0.9308 | 0.9295 | 347 | 0.9296 | 0.9510 | 0.9402 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.7801 | 0.8588 | 0.8176 | 347 | 0.9036 | 0.9323 | 0.9177 | 0.9946 |
| 0.0935 | 6.05 | 1099 | 0.0548 | 0.9222 | 0.9222 | 0.9222 | 347 | 0.6975 | 0.7378 | 0.7171 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.8608 | 0.8732 | 0.8670 | 347 | 0.8648 | 0.8804 | 0.8725 | 0.9921 |
| 0.0935 | 7.05 | 1256 | 0.0410 | 0.92 | 0.9280 | 0.9240 | 347 | 0.9486 | 0.9568 | 0.9527 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9091 | 0.9222 | 0.9156 | 347 | 0.9414 | 0.9488 | 0.9451 | 0.9961 |
| 0.0935 | 8.05 | 1413 | 0.0369 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9569 | 0.9597 | 0.9583 | 347 | 0.9772 | 0.9885 | 0.9828 | 347 | 0.9143 | 0.9222 | 0.9182 | 347 | 0.9463 | 0.9524 | 0.9494 | 0.9960 |
| 0.038 | 9.05 | 1570 | 0.0343 | 0.9282 | 0.9308 | 0.9295 | 347 | 0.9624 | 0.9597 | 0.9610 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9206 | 0.9020 | 0.9112 | 347 | 0.9500 | 0.9452 | 0.9476 | 0.9958 |
| 0.038 | 10.05 | 1727 | 0.0317 | 0.9395 | 0.9395 | 0.9395 | 347 | 0.9598 | 0.9625 | 0.9612 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9280 | 0.9280 | 0.9280 | 347 | 0.9539 | 0.9546 | 0.9543 | 0.9963 |
| 0.038 | 11.05 | 1884 | 0.0312 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9514 | 0.9597 | 0.9555 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9226 | 0.9280 | 0.9253 | 347 | 0.9498 | 0.9539 | 0.9518 | 0.9960 |
| 0.0236 | 12.05 | 2041 | 0.0318 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9043 | 0.8991 | 0.9017 | 347 | 0.9467 | 0.9474 | 0.9471 | 0.9956 |
| 0.0236 | 13.05 | 2198 | 0.0291 | 0.9337 | 0.9337 | 0.9337 | 347 | 0.9598 | 0.9625 | 0.9612 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9164 | 0.9164 | 0.9164 | 347 | 0.9496 | 0.9503 | 0.9499 | 0.9960 |
| 0.0236 | 14.05 | 2355 | 0.0300 | 0.9286 | 0.9366 | 0.9326 | 347 | 0.9459 | 0.9568 | 0.9513 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9275 | 0.9222 | 0.9249 | 347 | 0.9476 | 0.9510 | 0.9493 | 0.9959 |
| 0.0178 | 15.05 | 2512 | 0.0307 | 0.9366 | 0.9366 | 0.9366 | 347 | 0.9513 | 0.9568 | 0.9540 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9275 | 0.9222 | 0.9249 | 347 | 0.9510 | 0.9510 | 0.9510 | 0.9959 |
| 0.0178 | 16.05 | 2669 | 0.0300 | 0.9312 | 0.9366 | 0.9339 | 347 | 0.9543 | 0.9625 | 0.9584 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9171 | 0.9251 | 0.9211 | 347 | 0.9477 | 0.9532 | 0.9504 | 0.9959 |
| 0.0178 | 17.05 | 2826 | 0.0292 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9519 | 0.9546 | 0.9532 | 0.9961 |
| 0.0178 | 18.05 | 2983 | 0.0291 | 0.9341 | 0.9395 | 0.9368 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9512 | 0.9546 | 0.9529 | 0.9961 |
| 0.0149 | 19.01 | 3000 | 0.0291 | 0.9341 | 0.9395 | 0.9368 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9512 | 0.9546 | 0.9529 | 0.9961 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.0+cu101
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
BigSalmon/GPTNeo350MInformalToFormalLincoln6 | BigSalmon | 2022-03-02T02:29:46Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln6")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln6")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` |
vkmr/distilbert-base-uncased-finetuned-squad | vkmr | 2022-03-02T02:10:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4488
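For quick testing, the checkpoint can be loaded through the `question-answering` pipeline. This is a minimal usage sketch added for illustration; the question and context below are placeholders rather than examples from the training data.
```python
from transformers import pipeline

model_name = "vkmr/distilbert-base-uncased-finetuned-squad"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

# Placeholder inputs for illustration only.
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```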
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
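For reference, these settings map onto `TrainingArguments` roughly as follows. This is a sketch, not the exact script used to produce the checkpoint; the output directory name is arbitrary.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # arbitrary name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    # The Adam betas/epsilon listed above are the library defaults, so no explicit flags are needed.
)
```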
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2159 | 1.0 | 8235 | 1.2378 |
| 0.9389 | 2.0 | 16470 | 1.3452 |
| 0.7499 | 3.0 | 24705 | 1.4488 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln22 | BigSalmon | 2022-03-01T22:38:59Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln22")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln22")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` |
JAlexis/Bertv1_fine | JAlexis | 2022-03-01T22:33:49Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
- cord19
metrics:
- f1
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/Bertv1_fine"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
# A query with an empty context (as in the second widget example) needs its own dict:
# nlp({'question': 'How can I protect myself against covid-19?', 'context': ' '})
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 7
max_seq_len = max_length
learning_rate = AdamW: 2e-5
```
|
Kevincp560/bart-large-finetuned-pubmed | Kevincp560 | 2022-03-01T18:35:04Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:pub_med_summarization_dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: bart-large-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 10.946
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8135
- Rouge1: 10.946
- Rouge2: 5.0933
- Rougel: 9.5608
- Rougelsum: 10.4259
- Gen Len: 19.0495
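A minimal inference sketch with the `summarization` pipeline (added for illustration; the input text and the length limits are placeholders, not values taken from the training setup):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Kevincp560/bart-large-finetuned-pubmed")

article = "Replace this placeholder with the body of a biomedical article."
# max_length/min_length are illustrative; tune them for your documents.
print(summarizer(article, max_length=128, min_length=32, do_sample=False))
```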
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.0861 | 1.0 | 4000 | 1.8909 | 8.7344 | 3.6919 | 7.8804 | 8.3305 | 20.0 |
| 1.8996 | 2.0 | 8000 | 1.8261 | 10.2124 | 4.6212 | 8.9842 | 9.7417 | 17.632 |
| 1.7459 | 3.0 | 12000 | 1.8160 | 9.4933 | 4.4117 | 8.3977 | 9.0758 | 16.4775 |
| 1.6258 | 4.0 | 16000 | 1.8136 | 10.8248 | 5.0335 | 9.4286 | 10.3123 | 18.724 |
| 1.5214 | 5.0 | 20000 | 1.8135 | 10.946 | 5.0933 | 9.5608 | 10.4259 | 19.0495 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19 | ali2066 | 2022-03-01T14:55:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2711
- Precision: 0.3373
- Recall: 0.5670
- F1: 0.4230
- Accuracy: 0.8943
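A minimal inference sketch with the `token-classification` pipeline (added for illustration; the card does not document the label set, so the tags in the output come from the model's own config):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(tagger("The committee argued that the proposal would reduce emissions."))
```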
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3783 | 0.1833 | 0.3975 | 0.2509 | 0.8413 |
| No log | 2.0 | 60 | 0.3021 | 0.3280 | 0.4820 | 0.3904 | 0.8876 |
| No log | 3.0 | 90 | 0.3196 | 0.3504 | 0.5036 | 0.4133 | 0.8918 |
| No log | 4.0 | 120 | 0.3645 | 0.3434 | 0.5306 | 0.4170 | 0.8759 |
| No log | 5.0 | 150 | 0.4027 | 0.3217 | 0.5486 | 0.4056 | 0.8797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
SuperAI2-Machima/mt5-small-thai-qg-v2 | SuperAI2-Machima | 2022-03-01T14:53:52Z | 26 | 2 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question-generation",
"dataset:NSC2018",
"dataset:wiki-documents-nsc",
"dataset:ThaiQACorpus-DevelopmentDataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- question-generation
language:
- thai
- th
datasets:
- NSC2018
- wiki-documents-nsc
- ThaiQACorpus-DevelopmentDataset
widget:
- text: "โรงเรียนบ้านขุนด่าน ตั้งอยู่ที่ขุนด่าน จ.นครนายก </s>"
example_title: "Example 01"
- text: "พลเอก ประยุทธ์ จันทร์โอชา (เกิด 21 มีนาคม พ.ศ. 2497) ชื่อเล่น ตู่ เป็นนักการเมืองและอดีตนายทหารบกชาวไทย </s>"
example_title: "Example 02"
- text: "วันที่ 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น </s>"
example_title: "Example 03"
- text: "กรุงเทพมหานคร เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. </s>"
example_title: "Example 04"
license: mit
---
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/)
[Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg)
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2').to(device)
tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2')
source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น'
print('Predicted Summary Text : ')
tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
max_length=50,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
#Predicted Summary Text :
#answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น
``` |
ali2066/correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21 | ali2066 | 2022-03-01T14:52:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1059
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1103 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 2.0 | 30 | 0.0842 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 3.0 | 45 | 0.0767 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 4.0 | 60 | 0.0754 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 5.0 | 75 | 0.0735 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47 | ali2066 | 2022-03-01T14:50:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1801
- Precision: 0.6153
- Recall: 0.7301
- F1: 0.6678
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 |
| No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 |
| No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 |
| No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 |
| No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47 | ali2066 | 2022-03-01T14:45:44Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3343
- Precision: 0.1651
- Recall: 0.3039
- F1: 0.2140
- Accuracy: 0.8493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4801 | 0.0352 | 0.0591 | 0.0441 | 0.7521 |
| No log | 2.0 | 60 | 0.3795 | 0.0355 | 0.0795 | 0.0491 | 0.8020 |
| No log | 3.0 | 90 | 0.3359 | 0.0591 | 0.1294 | 0.0812 | 0.8334 |
| No log | 4.0 | 120 | 0.3205 | 0.0785 | 0.1534 | 0.1039 | 0.8486 |
| No log | 5.0 | 150 | 0.3144 | 0.0853 | 0.1571 | 0.1105 | 0.8516 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29 | ali2066 | 2022-03-01T14:42:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.2769
- Recall: 0.4391
- F1: 0.3396
- Accuracy: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4573 | 0.0094 | 0.0027 | 0.0042 | 0.7702 |
| No log | 2.0 | 22 | 0.3660 | 0.1706 | 0.3253 | 0.2239 | 0.8516 |
| No log | 3.0 | 33 | 0.3096 | 0.2339 | 0.408 | 0.2974 | 0.8827 |
| No log | 4.0 | 44 | 0.2868 | 0.2963 | 0.4693 | 0.3633 | 0.8928 |
| No log | 5.0 | 55 | 0.2798 | 0.3141 | 0.48 | 0.3797 | 0.8960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16 | ali2066 | 2022-03-01T14:33:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2663
- Precision: 0.3644
- Recall: 0.4985
- F1: 0.4210
- Accuracy: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5174 | 0.0120 | 0.0061 | 0.0081 | 0.6997 |
| No log | 2.0 | 22 | 0.4029 | 0.1145 | 0.3098 | 0.1672 | 0.8265 |
| No log | 3.0 | 33 | 0.3604 | 0.2539 | 0.4448 | 0.3233 | 0.8632 |
| No log | 4.0 | 44 | 0.3449 | 0.2992 | 0.4755 | 0.3673 | 0.8704 |
| No log | 5.0 | 55 | 0.3403 | 0.3340 | 0.4816 | 0.3945 | 0.8760 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12 | ali2066 | 2022-03-01T14:25:30Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 |
| No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 |
| No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 |
| No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 |
| No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12 | ali2066 | 2022-03-01T14:22:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1290
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0733 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 2.0 | 30 | 0.0732 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 3.0 | 45 | 0.0731 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 4.0 | 60 | 0.0716 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 5.0 | 75 | 0.0635 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35 | ali2066 | 2022-03-01T14:20:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Precision: 0.6138
- Recall: 0.7169
- F1: 0.6613
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 |
| No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 |
| No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 |
| No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 |
| No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44 | ali2066 | 2022-03-01T14:12:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3082
- Precision: 0.2796
- Recall: 0.4373
- F1: 0.3411
- Accuracy: 0.8887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5018 | 0.0192 | 0.0060 | 0.0091 | 0.7370 |
| No log | 2.0 | 22 | 0.4066 | 0.1541 | 0.2814 | 0.1992 | 0.8340 |
| No log | 3.0 | 33 | 0.3525 | 0.1768 | 0.3234 | 0.2286 | 0.8612 |
| No log | 4.0 | 44 | 0.3250 | 0.2171 | 0.3503 | 0.2680 | 0.8766 |
| No log | 5.0 | 55 | 0.3160 | 0.2353 | 0.3713 | 0.2880 | 0.8801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35 | ali2066 | 2022-03-01T14:02:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1155
- Precision: 0.5720
- Recall: 0.4705
- F1: 0.5163
- Accuracy: 0.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1256 | 0.04 | 0.0021 | 0.0039 | 0.9624 |
| No log | 2.0 | 30 | 0.0963 | 0.7121 | 0.5711 | 0.6339 | 0.9794 |
| No log | 3.0 | 45 | 0.0844 | 0.6205 | 0.5732 | 0.5959 | 0.9778 |
| No log | 4.0 | 60 | 0.0770 | 0.6201 | 0.5856 | 0.6023 | 0.9778 |
| No log | 5.0 | 75 | 0.0750 | 0.6174 | 0.5856 | 0.6011 | 0.9777 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21 | ali2066 | 2022-03-01T13:44:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1212
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 15 | 0.1113 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 2.0 | 30 | 0.1069 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 3.0 | 45 | 0.0992 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 4.0 | 60 | 0.0938 | 0.0 | 0.0 | 0.0 | 0.9752 |
| No log | 5.0 | 75 | 0.0920 | 0.0 | 0.0 | 0.0 | 0.9752 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35 | ali2066 | 2022-03-01T13:39:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- Precision: 0.1194
- Recall: 0.2563
- F1: 0.1629
- Accuracy: 0.8546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4963 | 0.0223 | 0.0562 | 0.0319 | 0.7461 |
| No log | 2.0 | 60 | 0.4089 | 0.0617 | 0.1359 | 0.0849 | 0.8093 |
| No log | 3.0 | 90 | 0.3919 | 0.1053 | 0.2101 | 0.1403 | 0.8219 |
| No log | 4.0 | 120 | 0.3787 | 0.1202 | 0.2482 | 0.1619 | 0.8270 |
| No log | 5.0 | 150 | 0.3745 | 0.1171 | 0.2391 | 0.1572 | 0.8311 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33 | ali2066 | 2022-03-01T13:35:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Precision: 0.1412
- Recall: 0.25
- F1: 0.1805
- Accuracy: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4549 | 0.0228 | 0.0351 | 0.0276 | 0.7734 |
| No log | 2.0 | 60 | 0.3577 | 0.0814 | 0.1260 | 0.0989 | 0.8355 |
| No log | 3.0 | 90 | 0.3116 | 0.1534 | 0.2648 | 0.1943 | 0.8611 |
| No log | 4.0 | 120 | 0.2975 | 0.1792 | 0.2967 | 0.2234 | 0.8690 |
| No log | 5.0 | 150 | 0.2935 | 0.1873 | 0.2998 | 0.2305 | 0.8715 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
coastalcph/fairlex-scotus-minilm | coastalcph | 2022-03-01T13:24:01Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"legal",
"fairlex",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: en
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "Because the Court granted <mask> before judgment, the Court effectively stands in the shoes of the Court of Appeals and reviews the defendants’ appeals."
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
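For orientation only, the stated dimensions correspond to a configuration along these lines. This is a sketch: fields not mentioned above (vocabulary size, feed-forward size, etc.) are left at library defaults and are an assumption, not values read from the released checkpoints.
```python
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    num_hidden_layers=6,     # 6 Transformer blocks
    hidden_size=384,         # 384 hidden units
    num_attention_heads=12,  # 12 attention heads
)
model = RobertaForMaskedLM(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters (with default vocabulary size)")
```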
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-scotus-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-scotus-minilm")
```
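The checkpoint can also be queried directly through the `fill-mask` pipeline, for example with the widget sentence from this card (`<mask>` follows the RoBERTa convention used by this tokenizer):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="coastalcph/fairlex-scotus-minilm")
predictions = fill_mask(
    "Because the Court granted <mask> before judgment, the Court effectively stands "
    "in the shoes of the Court of Appeals and reviews the defendants’ appeals."
)
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```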
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) | |
spy24/autonlp-US-to-UK-604417040 | spy24 | 2022-03-01T13:16:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:spy24/autonlp-data-US-to-UK",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-US-to-UK
co2_eq_emissions: 3.3271667948644614
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 604417040
- CO2 Emissions (in grams): 3.3271667948644614
## Validation Metrics
- Loss: 1.919085144996643
- Rouge1: 39.2808
- Rouge2: 4.905
- RougeL: 39.113
- RougeLsum: 39.1463
- Gen Len: 3.4611
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/spy24/autonlp-US-to-UK-604417040
``` |
coastalcph/fairlex-cail-minilm | coastalcph | 2022-03-01T13:12:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"legal",
"fairlex",
"zh",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: zh
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。"
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm`  | ECtHR            | `en`               |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS           | `en`               |
| `coastalcph/fairlex-fscs-minilm`   | FSCS             | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm`   | CAIL             | `zh`               |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minilm")
```
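The checkpoint can also be exercised through a fill-mask pipeline; the example below reuses the masked sentence from this card's widget.
```python
from transformers import pipeline

# Minimal sketch: fill the masked token in the widget example sentence.
fill_mask = pipeline("fill-mask", model="coastalcph/fairlex-cail-minilm")
text = ("上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,"
        "被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。")
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 3))
```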
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) | |
ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32 | ali2066 | 2022-03-01T12:31:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Accuracy: 0.8138
- F1: 0.8785
- Precision: 0.8489
- Recall: 0.9101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 |
| 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 |
| 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 |
| 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 |
| 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55 | ali2066 | 2022-03-01T12:20:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7224
- Accuracy: 0.6979
- F1: 0.4736
- Precision: 0.5074
- Recall: 0.4440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 95 | 0.6009 | 0.65 | 0.2222 | 0.625 | 0.1351 |
| No log | 2.0 | 190 | 0.6140 | 0.675 | 0.3689 | 0.6552 | 0.2568 |
| No log | 3.0 | 285 | 0.6580 | 0.67 | 0.4590 | 0.5833 | 0.3784 |
| No log | 4.0 | 380 | 0.7560 | 0.665 | 0.4806 | 0.5636 | 0.4189 |
| No log | 5.0 | 475 | 0.8226 | 0.665 | 0.464 | 0.5686 | 0.3919 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
huggingtweets/berniesanders-dril | huggingtweets | 2022-03-01T10:13:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Bernie Sanders</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Bernie Sanders.
| Data | wint | Bernie Sanders |
| --- | --- | --- |
| Tweets downloaded | 3229 | 3250 |
| Retweets | 473 | 429 |
| Short tweets | 300 | 10 |
| Tweets kept | 2456 | 2811 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yw6378l1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/berniesanders-coffee__burger-sensanders | huggingtweets | 2022-03-01T09:49:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794619281271033856/Fs0QQaH7_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger & Bernie Sanders & Bernie Sanders</div>
<div style="text-align: center; font-size: 14px;">@berniesanders-coffee__burger-sensanders</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coffee Burger & Bernie Sanders & Bernie Sanders.
| Data | Coffee Burger | Bernie Sanders | Bernie Sanders |
| --- | --- | --- | --- |
| Tweets downloaded | 2471 | 3249 | 3250 |
| Retweets | 525 | 296 | 429 |
| Short tweets | 337 | 5 | 10 |
| Tweets kept | 1609 | 2948 | 2811 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k4t7tx8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-coffee__burger-sensanders's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31ey7s5h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31ey7s5h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/berniesanders-coffee__burger-sensanders')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hfl/chinese-roberta-wwm-ext | hfl | 2022-03-01T09:13:56Z | 279,957 | 306 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use 'Bert' related functions to load this model!
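A minimal loading sketch that follows this note — the checkpoint is loaded with the BERT classes rather than the RoBERTa ones:
```python
import torch
from transformers import BertTokenizer, BertModel

# Load with the BERT classes, as the note above requires; the RoBERTa
# classes will not load this checkpoint correctly.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
inputs = tokenizer("使用整词掩码的中文预训练模型。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```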
## Chinese BERT with Whole Word Masking
To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
``` |
nguyenvulebinh/spoken-norm-taggen | nguyenvulebinh | 2022-03-01T09:10:45Z | 2 | 1 | transformers | [
"transformers",
"pytorch",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
license: cc-by-nc-4.0
---
|
huggingtweets/coffee__burger | huggingtweets | 2022-03-01T09:06:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/coffee__burger/1646125569654/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger</div>
<div style="text-align: center; font-size: 14px;">@coffee__burger</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Coffee Burger.
| Data | Coffee Burger |
| --- | --- |
| Tweets downloaded | 2471 |
| Retweets | 525 |
| Short tweets | 337 |
| Tweets kept | 1609 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ad82qis/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coffee__burger's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kxzm2oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/coffee__burger')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
inovex/multi2convai-quality-it-mbert | inovex | 2022-03-01T09:02:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Avviare il programma"
license: mit
language: it
---
# Multi2ConvAI-Quality: finetuned MBert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Italian (it)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert")
````
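To actually classify an utterance, a sketch along these lines should work; the intent label names are not listed in this card, so they are read from the model's own `id2label` config.
````python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal inference sketch: classify the widget example and map the
# predicted class id to its label name via the model config.
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert")
inputs = tokenizer("Avviare il programma", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
````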
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-fr-mbert | inovex | 2022-03-01T09:01:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Lancer le programme"
license: mit
language: fr
---
# Multi2ConvAI-Quality: finetuned MBert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: French (fr)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-en-mbert | inovex | 2022-03-01T09:01:15Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Start the program"
license: mit
language: en
---
# Multi2ConvAI-Quality: finetuned MBert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: English (en)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-de-mbert | inovex | 2022-03-01T09:00:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned MBert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: German (de)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-de-bert | inovex | 2022-03-01T09:00:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-logistics-tr-bert | inovex | 2022-03-01T08:54:59Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "paketi nereye koyabilirim?"
license: mit
language: tr
---
# Multi2ConvAI-Logistics: finetuned Bert for Turkish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Turkish (tr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-tr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-tr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-logistics-pl-bert | inovex | 2022-03-01T08:54:40Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pl",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "gdzie mogę umieścić paczkę?"
license: mit
language: pl
---
# Multi2ConvAI-Logistics: finetuned Bert for Polish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Polish (pl)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-logistics-en-bert | inovex | 2022-03-01T08:53:59Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Where can I put the parcel?"
license: mit
language: en
---
# Multi2ConvAI-Logistics: finetuned Bert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: English (en)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-en-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
armageddon/distilbert-base-uncased-squad2-covid-qa-deepset | armageddon | 2022-03-01T08:32:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: distilbert-base-uncased-squad2-covid-qa-deepset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the covid_qa_deepset dataset.
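Since this is an extractive question-answering model, it can be tried with the standard QA pipeline; the question/context pair below is invented for illustration and is not taken from the dataset.
```python
from transformers import pipeline

# Minimal sketch: extractive QA with an invented COVID-related example.
qa = pipeline(
    "question-answering",
    model="armageddon/distilbert-base-uncased-squad2-covid-qa-deepset",
)
result = qa(
    question="How is the virus primarily transmitted?",
    context="The study reports that the virus is primarily transmitted via respiratory droplets.",
)
print(result["answer"], round(result["score"], 3))
```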
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
aasem/wav2vec2-xls-r-300m-Urdu | aasem | 2022-03-01T08:28:25Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
datasets:
- common_voice
language:
- ur
library_name: transformers
license: mit
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-Urdu
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      name: common_voice
      type: common_voice
      args: ur
    metrics:
    - type: wer
      value: 0.2459
    - type: cer
      value: 0.0691
tags:
- audio
- automatic-speech-recognition
- speech
---
This model is a fine-tuned version of [Facebook's 300M XLS-R model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8.0 Urdu dataset.
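A minimal transcription sketch, assuming a local 16 kHz mono recording (`urdu_sample.wav` is a placeholder path):
```python
from transformers import pipeline

# Minimal sketch: transcribe a local Urdu audio file (placeholder path).
asr = pipeline("automatic-speech-recognition", model="aasem/wav2vec2-xls-r-300m-Urdu")
print(asr("urdu_sample.wav")["text"])
``` |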
huggingtweets/_deep_winter_ | huggingtweets | 2022-03-01T07:42:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/_deep_winter_/1646120552069/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1344880990464991239/DJ6glcyj_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">erin.</div>
<div style="text-align: center; font-size: 14px;">@_deep_winter_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from erin..
| Data | erin. |
| --- | --- |
| Tweets downloaded | 3147 |
| Retweets | 716 |
| Short tweets | 243 |
| Tweets kept | 2188 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bgxbc1v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_deep_winter_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dlbw7vo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dlbw7vo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_deep_winter_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Sarahliu186/wav2vec2-base-timit-demo-colab | Sarahliu186 | 2022-03-01T04:01:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 | ali2066 | 2022-03-01T03:51:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2899
- Precision: 0.3170
- Recall: 0.5261
- F1: 0.3956
- Accuracy: 0.8799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 |
| No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 |
| No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 |
| No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 |
| No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51 | ali2066 | 2022-03-01T02:20:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8440
- F1: 0.8954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4302 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3970 | 0.8220 | 0.8875 |
| 0.3703 | 3.0 | 585 | 0.3972 | 0.8402 | 0.8934 |
| 0.3703 | 4.0 | 780 | 0.4945 | 0.8390 | 0.8935 |
| 0.3703 | 5.0 | 975 | 0.5354 | 0.8305 | 0.8898 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
armageddon/albert-squad-v2-covid-qa-deepset | armageddon | 2022-03-01T02:04:26Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_albert_base_squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_albert_base_squad_v2
This model is a fine-tuned version of [abhilash1910/albert-squad-v2](https://huggingface.co/abhilash1910/albert-squad-v2) on the covid_qa_deepset dataset.
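A lower-level usage sketch, without the pipeline helper; the question and context are invented for illustration.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Minimal sketch: pick the answer span from the start/end logits.
model_name = "armageddon/albert-squad-v2-covid-qa-deepset"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What is the incubation period of the virus?"
context = "The reported incubation period of the virus ranges from 2 to 14 days."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```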
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
nateraw/cryptopunks-gan | nateraw | 2022-03-01T01:59:49Z | 0 | 3 | pytorch | [
"pytorch",
"tensorboard",
"dcgan",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
library_name: pytorch
tags:
- dcgan
---
# cryptopunks-gan
A DCGAN trained to generate novel Cryptopunks.
Check out the code by Teddy Koker [here](https://github.com/teddykoker/cryptopunks-gan).
## Generated Punks
Here are some punks generated by this model:

## Usage
You can try it out yourself, or you can play with the [demo](https://huggingface.co/spaces/nateraw/cryptopunks-generator).
To use it yourself - make sure you have `torch`, `torchvision`, and `huggingface_hub` installed. Then, run the following to generate a grid of 64 random punks:
```python
import torch
from huggingface_hub import hf_hub_download
from torch import nn
from torchvision.utils import save_image
class Generator(nn.Module):
    def __init__(self, nc=4, nz=100, ngf=64):
        super(Generator, self).__init__()
        self.network = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, input):
        output = self.network(input)
        return output
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))
out = model(torch.randn(64, 100, 1, 1))
save_image(out, "punks.png", normalize=True)
```
|
Ayham/albert_roberta_summarization_cnn_dailymail | Ayham | 2022-03-01T01:54:22Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_roberta_new_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_roberta_new_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Msp/classifier | Msp | 2022-02-28T22:02:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04Z | ---
license: apache-2.0
---
|
Akash7897/gpt2-wikitext2 | Akash7897 | 2022-02-28T19:32:20Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1079
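A quick generation sketch (the prompt is arbitrary; given the relatively high validation loss, expect rough output):
```python
from transformers import pipeline

# Minimal sketch: sample a short continuation from the fine-tuned GPT-2.
generator = pipeline("text-generation", model="Akash7897/gpt2-wikitext2")
print(generator("The history of natural language processing", max_length=50)[0]["generated_text"])
```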
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.558 | 1.0 | 2249 | 6.4672 |
| 6.1918 | 2.0 | 4498 | 6.1970 |
| 6.0019 | 3.0 | 6747 | 6.1079 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Kevincp560/bart-large-cnn-finetuned-pubmed | Kevincp560 | 2022-02-28T19:04:22Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:pub_med_summarization_dataset",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 40.4866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8416
- Rouge1: 40.4866
- Rouge2: 16.7472
- Rougel: 24.9831
- Rougelsum: 36.4002
- Gen Len: 142.0
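As a summarization checkpoint, it can be used directly with the summarization pipeline; the `article` string below is a placeholder for the full text of a PubMed-style paper.
```python
from transformers import pipeline

# Minimal sketch: summarize a (placeholder) biomedical article.
summarizer = pipeline("summarization", model="Kevincp560/bart-large-cnn-finetuned-pubmed")
article = "Paste the full text of a biomedical article here..."
print(summarizer(article, max_length=142, min_length=40, do_sample=False)[0]["summary_text"])
```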
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.932 | 1.0 | 4000 | 1.8110 | 38.1151 | 15.2255 | 23.4286 | 34.2521 | 141.8905 |
| 1.7001 | 2.0 | 8000 | 1.7790 | 39.8217 | 16.3042 | 24.649 | 35.831 | 142.0 |
| 1.5 | 3.0 | 12000 | 1.7971 | 40.6108 | 17.0446 | 25.1977 | 36.5556 | 141.9865 |
| 1.3316 | 4.0 | 16000 | 1.8106 | 40.0466 | 16.4851 | 24.7094 | 36.0998 | 141.9335 |
| 1.1996 | 5.0 | 20000 | 1.8416 | 40.4866 | 16.7472 | 24.9831 | 36.4002 | 142.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
peterhsu/mt5-small-finetuned-amazon-en-es | peterhsu | 2022-02-28T18:40:06Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0255
- Rouge1: 17.5202
- Rouge2: 8.4634
- Rougel: 17.0175
- Rougelsum: 17.0528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.094 | 1.0 | 1209 | 3.2933 | 12.7563 | 5.2606 | 12.4786 | 12.4961 |
| 3.9263 | 2.0 | 2418 | 3.1487 | 16.2314 | 8.4716 | 15.6854 | 15.7506 |
| 3.599 | 3.0 | 3627 | 3.0789 | 16.9233 | 8.1928 | 16.2596 | 16.2522 |
| 3.429 | 4.0 | 4836 | 3.0492 | 17.2679 | 8.7561 | 16.6685 | 16.7399 |
| 3.3279 | 5.0 | 6045 | 3.0384 | 17.6081 | 8.6721 | 17.0546 | 17.0368 |
| 3.2518 | 6.0 | 7254 | 3.0343 | 17.2271 | 8.504 | 16.6285 | 16.6209 |
| 3.2084 | 7.0 | 8463 | 3.0255 | 16.7859 | 8.054 | 16.2574 | 16.2853 |
| 3.1839 | 8.0 | 9672 | 3.0255 | 17.5202 | 8.4634 | 17.0175 | 17.0528 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Kiran146/distilbert-base-uncased-finetuned-emotion | Kiran146 | 2022-02-28T17:30:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9227765339978083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Accuracy: 0.9225
- F1: 0.9228
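A short classification sketch; the input sentence is arbitrary, and the emotion label names are taken from the model's config.
```python
from transformers import pipeline

# Minimal sketch: predict the emotion label for an arbitrary sentence.
classifier = pipeline("text-classification", model="Kiran146/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see my friends this weekend!"))
```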
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.84 | 1.0 | 250 | 0.3133 | 0.909 | 0.9070 |
| 0.2459 | 2.0 | 500 | 0.2224 | 0.9225 | 0.9228 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Visual-Attention-Network/VAN-Base-original | Visual-Attention-Network | 2022-02-28T16:34:32Z | 0 | 0 | null | [
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# VAN-Base
VAN-Base was trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
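The large kernel attention (LKA) block described above can be sketched in a few lines of PyTorch. The decomposition below (a 5x5 depth-wise convolution, a 7x7 depth-wise dilated convolution with dilation 3, a 1x1 convolution, then an element-wise gate on the input) follows the paper's description, but the layer names and the demo tensor are illustrative rather than copied from the released code:

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """Large kernel attention: approximates a large-kernel convolution with cheap pieces."""
    def __init__(self, dim: int):
        super().__init__()
        self.dw_conv = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)                 # local depth-wise conv
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)  # long-range depth-wise dilated conv
        self.pointwise = nn.Conv2d(dim, dim, 1)                                      # 1x1 conv for channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pointwise(self.dw_dilated(self.dw_conv(x)))
        return x * attn  # the attention map gates the input, giving spatial and channel adaptability

x = torch.randn(1, 64, 56, 56)
print(LKA(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```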
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
``` |
Visual-Attention-Network/VAN-Small-original | Visual-Attention-Network | 2022-02-28T16:33:16Z | 0 | 0 | null | [
"image-classification",
"dataset:imagenet",
"arxiv:2202.09741",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# VAN-Small
VAN-Small was trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
``` |
mohamed-illiyas/wav2vec-malayalam | mohamed-illiyas | 2022-02-28T16:07:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-malayalam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-malayalam
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
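As a minimal inference sketch (an illustration, assuming the repository ships a matching processor and that audio is resampled to 16 kHz; the file name is a placeholder):

```python
from transformers import pipeline

# Transcribe a Malayalam speech clip with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="mohamed-illiyas/wav2vec-malayalam")
print(asr("malayalam_sample.wav")["text"])
```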
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 1.18.3
- Tokenizers 0.10.3
|
EngNada/wav2vec2-large-xlsr-53-demo-colab | EngNada | 2022-02-28T15:47:56Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9807
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
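A lower-level usage sketch (assuming 16 kHz mono audio and that the repository contains both weights and a processor; the file path is a placeholder). Note that the reported WER of 1.0 suggests transcriptions from this checkpoint may not be usable in practice:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "EngNada/wav2vec2-large-xlsr-53-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # resample to the rate used during fine-tuning
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```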
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 22.8021 | 1.78 | 80 | 7.9807 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
inovex/multi2convai-quality-en-logreg-ft | inovex | 2022-02-28T13:42:54Z | 0 | 0 | null | [
"text-classification",
"en",
"license:mit",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: en
---
# Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: English (en)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and locally available fastText embeddings you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-en-logreg-ft
>>> Create pipeline for config: multi2convai-quality-en-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'en'.
>>>
>>> Enter your text (type 'stop' to end execution): Start the program
>>> 'Start the program' was classified as 'neo.start' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and locally available fastText embeddings you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "en"
domain = "quality"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Start the program")
label
>>> Label(string='neo.start', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/en
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec
python scripts/serialize_fasttext.py -r fasttext/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-quality-it-logreg-ft | inovex | 2022-02-28T13:42:18Z | 0 | 0 | null | [
"text-classification",
"it",
"license:mit",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: it
---
# Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Italian (it)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and locally available fastText embeddings you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-it-logreg-ft
>>> Create pipeline for config: multi2convai-quality-it-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'it'.
>>>
>>> Enter your text (type 'stop' to end execution): Avviare il programma
>>> 'Avviare il programma' was classified as 'neo.start' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and locally available fastText embeddings you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "it"
domain = "quality"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/it/wiki.200k.it.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/it/wiki.200k.it.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Avviare il programma")
label
>>> Label(string='neo.start', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/it
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.it.vec --output models/fasttext/it/wiki.it.vec
python scripts/serialize_fasttext.py -r fasttext/wiki.it.vec -v fasttext/it/wiki.200k.it.vocab -e fasttext/it/wiki.200k.it.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-logistics-en-logreg-ft | inovex | 2022-02-28T12:36:40Z | 0 | 0 | null | [
"text-classification",
"en",
"license:mit",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: en
---
# Multi2ConvAI-Logistics: English logistic regression model using fasttext embeddings
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: English (en)
- model type: logistic regression
- embeddings: fastText embeddings
## How to run
Requires:
- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))
### Run with one line of code
After installing `multi2convai` and locally available fastText embeddings you can run:
````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-logistics-en-logreg-ft
>>> Create pipeline for config: multi2convai-logistics-en-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'en'.
>>>
>>> Enter your text (type 'stop' to end execution): Where can I put the parcel?
>>> 'Where can I put the parcel?' was classified as 'details.safeplace' (confidence: 0.8943)
````
### How to run model using multi2convai
After installing `multi2convai` and locally available fastText embeddings you can run:
````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path
from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
LogisticRegressionFasttextConfig,
LogisticRegressionFasttextPipeline,
)
language = "en"
domain = "logistics"
# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.embed"
)
vocabulary_path = Path(
f"../models/embeddings/fasttext/en/wiki.200k.en.vocab"
)
# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()
# 3. Run intent classification on a text of your choice
label = pipeline.run("Where can I put the parcel?")
label
>>> Label(string='details.safeplace', ratio='0.8943')
````
### Download and serialize fastText
````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/en
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec
python scripts/serialize_fasttext.py -r fasttext/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
cnicu/pegasus-large-booksum | cnicu | 2022-02-28T12:12:37Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"dataset:kmfoda/booksum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- summarization
datasets:
- kmfoda/booksum
---
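A minimal usage sketch, assuming the checkpoint works with the standard `summarization` pipeline (the input text and generation lengths are placeholders):

```python
from transformers import pipeline

# Summarize a long passage with the PEGASUS model fine-tuned on BookSum.
summarizer = pipeline("summarization", model="cnicu/pegasus-large-booksum")
text = "Replace this with a long book chapter or passage to summarize."
print(summarizer(text, max_length=128, min_length=32)[0]["summary_text"])
```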
|
peterhsu/marian-finetuned-kde4-en-to-zh_TW | peterhsu | 2022-02-28T11:26:43Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-zh_TW
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-zh_TW
metrics:
- name: Bleu
type: bleu
value: 39.086345838465
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_TW
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0047
- Bleu: 39.0863
## Model description
More information needed
## Intended uses & limitations
More information needed
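As a usage illustration (not part of the original training run; the example sentence is a placeholder):

```python
from transformers import pipeline

# English -> Traditional Chinese translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation", model="peterhsu/marian-finetuned-kde4-en-to-zh_TW")
print(translator("Default to expanded threads")[0]["translation_text"])
```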
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
spy24/autonlp-AUS-to-US-601516964 | spy24 | 2022-02-28T11:21:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:spy24/autonlp-data-AUS-to-US",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-AUS-to-US
co2_eq_emissions: 3.3930796843275846
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 601516964
- CO2 Emissions (in grams): 3.3930796843275846
## Validation Metrics
- Loss: 1.9823806285858154
- Rouge1: 42.8783
- Rouge2: 7.4603
- RougeL: 42.8492
- RougeLsum: 43.0556
- Gen Len: 2.8952
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-AUS-to-US-601516964
``` |