modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
saattrupdan/verdict-classifier | a305d4792587f3461c82dcd96632dffa2e406ccf | 2021-10-27T15:00:47.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"am",
"ar",
"hy",
"eu",
"bn",
"bs",
"bg",
"my",
"hr",
"ca",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"ka",
"de",
"el",
"gu",
"ht",
"iw",
"hi",
"hu",
"is",
"in",
"it",
"ja",
"kn",
"km",
"ko",
"lo",
"lv",
"lt",
"ml",
"mr",
"ne",
"no",
"or",
"pa",
"ps",
"fa",
"pl",
"pt",
"ro",
"ru",
"sr",
"zh",
"sd",
"si",
"sk",
"sl",
"es",
"sv",
"tl",
"ta",
"te",
"th",
"tr",
"uk",
"ur",
"ug",
"vi",
"cy",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | saattrupdan | null | saattrupdan/verdict-classifier | 284 | 2 | transformers | 3,100 | ---
license: mit
language:
- am
- ar
- hy
- eu
- bn
- bs
- bg
- my
- hr
- ca
- cs
- da
- nl
- en
- et
- fi
- fr
- ka
- de
- el
- gu
- ht
- iw
- hi
- hu
- is
- in
- it
- ja
- kn
- km
- ko
- lo
- lv
- lt
- ml
- mr
- ne
- no
- or
- pa
- ps
- fa
- pl
- pt
- ro
- ru
- sr
- zh
- sd
- si
- sk
- sl
- es
- sv
- tl
- ta
- te
- th
- tr
- uk
- ur
- ug
- vi
- cy
tags:
- generated_from_trainer
model-index:
- name: verdict-classifier-en
results:
- task:
type: text-classification
name: Verdict Classification
widget:
- "本文已断章取义。"
---
# Multilingual Verdict Classifier
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on 2,500 deduplicated multilingual verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into 65 languages with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
It achieves the following results on the evaluation set of 1,000 such verdicts, which here includes duplicates in order to reflect the true distribution:
- Loss: 0.2238
- F1 Macro: 0.8540
- F1 Misinformation: 0.9798
- F1 Factual: 0.9889
- F1 Other: 0.5934
- Prec Macro: 0.8348
- Prec Misinformation: 0.9860
- Prec Factual: 0.9889
- Prec Other: 0.5294
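A minimal inference sketch using the `transformers` pipeline (the example claim is illustrative; label names come from the model's own config):
```python
from transformers import pipeline

# quick check of a single verdict (hypothetical example input)
classifier = pipeline("text-classification", model="saattrupdan/verdict-classifier")
print(classifier("This article has been taken out of context."))
```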
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 162525
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
| 1.1109 | 0.1 | 2000 | 1.2166 | 0.0713 | 0.1497 | 0.0 | 0.0640 | 0.2451 | 0.7019 | 0.0 | 0.0334 |
| 0.9551 | 0.2 | 4000 | 0.7801 | 0.3611 | 0.8889 | 0.0 | 0.1943 | 0.3391 | 0.8915 | 0.0 | 0.1259 |
| 0.9275 | 0.3 | 6000 | 0.7712 | 0.3468 | 0.9123 | 0.0 | 0.1282 | 0.3304 | 0.9051 | 0.0 | 0.0862 |
| 0.8881 | 0.39 | 8000 | 0.5386 | 0.3940 | 0.9524 | 0.0 | 0.2297 | 0.3723 | 0.9748 | 0.0 | 0.1420 |
| 0.7851 | 0.49 | 10000 | 0.3298 | 0.6886 | 0.9626 | 0.7640 | 0.3393 | 0.6721 | 0.9798 | 0.7727 | 0.2639 |
| 0.639 | 0.59 | 12000 | 0.2156 | 0.7847 | 0.9633 | 0.9355 | 0.4554 | 0.7540 | 0.9787 | 0.9062 | 0.3770 |
| 0.5677 | 0.69 | 14000 | 0.1682 | 0.7877 | 0.9694 | 0.9667 | 0.4270 | 0.7763 | 0.9745 | 0.9667 | 0.3878 |
| 0.5218 | 0.79 | 16000 | 0.1475 | 0.8037 | 0.9692 | 0.9667 | 0.4752 | 0.7804 | 0.9812 | 0.9667 | 0.3934 |
| 0.4682 | 0.89 | 18000 | 0.1458 | 0.8097 | 0.9734 | 0.9667 | 0.4889 | 0.7953 | 0.9791 | 0.9667 | 0.44 |
| 0.4188 | 0.98 | 20000 | 0.1416 | 0.8370 | 0.9769 | 0.9724 | 0.5618 | 0.8199 | 0.9826 | 0.9670 | 0.5102 |
| 0.3735 | 1.08 | 22000 | 0.1624 | 0.8094 | 0.9698 | 0.9368 | 0.5217 | 0.7780 | 0.9823 | 0.89 | 0.4615 |
| 0.3242 | 1.18 | 24000 | 0.1648 | 0.8338 | 0.9769 | 0.9727 | 0.5517 | 0.8167 | 0.9826 | 0.9570 | 0.5106 |
| 0.2785 | 1.28 | 26000 | 0.1843 | 0.8261 | 0.9739 | 0.9780 | 0.5263 | 0.8018 | 0.9836 | 0.9674 | 0.4545 |
| 0.25 | 1.38 | 28000 | 0.1975 | 0.8344 | 0.9744 | 0.9834 | 0.5455 | 0.8072 | 0.9859 | 0.9780 | 0.4576 |
| 0.2176 | 1.48 | 30000 | 0.1849 | 0.8209 | 0.9691 | 0.9889 | 0.5047 | 0.7922 | 0.9846 | 0.9889 | 0.4030 |
| 0.1966 | 1.58 | 32000 | 0.2119 | 0.8194 | 0.9685 | 0.9944 | 0.4954 | 0.7920 | 0.9846 | 1.0 | 0.3913 |
| 0.1738 | 1.67 | 34000 | 0.2110 | 0.8352 | 0.9708 | 0.9944 | 0.5405 | 0.8035 | 0.9881 | 1.0 | 0.4225 |
| 0.1625 | 1.77 | 36000 | 0.2152 | 0.8165 | 0.9709 | 0.9834 | 0.4950 | 0.7905 | 0.9835 | 0.9780 | 0.4098 |
| 0.1522 | 1.87 | 38000 | 0.2300 | 0.8097 | 0.9697 | 0.9832 | 0.4762 | 0.7856 | 0.9835 | 0.9888 | 0.3846 |
| 0.145 | 1.97 | 40000 | 0.1955 | 0.8519 | 0.9774 | 0.9889 | 0.5895 | 0.8280 | 0.9860 | 0.9889 | 0.5091 |
| 0.1248 | 2.07 | 42000 | 0.2308 | 0.8149 | 0.9703 | 0.9889 | 0.4854 | 0.7897 | 0.9835 | 0.9889 | 0.3968 |
| 0.1186 | 2.17 | 44000 | 0.2368 | 0.8172 | 0.9733 | 0.9834 | 0.4948 | 0.7942 | 0.9836 | 0.9780 | 0.4211 |
| 0.1122 | 2.26 | 46000 | 0.2401 | 0.7968 | 0.9804 | 0.8957 | 0.5143 | 0.8001 | 0.9849 | 1.0 | 0.4154 |
| 0.1099 | 2.36 | 48000 | 0.2290 | 0.8119 | 0.9647 | 0.9834 | 0.4874 | 0.7777 | 0.9880 | 0.9780 | 0.3671 |
| 0.1093 | 2.46 | 50000 | 0.2256 | 0.8247 | 0.9745 | 0.9889 | 0.5106 | 0.8053 | 0.9825 | 0.9889 | 0.4444 |
| 0.1053 | 2.56 | 52000 | 0.2416 | 0.8456 | 0.9799 | 0.9889 | 0.5679 | 0.8434 | 0.9805 | 0.9889 | 0.5610 |
| 0.1049 | 2.66 | 54000 | 0.2850 | 0.7585 | 0.9740 | 0.8902 | 0.4112 | 0.7650 | 0.9802 | 0.9865 | 0.3284 |
| 0.098 | 2.76 | 56000 | 0.2828 | 0.8049 | 0.9642 | 0.9889 | 0.4615 | 0.7750 | 0.9856 | 0.9889 | 0.3506 |
| 0.0962 | 2.86 | 58000 | 0.2238 | 0.8540 | 0.9798 | 0.9889 | 0.5934 | 0.8348 | 0.9860 | 0.9889 | 0.5294 |
| 0.0975 | 2.95 | 60000 | 0.2494 | 0.8249 | 0.9715 | 0.9889 | 0.5143 | 0.7967 | 0.9858 | 0.9889 | 0.4154 |
| 0.0877 | 3.05 | 62000 | 0.2464 | 0.8274 | 0.9733 | 0.9889 | 0.5200 | 0.8023 | 0.9847 | 0.9889 | 0.4333 |
| 0.0848 | 3.15 | 64000 | 0.2338 | 0.8263 | 0.9740 | 0.9889 | 0.5161 | 0.8077 | 0.9814 | 0.9889 | 0.4528 |
| 0.0859 | 3.25 | 66000 | 0.2335 | 0.8365 | 0.9750 | 0.9889 | 0.5455 | 0.8108 | 0.9859 | 0.9889 | 0.4576 |
| 0.084 | 3.35 | 68000 | 0.2067 | 0.8343 | 0.9763 | 0.9889 | 0.5376 | 0.8148 | 0.9837 | 0.9889 | 0.4717 |
| 0.0837 | 3.45 | 70000 | 0.2516 | 0.8249 | 0.9746 | 0.9889 | 0.5111 | 0.8097 | 0.9803 | 0.9889 | 0.46 |
| 0.0809 | 3.54 | 72000 | 0.2948 | 0.8258 | 0.9728 | 0.9944 | 0.5102 | 0.8045 | 0.9824 | 1.0 | 0.4310 |
| 0.0833 | 3.64 | 74000 | 0.2457 | 0.8494 | 0.9744 | 0.9944 | 0.5794 | 0.8173 | 0.9893 | 1.0 | 0.4627 |
| 0.0796 | 3.74 | 76000 | 0.3188 | 0.8277 | 0.9733 | 0.9889 | 0.5208 | 0.8059 | 0.9825 | 0.9889 | 0.4464 |
| 0.0821 | 3.84 | 78000 | 0.2642 | 0.8343 | 0.9714 | 0.9944 | 0.5370 | 0.8045 | 0.9870 | 1.0 | 0.4265 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.2 |
Graphcore/bert-base-uncased | 3c0d71623fcab38812d42fa8a695cfa3efedb4c5 | 2022-05-25T18:31:01.000Z | [
"pytorch",
"bert",
"dataset:Graphcore/wikipedia-bert-128",
"dataset:Graphcore/wikipedia-bert-512",
"arxiv:1904.00962",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | Graphcore | null | Graphcore/bert-base-uncased | 284 | null | transformers | 3,101 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Graphcore/wikipedia-bert-128
- Graphcore/wikipedia-bert-512
model-index:
- name: Graphcore/bert-base-uncased
results: []
---
# Graphcore/bert-base-uncased
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency when training and running models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through Hugging Face Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM.
It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations.
Pre-trained representations reduce the need for heavy engineering of task-specific architectures, and the model achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Intended uses & limitations
This model is a pre-trained BERT-Base trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets.
It was trained on a Graphcore IPU-POD16 using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore).
Graphcore and Hugging Face are working together to make training of Transformer models on IPUs fast and easy. Learn more about how to take advantage of the power of Graphcore IPUs to train Transformers models at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
## Training and evaluation data
Trained on wikipedia datasets:
- [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128)
- [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512)
## Fine-tuning with these weights
These weights can be used in either `transformers` or [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore).
For example, to fine-tune the GLUE task SST2 with `optimum-graphcore` you can do:
```
export TOKENIZERS_PARALLELISM=true
python examples/text-classification/run_glue.py \
--model_name_or_path bert-base-uncased \
--ipu_config_name Graphcore/bert-base-ipu \
--task_name sst2 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 32 \
--pod_type pod4 \
--learning_rate 2e-5 \
--lr_scheduler_type linear \
--warmup_ratio 0.25 \
--num_train_epochs 3 \
--seed 1984 \
--save_steps -1 \
--dataloader_num_workers 64 \
--dataloader_drop_last \
--overwrite_output_dir \
--output_dir /tmp/sst2
```
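These weights can also be loaded with plain `transformers` for masked-language-modelling inference. A minimal sketch, assuming the uploaded checkpoint includes the pretraining head (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Graphcore/bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("Graphcore/bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# take the highest-scoring token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```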
## Training procedure
Trained MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962).
Trained on a Graphcore IPU-POD16 using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore).
It was trained with the IPUConfig [Graphcore/bert-base-ipu](https://huggingface.co/Graphcore/bert-base-ipu/).
Command lines:
Phase 1:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-base-uncased \
--tokenizer_name bert-base-uncased \
--ipu_config_name Graphcore/bert-base-ipu \
--dataset_name Graphcore/wikipedia-bert-128 \
--do_train \
--logging_steps 5 \
--max_seq_length 128 \
--max_steps 10500 \
--is_already_preprocessed \
--dataloader_num_workers 64 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 32 \
--gradient_accumulation_steps 512 \
--learning_rate 0.006 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.28 \
--save_steps 100 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "device_iterations=1" \
--output_dir output-pretrain-bert-base-phase1
```
Phase 2:
```
python examples/language-modeling/run_pretraining.py \
--config_name bert-base-uncased \
--tokenizer_name bert-base-uncased \
--ipu_config_name Graphcore/bert-base-ipu \
--dataset_name Graphcore/wikipedia-bert-512 \
--model_name_or_path ./output-pretrain-bert-base-phase1 \
--do_train \
--logging_steps 5 \
--max_seq_length 512 \
--max_steps 2038 \
--is_already_preprocessed \
--dataloader_num_workers 128 \
--dataloader_mode async_rebatched \
--lamb \
--lamb_no_bias_correction \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 512 \
--learning_rate 0.002828 \
--lr_scheduler_type linear \
--loss_scaling 128.0 \
--weight_decay 0.01 \
--warmup_ratio 0.128 \
--config_overrides "layer_norm_eps=0.001" \
--ipu_config_overrides "device_iterations=1,embedding_serialization_factor=2,matmul_proportion=0.22" \
--output_dir output-pretrain-bert-base-phase2
```
### Training hyperparameters
The following hyperparameters were used during phase 1 training:
- learning_rate: 0.006
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 65536
- total_eval_batch_size: 128
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.28
- training_steps: 10500
- training precision: Mixed Precision
The following hyperparameters were used during phase 2 training:
- learning_rate: 0.002828
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 16384
- total_eval_batch_size: 128
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.128
- training_steps: 2038
- training precision: Mixed Precision
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3 |
aishanisingh/DialoGPT-small-harrypotter | 9282686d6968e9aa3dc4703542b07f29b2e9e49b | 2022-02-12T11:03:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aishanisingh | null | aishanisingh/DialoGPT-small-harrypotter | 283 | null | transformers | 3,102 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
asapp/sew-d-tiny-100k | 525fe76004d8066bad550ca48c6cbc10dcdbb21c | 2021-10-28T14:06:38.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-d-tiny-100k | 283 | null | transformers | 3,103 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
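A minimal feature-extraction sketch, assuming the standard `transformers` auto classes (the silent one-second input is purely illustrative):
```python
import torch
from transformers import AutoFeatureExtractor, SEWDModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-tiny-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k")

# one second of silent dummy audio at 16 kHz, just to show the call signature
speech = torch.zeros(16_000).numpy()
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```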
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`. |
chaitrabhat/DialoGPT-small-rick | 5bfa6ef6dcadf13db5e4c31b554fc8d3074bd395 | 2021-11-05T11:30:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | chaitrabhat | null | chaitrabhat/DialoGPT-small-rick | 283 | null | transformers | 3,104 | ---
tags:
- conversational
---
# Rick DialoGPT model |
iarfmoose/roberta-base-bulgarian | 8990d3001cf524e8e9120f3e4627c429f11f1ae3 | 2021-05-20T16:50:24.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"bg",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | iarfmoose | null | iarfmoose/roberta-base-bulgarian | 283 | 1 | transformers | 3,105 | ---
language: bg
---
# RoBERTa-base-bulgarian
The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a version of [RoBERTa-base](https://huggingface.co/roberta-base) pretrained on Bulgarian text.
## Intended uses
This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.
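A minimal cloze sketch with the `fill-mask` pipeline (the Bulgarian example sentence, "Това е `<mask>` пример." = "This is a `<mask>` example.", is illustrative):
```python
from transformers import pipeline

# RoBERTa models use <mask> as the mask token
fill_mask = pipeline("fill-mask", model="iarfmoose/roberta-base-bulgarian")
print(fill_mask("Това е <mask> пример."))
```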
## Limitations and bias
The training data is unfiltered text from the internet and may contain all sorts of biases.
## Training data
This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
## Training procedure
The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing)
It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
|
lanejm/DialoGPT-small-hagrid | 4cd872a99f3d6c003f1e7cc408876a1fef610a2c | 2021-08-30T20:32:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lanejm | null | lanejm/DialoGPT-small-hagrid | 283 | null | transformers | 3,106 | ---
tags:
- conversational
---
# Hagrid DialoGPT Model |
nev/dalle-mini-pytorch | 75ad36dad1a6f24f882ce41613c17c27430993b3 | 2022-07-03T08:22:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | nev | null | nev/dalle-mini-pytorch | 283 | 1 | transformers | 3,107 | The small DALLE-mini converted to PyTorch
[Colab](https://colab.research.google.com/drive/1Blh-hTfhyry-YvitH8A95Duzwtm17Xz-?usp=sharing) |
r3dhummingbird/DialoGPT-small-harrypotter | 6aefc62f75912fb17e9c171c885129cbf19d108f | 2021-08-08T19:02:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | r3dhummingbird | null | r3dhummingbird/DialoGPT-small-harrypotter | 283 | 1 | transformers | 3,108 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
sentence-transformers/msmarco-roberta-base-v2 | b23dfbc095c7726ffb381522e4132a46fdee7783 | 2022-06-16T00:24:02.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/msmarco-roberta-base-v2 | 283 | null | sentence-transformers | 3,109 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/msmarco-roberta-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-roberta-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-roberta-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/msmarco-roberta-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-roberta-base-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
tner/twitter-roberta-base-dec2021-tweetner-2020-2021-continuous | 26b748712ee9288a94662b3db5a5202c9f053e0c | 2022-07-11T22:17:36.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/twitter-roberta-base-dec2021-tweetner-2020-2021-continuous | 283 | null | transformers | 3,110 | Entry not found |
Magolor/deepex-ranking-model | e793d26a74805413796e1e079791f6a22ac226db | 2021-09-16T05:25:14.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Magolor | null | Magolor/deepex-ranking-model | 282 | null | transformers | 3,111 | Entry not found |
gvs/wav2vec2-large-xlsr-malayalam | b073beb93616a83d26933e1aa98d9299a5f4af23 | 2021-07-06T05:44:26.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ml",
"dataset:Indic TTS Malayalam Speech Corpus",
"dataset:Openslr Malayalam Speech Corpus",
"dataset:SMC Malayalam Speech Corpus",
"dataset:IIIT-H Indic Speech Databases",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gvs | null | gvs/wav2vec2-large-xlsr-malayalam | 282 | null | transformers | 3,112 | ---
language: ml
datasets:
- Indic TTS Malayalam Speech Corpus
- Openslr Malayalam Speech Corpus
- SMC Malayalam Speech Corpus
- IIIT-H Indic Speech Databases
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Malayalam XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Test split of combined dataset using all datasets mentioned above
type: custom
args: ml
metrics:
- name: Test WER
type: wer
value: 28.43
---
# Wav2Vec2-Large-XLSR-53-ml
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on ml (Malayalam) using the [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The notebooks used to train model are available [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = <load-test-split-of-combined-dataset> # Details on loading this dataset in the evaluation section
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"])
```
## Evaluation
The model can be evaluated as follows on the test data of the combined custom dataset. For more details on dataset preparation, check the notebooks mentioned at the end of this file.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import datasets  # provides datasets.concatenate_datasets, used below
from pathlib import Path
# The custom dataset needs to be created using notebook mentioned at the end of this file
data_dir = Path('<path-to-custom-dataset>')
dataset_folders = {
'iiit': 'iiit_mal_abi',
'openslr': 'openslr',
'indic-tts': 'indic-tts-ml',
'msc-reviewed': 'msc-reviewed-speech-v1.0+20200825',
}
# Set directories for datasets
openslr_male_dir = data_dir / dataset_folders['openslr'] / 'male'
openslr_female_dir = data_dir / dataset_folders['openslr'] / 'female'
iiit_dir = data_dir / dataset_folders['iiit']
indic_tts_male_dir = data_dir / dataset_folders['indic-tts'] / 'male'
indic_tts_female_dir = data_dir / dataset_folders['indic-tts'] / 'female'
msc_reviewed_dir = data_dir / dataset_folders['msc-reviewed']
# Load the datasets
openslr_male = load_dataset("json", data_files=[f"{str(openslr_male_dir.absolute())}/sample_{i}.json" for i in range(2023)], split="train")
openslr_female = load_dataset("json", data_files=[f"{str(openslr_female_dir.absolute())}/sample_{i}.json" for i in range(2103)], split="train")
iiit = load_dataset("json", data_files=[f"{str(iiit_dir.absolute())}/sample_{i}.json" for i in range(1000)], split="train")
indic_tts_male = load_dataset("json", data_files=[f"{str(indic_tts_male_dir.absolute())}/sample_{i}.json" for i in range(5649)], split="train")
indic_tts_female = load_dataset("json", data_files=[f"{str(indic_tts_female_dir.absolute())}/sample_{i}.json" for i in range(2950)], split="train")
msc_reviewed = load_dataset("json", data_files=[f"{str(msc_reviewed_dir.absolute())}/sample_{i}.json" for i in range(1541)], split="train")
# Create test split as 20%, set random seed as well.
test_size = 0.2
random_seed=1
openslr_male_splits = openslr_male.train_test_split(test_size=test_size, seed=random_seed)
openslr_female_splits = openslr_female.train_test_split(test_size=test_size, seed=random_seed)
iiit_splits = iiit.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_male_splits = indic_tts_male.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_female_splits = indic_tts_female.train_test_split(test_size=test_size, seed=random_seed)
msc_reviewed_splits = msc_reviewed.train_test_split(test_size=test_size, seed=random_seed)
# Get combined test dataset
split_list = [openslr_male_splits, openslr_female_splits, indic_tts_male_splits, indic_tts_female_splits, msc_reviewed_splits, iiit_splits]
test_dataset = datasets.concatenate_datasets([split['test'] for split in split_list])
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model.to("cuda")
resamplers = {
48000: torchaudio.transforms.Resample(48_000, 16_000),
}
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�Utrnle\\_]'
unicode_ignore_regex = r'[\u200e]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
# Resample if its not in 16kHz
if sampling_rate != 16000:
batch["speech"] = resamplers[sampling_rate](speech_array).squeeze().numpy()
else:
batch["speech"] = speech_array.squeeze().numpy()
# If more than one dimension is present, pick first one
if batch["speech"].ndim > 1:
batch["speech"] = batch["speech"][0]
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (WER)**: 28.43 %
## Training
A combined dataset was created using [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The datasets were downloaded and converted to HF Dataset format using [this notebook](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/make_hf_dataset.ipynb)
The notebook used for training and evaluation can be found [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/fine-tune-xlsr-wav2vec2-on-malayalam-asr-with-transformers_v2.ipynb) |
Chae/botman | 18d7ba38a24f78e491acdc64421bf8db0bd911cb | 2021-12-16T22:54:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Chae | null | Chae/botman | 281 | null | transformers | 3,113 | ---
tags:
- conversational
---
# Lego Batman DialoGPT Model |
Dawit/DialogGPT-small-ironman | 590a305bdf1d6dab66d71d47e9f1b56bc3b0622d | 2021-10-04T22:37:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Dawit | null | Dawit/DialogGPT-small-ironman | 281 | null | transformers | 3,114 | ---
tags:
- conversational
---
# Iron Man DialoGPT Model |
TransQuest/siamesetransquest-da-en_zh-wiki | 49e26e32d9f3715e73be25197c55e57514208d25 | 2021-06-04T08:09:52.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"en-zh",
"transformers",
"Quality Estimation",
"siamesetransquest",
"da",
"license:apache-2.0"
] | feature-extraction | false | TransQuest | null | TransQuest/siamesetransquest-da-en_zh-wiki | 281 | null | transformers | 3,115 | ---
language: en-zh
tags:
- Quality Estimation
- siamesetransquest
- da
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel
model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-en_zh-wiki")
predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
google/t5-3b-ssm-nq | 41e4614a9cc7c70616d5af7a172be534e63763a8 | 2020-12-07T08:40:21.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-3b-ssm-nq | 281 | null | transformers | 3,116 | ---
language: en
datasets:
- c4
- wikipedia
- natural_questions
pipeline_tag: text2text-generation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nq|37.9|
|**T5-3b**|**https://huggingface.co/google/t5-3b-ssm-nq**|**33.2**|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nq|36.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-3b-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-3b-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
JazzyLucas/DialoGPT-small-TonyStark | 5185907181726a6825b61150bbc8b98adf11a75a | 2022-06-28T20:20:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JazzyLucas | null | JazzyLucas/DialoGPT-small-TonyStark | 281 | null | transformers | 3,117 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
Rush11/DialoGPT-small-HarryPotter | 799549795ef573de8012f54f2496e1553d62c777 | 2021-11-11T13:36:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Rush11 | null | Rush11/DialoGPT-small-HarryPotter | 280 | null | transformers | 3,118 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
SilentMyuth/stableben | 04b2b427113cf0b59626e75dca7e28a6b8cccb3e | 2021-07-21T21:05:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SilentMyuth | null | SilentMyuth/stableben | 280 | null | transformers | 3,119 | ---
tags:
- conversational
---
# My Awesome Model |
allenai/hvila-block-layoutlm-finetuned-docbank | 259c0cd08caaf1b0abefd447986cb905fbd06a23 | 2021-09-27T22:57:29.000Z | [
"pytorch",
"hierarchical_model",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | allenai | null | allenai/hvila-block-layoutlm-finetuned-docbank | 280 | null | transformers | 3,120 | Entry not found |
Theivaprakasham/layoutlmv3-finetuned-invoice | 817177117100efab28f4ecf162b1b1241df756cb | 2022-06-07T07:35:54.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"dataset:invoice",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Theivaprakasham | null | Theivaprakasham/layoutlmv3-finetuned-invoice | 280 | null | transformers | 3,121 | ---
tags:
- generated_from_trainer
datasets:
- invoice
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: Invoice
type: invoice
args: invoice
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Accuracy
type: accuracy
value: 1.0
---
# LayoutLM-v3 model fine-tuned on invoice dataset
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the invoice dataset.
We use Microsoft's LayoutLMv3 trained on the invoice dataset to predict the Biller Name, Biller Address, Biller post_code, Due_date, GST, Invoice_date, Invoice_number, Subtotal and Total. To try it out, upload an invoice image to the demo Space linked below; results show up in a few seconds.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
All the training codes are available from the below GitHub link.
https://github.com/Theivaprakasham/layoutlmv3
The model can be evaluated at the HuggingFace Spaces link:
https://huggingface.co/spaces/Theivaprakasham/layoutlmv3_invoice
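For quick local inference, a minimal sketch (assuming the standard LayoutLMv3 classes from `transformers`, with `pytesseract` available so the processor can run OCR; the image path is illustrative):
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-invoice", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("Theivaprakasham/layoutlmv3-finetuned-invoice")

image = Image.open("invoice.png").convert("RGB")  # hypothetical input image
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]  # one label per token, including special tokens
print(labels)
```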
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.0878 | 0.968 | 0.9817 | 0.9748 | 0.9966 |
| No log | 4.0 | 200 | 0.0241 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 6.0 | 300 | 0.0186 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 8.0 | 400 | 0.0184 | 0.9854 | 0.9574 | 0.9712 | 0.9956 |
| 0.1308 | 10.0 | 500 | 0.0121 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1308 | 12.0 | 600 | 0.0076 | 0.9939 | 0.9878 | 0.9908 | 0.9987 |
| 0.1308 | 14.0 | 700 | 0.0047 | 1.0 | 0.9959 | 0.9980 | 0.9996 |
| 0.1308 | 16.0 | 800 | 0.0036 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.1308 | 18.0 | 900 | 0.0045 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0069 | 20.0 | 1000 | 0.0043 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0069 | 22.0 | 1100 | 0.0016 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0069 | 24.0 | 1200 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0069 | 26.0 | 1300 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0069 | 28.0 | 1400 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0026 | 30.0 | 1500 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0026 | 32.0 | 1600 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0026 | 34.0 | 1700 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0026 | 36.0 | 1800 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0026 | 38.0 | 1900 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.002 | 40.0 | 2000 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
EEE/DialoGPT-small-aang | 7763800f9df9dada1a6d6ad050e02804aa7f4f9a | 2021-11-13T07:28:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | EEE | null | EEE/DialoGPT-small-aang | 279 | null | transformers | 3,122 | ---
tags:
- conversational
---
# Aang DialoGPT Model |
aashutosh2102/DialoGPT-smalll-harrypotter | f21fd4183f9d581313901bc33f6ab4afc48153e8 | 2021-08-26T19:30:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aashutosh2102 | null | aashutosh2102/DialoGPT-smalll-harrypotter | 279 | 1 | transformers | 3,123 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
facebook/convnext-small-224 | 1f83cacb36771bc46877595aecf6f6c7a9f941f9 | 2022-02-26T12:17:32.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-small-224 | 279 | null | transformers | 3,124 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (small-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-small-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-small-224")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
m3hrdadfi/wav2vec2-large-xlsr-turkish | 8699cf317b8f9a834ddf192608324ae5bbd191f0 | 2021-07-06T11:07:44.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-turkish | 279 | 2 | transformers | 3,125 | ---
language: tr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 1378
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1378.flac
- label: Common Voice sample 1589
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-turkish/resolve/main/sample1589.flac
model-index:
- name: XLSR Wav2Vec2 Turkish by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 27.51
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Turkish using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device)
dataset = load_dataset("common_voice", "tr", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 10).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: ülke şu anda iki federasyona üye
predicted: ülke şu anda iki federasyona üye
---
reference: foruma dört yüzde fazla kişi katıldı
predicted: soruma dört yüzden fazla kişi katıldı
---
reference: mobi altmış üç çalışanları da mutsuz
predicted: mobia haltmış üç çalışanları da mutsur
---
reference: kentin mali esnekliğinin düşük olduğu bildirildi
predicted: kentin mali esnekleğinin düşük olduğu bildirildi
---
reference: fouere iki ülkeyi sorunu abartmamaya çağırdı
predicted: foor iki ülkeyi soruna abartmamaya çanayordı
---
reference: o ülkeden herhangi bir tepki geldi mi
predicted: o ülkeden herhayın bir tepki geldi mi
---
reference: bunlara asla sırtımızı dönmeyeceğiz
predicted: bunlara asla sırtımızı dönmeyeceğiz
---
reference: sizi ayakta tutan nedir
predicted: sizi ayakta tutan nedir
---
reference: artık insanlar daha bireysel yaşıyor
predicted: artık insanlar daha bir eyselli yaşıyor
---
reference: her ikisi de diyaloga hazır olduğunu söylüyor
predicted: her ikisi de diyaloğa hazır olduğunu söylüyor
---
reference: merkez bankasının başlıca amacı düşük enflasyon
predicted: merkez bankasının başlrıca anatı güşükyen flasyon
---
reference: firefox
predicted: fair foks
---
reference: ülke halkı çok misafirsever ve dışa dönük
predicted: ülke halktı çok isatirtever ve dışa dönük
---
reference: ancak kamuoyu bu durumu pek de affetmiyor
predicted: ancak kamuonyulgukirmu pek deafıf etmiyor
---
reference: i ki madende iki bin beş yüzden fazla kişi çalışıyor
predicted: i ki madende iki bin beş yüzden fazla kişi çalışıyor
---
reference: sunnyside park dışarıdan oldukça iyi görünüyor
predicted: sani sahip park dışarıdan oldukça iyi görünüyor
---
reference: büyük ödül on beş bin avro
predicted: büyük ödül on beş bin avro
---
reference: köyümdeki camiler depoya dönüştürüldü
predicted: küyümdeki camiler depoya dönüştürüldü
---
reference: maç oldukça diplomatik bir sonuçla birbir bitti
predicted: maç oldukça diplomatik bir sonuçla bir birbitti
---
reference: kuşların ikisi de karantinada öldüler
predicted: kuşların ikiste karantinada özdüler
---
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"',
"“", "%", "‘", "�", "–", "…", "_", "”", '“', '„'
]
chars_to_mapping = {
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
"\u0307": " "
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = text.replace("\u0307", " ").strip()
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish").to(device)
dataset = load_dataset("common_voice", "tr", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result**:
- WER: 27.51%
## Training & Report
The Common Voice `train` and `validation` splits were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_turkish/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Turkish--Vmlldzo1Njc1MDc?accessToken=02vm5cwbi7d342vyt7h9w9859zex0enltdmjoreyjt3bd5qwv0vs0g3u93iv92q0).
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb) |
ccdv/lsg-bart-base-4096-pubmed | 690c90499c848071d8771a8e3fb658b1d3c84a1d | 2022-07-25T05:30:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:scientific_papers",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-4096-pubmed | 279 | 1 | transformers | 3,126 | ---
language:
- en
tags:
- summarization
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file, so you need to add `trust_remote_code=True`.**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-pubmed", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-pubmed", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-pubmed
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [scientific_papers pubmed](https://huggingface.co/datasets/scientific_papers) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type  | Block Size | Sparsity | Connections  | R1    | R2    | RL    | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 47.37 | 21.74 | 28.59 | 43.67 |
| 4096 | Local | 128 | 0 | 384 | 47.02 | 21.33 | 28.34 | 43.31 |
| 4096 | Pooling | 128 | 4 | 644 | 47.11 | 21.42 | 28.43 | 43.40 |
| 4096 | Stride | 128 | 4 | 644 | 47.16 | 21.49 | 28.38 | 43.44 |
| 4096 | Block Stride | 128 | 4 | 644 | 47.13 | 21.46 | 28.39 | 43.42 |
| 4096 | Norm | 128 | 4 | 644 | 47.09 | 21.44 | 28.40 | 43.36 |
| 4096 | LSH | 128 | 4 | 644 | 47.11 | 21.41 | 28.41 | 43.42 |
With a smaller block size (lower resources):
| Length | Sparse Type  | Block Size | Sparsity | Connections  | R1    | R2    | RL    | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 45.74 | 20.26 | 27.51 | 41.99 |
| 4096 | Local | 32 | 0 | 96 | 42.69 | 17.83 | 25.62 | 38.89 |
| 4096 | Pooling | 32 | 4 | 160 | 44.60 | 19.35 | 26.83 | 40.85 |
| 4096 | Stride | 32 | 4 | 160 | 45.52 | 20.07 | 27.39 | 41.75 |
| 4096 | Block Stride | 32 | 4 | 160 | 45.30 | 19.89 | 27.22 | 41.54 |
| 4096 | Norm | 32 | 4 | 160 | 44.30 | 19.05 | 26.57 | 40.47 |
| 4096 | LSH | 32 | 4 | 160 | 44.53 | 19.27 | 26.84 | 40.74 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only), and fine-tuned.
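The sparse patterns reported in the tables above can be adjusted when the model is loaded. A minimal sketch follows; the keyword names (`block_size`, `sparse_block_size`, `sparsity_factor`, `sparsity_type`) are assumptions based on the LSG codebase and are not documented in this card, so verify them against the custom modeling file before relying on them:

```python
from transformers import AutoModelForSeq2SeqLM

# Hypothetical configuration override: smaller local blocks with strided sparse attention.
# The kwargs below are assumptions; check the repository's custom modeling file.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "ccdv/lsg-bart-base-4096-pubmed",
    trust_remote_code=True,
    block_size=128,          # local attention block size
    sparse_block_size=128,   # sparse attention block size
    sparsity_factor=4,       # sparsity level
    sparsity_type="stride",  # one of the sparse patterns listed in the tables above
)
```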
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: scientific_papers
- dataset_config_name: pubmed
- eval_batch_size: 8
- eval_samples: 6658
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 512
- min_length: 128
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
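A minimal sketch applying the generation settings listed above with `model.generate`; `tokenizer` and `model` are the objects loaded earlier with `trust_remote_code=True`, and `text` is a placeholder for a long PubMed article:

```python
# Reproduce the evaluation-time generation settings from the list above.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
inputs = {k: v.to(model.device) for k, v in inputs.items()}
summary_ids = model.generate(
    **inputs,
    num_beams=5,
    max_length=512,
    min_length=128,
    length_penalty=2.0,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```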
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
nlp-waseda/roberta-large-japanese-seq512 | 40948ff39bb7d4ae28aa2b9aed31ec33a5483d09 | 2022-06-13T10:10:39.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | nlp-waseda | null | nlp-waseda/roberta-large-japanese-seq512 | 279 | 1 | transformers | 3,127 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
---
# nlp-waseda/roberta-large-japanese-seq512
## Model description
This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the Japanese portion of CC-100 with a maximum sequence length of 512.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese-seq512")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese-seq512")
sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
```
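To complete the snippet above, here is a minimal sketch for ranking predictions at the `[MASK]` position; it uses only the standard Transformers masked-LM API and assumes nothing model-specific:

```python
import torch

with torch.no_grad():
    logits = model(**encoding).logits

# positions of the [MASK] token in the input
mask_positions = (encoding.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = torch.topk(logits[0, mask_positions[0]], k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))
```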
You can fine-tune this model on downstream tasks.
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
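A sketch of the required pre-segmentation step using the `pyknp` wrapper around Juman++; this assumes Juman++ 2.x and `pyknp` are installed, and the wrapper API shown here is an assumption not specified by this card:

```python
# Hypothetical pre-segmentation helper; verify the pyknp API against its documentation.
from pyknp import Juman

jumanpp = Juman()

def segment(text: str) -> str:
    # split raw text into the whitespace-separated words expected by the tokenizer
    return " ".join(m.midasi for m in jumanpp.analysis(text).mrph_list())

print(segment("早稲田大学で自然言語処理を研究する。"))
```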
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100 from the checkpoint of [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese). It took a week using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 6e-5
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 4120 (max_seq_length=128), 4032 (max_seq_length=512)
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6
- lr_scheduler_type: linear
- training_steps: 670000 (max_seq_length=128) + 70000 (max_seq_length=512)
- warmup_steps: 10000
- mixed_precision_training: Native AMP
|
DecafNosebleed/DialoGPT-small-ScaraBot | 5b700a2928bbb0c00dfc46159ac52d5e7ea68a44 | 2021-12-27T22:12:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | DecafNosebleed | null | DecafNosebleed/DialoGPT-small-ScaraBot | 278 | null | transformers | 3,128 | ---
tags:
- conversational
---
# Scaramouche DialoGPT Model |
imxly/t5-pegasus-small | fafe710bf3039f39efa7b0278969561e838d478f | 2021-06-23T15:07:42.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | imxly | null | imxly/t5-pegasus-small | 278 | 2 | transformers | 3,129 | Entry not found |
jhgan/ko-sbert-sts | ff20c24e4544a5def5b90d83e8268e8d46af6ada | 2021-12-27T12:56:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jhgan | null | jhgan/ko-sbert-sts | 278 | null | sentence-transformers | 3,130 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ko-sbert-sts
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-sts')
model = AutoModel.from_pretrained('jhgan/ko-sbert-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
The results below were obtained by training on the KorSTS training set and evaluating on the KorSTS evaluation set.
- Cosine Pearson: 81.55
- Cosine Spearman: 81.23
- Euclidean Pearson: 79.94
- Euclidean Spearman: 79.79
- Manhattan Pearson: 79.90
- Manhattan Spearman: 79.75
- Dot Pearson: 76.02
- Dot Spearman: 75.31
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv
preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)
|
julianolf/DialoGPT-small-harrypotter | af7b7888b8e8081c86e58efa9d33a73fe658111e | 2021-08-28T17:47:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | julianolf | null | julianolf/DialoGPT-small-harrypotter | 278 | null | transformers | 3,131 | ---
tags:
- conversational
---
# Harry Potter DialogGPT Model |
monologg/kobert-lm | ce972e68502e016cef607a275cddffe684633484 | 2021-05-19T23:51:48.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | monologg | null | monologg/kobert-lm | 278 | null | transformers | 3,132 | Entry not found |
mrm8488/diltilgpt2-finetuned-bookcopus-10 | 7c4a124633be754988c81f16835eb9605bdbf3e7 | 2021-05-23T10:19:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/diltilgpt2-finetuned-bookcopus-10 | 278 | 3 | transformers | 3,133 | Entry not found |
persiannlp/mt5-base-parsinlu-opus-translation_fa_en | b7e4fc1aaf34c0679be7127e57e9e1b12829d80f | 2021-09-23T16:19:57.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"machine-translation",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-opus-translation_fa_en | 278 | null | transformers | 3,134 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
which should give the following:
```
['the admiration of God, which is the Lord of the world.']
['At the Ford Park, the Crawford Park stands on a vase;']
['He thanked all the bloggers, the organizations, and the people who supported him']
['similar to the year 2001, the economy of ammonia in the United States in the']
['I want to follow the computer experts on social networks, what is the unsolved problem in']
```
which should give the following:
```
['Adoration of God, the Lord of the world.']
['At the High End of the Park, Conrad stands on a vase preaching;']
['She thanked all the bloggers, organizations, and men who had supported her.']
['In 2000, the lack of water ammonia in the United States was almost']
['I want to follow the computer science doctorate on social networks. What is the unsolved challenge']
```
Which should produce the following:
```
['the praise of God, the Lord of the world.']
['At the Hyde Park Corner, Carpenter is preaching on a vase;']
['He thanked all the bloggers, organizations, and people who had supported him.']
['Similarly in 2001, the production of waterless ammonia in the United States was']
['I want to pursue my degree in Computer Science on social networks, what is the']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
QCRI/bert-base-multilingual-cased-pos-english | 0b39becb9520965fefc653a7d67dd4accf8cd273 | 2022-06-13T09:03:43.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"part-of-speech",
"finetuned",
"license:cc-by-nc-3.0",
"autotrain_compatible"
] | token-classification | false | QCRI | null | QCRI/bert-base-multilingual-cased-pos-english | 278 | null | transformers | 3,135 | ---
language:
- en
tags:
- part-of-speech
- finetuned
license: cc-by-nc-3.0
---
# BERT-base-multilingual-cased finetuned for Part-of-Speech tagging
This is a multilingual BERT model fine-tuned for English part-of-speech tagging. It is trained on the Penn Treebank (Marcus et al., 1993) and achieves an F1-score of 96.69.
## Usage
A *transformers* pipeline can be used to run the model:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline
model_name = "QCRI/bert-base-multilingual-cased-pos-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
outputs = pipeline("A test example")
print(outputs)
```
## Citation
This model was used for all the part-of-speech tagging based results in *Analyzing Encoded Concepts in Transformer Language Models*, published at NAACL'22. If you find this model useful for your own work, please use the following citation:
```bib
@inproceedings{sajjad-NAACL,
title={Analyzing Encoded Concepts in Transformer Language Models},
author={Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, Abdul Rafae Khan and Jia Xu},
booktitle={North American Chapter of the Association of Computational Linguistics: Human Language Technologies (NAACL)},
series={NAACL~'22},
year={2022},
address={Seattle}
}
``` |
Theivaprakasham/layoutlmv3-finetuned-wildreceipt | 53cff4056eb698bba9d262861580793844946349 | 2022-06-11T09:14:40.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"dataset:wild_receipt",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Theivaprakasham | null | Theivaprakasham/layoutlmv3-finetuned-wildreceipt | 278 | 1 | transformers | 3,136 | ---
tags:
- generated_from_trainer
datasets:
- wild_receipt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-wildreceipt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wild_receipt
type: wild_receipt
args: WildReceipt
metrics:
- name: Precision
type: precision
value: 0.877212237618329
- name: Recall
type: recall
value: 0.8798678959680749
- name: F1
type: f1
value: 0.8785380599065679
- name: Accuracy
type: accuracy
value: 0.9249204782274871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-wildreceipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wild_receipt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
- Precision: 0.8772
- Recall: 0.8799
- F1: 0.8785
- Accuracy: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The WildReceipt dataset consists of 1,740 receipt images annotated with 25 key information categories and a total of about 69,000 text boxes. 1,268 images are used for training and 472 for testing when fine-tuning the LayoutLMv3 model for key information extraction.
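A minimal inference sketch. It assumes the processor is reused from the `microsoft/layoutlmv3-base` checkpoint with its built-in OCR (which requires `pytesseract`); this card does not document a dedicated processor, so treat that choice as an assumption:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Assumption: reuse the base processor with apply_ocr=True, since this repo
# fine-tunes microsoft/layoutlmv3-base.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(
    "Theivaprakasham/layoutlmv3-finetuned-wildreceipt"
)

image = Image.open("receipt.jpg").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```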
## Training procedure
The training code: https://github.com/Theivaprakasham/layoutlmv3/blob/main/training_codes/LayoutLMv3_training_WildReceipts_dataset.ipynb
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 1.3143 | 0.6709 | 0.2679 | 0.3829 | 0.6700 |
| No log | 0.63 | 200 | 0.8814 | 0.6478 | 0.5195 | 0.5766 | 0.7786 |
| No log | 0.95 | 300 | 0.6568 | 0.7205 | 0.6491 | 0.6829 | 0.8303 |
| No log | 1.26 | 400 | 0.5618 | 0.7544 | 0.7072 | 0.7300 | 0.8519 |
| 1.0284 | 1.58 | 500 | 0.5003 | 0.7802 | 0.7566 | 0.7682 | 0.8687 |
| 1.0284 | 1.89 | 600 | 0.4454 | 0.7941 | 0.7679 | 0.7807 | 0.8748 |
| 1.0284 | 2.21 | 700 | 0.4314 | 0.8142 | 0.7928 | 0.8033 | 0.8852 |
| 1.0284 | 2.52 | 800 | 0.3870 | 0.8172 | 0.8200 | 0.8186 | 0.8953 |
| 1.0284 | 2.84 | 900 | 0.3629 | 0.8288 | 0.8369 | 0.8329 | 0.9025 |
| 0.4167 | 3.15 | 1000 | 0.3537 | 0.8540 | 0.8200 | 0.8366 | 0.9052 |
| 0.4167 | 3.47 | 1100 | 0.3383 | 0.8438 | 0.8285 | 0.8361 | 0.9063 |
| 0.4167 | 3.79 | 1200 | 0.3403 | 0.8297 | 0.8493 | 0.8394 | 0.9062 |
| 0.4167 | 4.1 | 1300 | 0.3271 | 0.8428 | 0.8545 | 0.8487 | 0.9110 |
| 0.4167 | 4.42 | 1400 | 0.3182 | 0.8491 | 0.8518 | 0.8504 | 0.9131 |
| 0.2766 | 4.73 | 1500 | 0.3111 | 0.8491 | 0.8539 | 0.8515 | 0.9129 |
| 0.2766 | 5.05 | 1600 | 0.3177 | 0.8397 | 0.8620 | 0.8507 | 0.9124 |
| 0.2766 | 5.36 | 1700 | 0.3091 | 0.8676 | 0.8548 | 0.8612 | 0.9191 |
| 0.2766 | 5.68 | 1800 | 0.3080 | 0.8508 | 0.8645 | 0.8576 | 0.9162 |
| 0.2766 | 5.99 | 1900 | 0.3059 | 0.8492 | 0.8662 | 0.8576 | 0.9163 |
| 0.2114 | 6.31 | 2000 | 0.3184 | 0.8536 | 0.8657 | 0.8596 | 0.9147 |
| 0.2114 | 6.62 | 2100 | 0.3161 | 0.8583 | 0.8713 | 0.8648 | 0.9184 |
| 0.2114 | 6.94 | 2200 | 0.3055 | 0.8707 | 0.8682 | 0.8694 | 0.9220 |
| 0.2114 | 7.26 | 2300 | 0.3004 | 0.8689 | 0.8745 | 0.8717 | 0.9219 |
| 0.2114 | 7.57 | 2400 | 0.3111 | 0.8701 | 0.8720 | 0.8711 | 0.9211 |
| 0.174 | 7.89 | 2500 | 0.3130 | 0.8599 | 0.8741 | 0.8669 | 0.9198 |
| 0.174 | 8.2 | 2600 | 0.3034 | 0.8661 | 0.8748 | 0.8704 | 0.9219 |
| 0.174 | 8.52 | 2700 | 0.3005 | 0.8799 | 0.8673 | 0.8736 | 0.9225 |
| 0.174 | 8.83 | 2800 | 0.3043 | 0.8687 | 0.8804 | 0.8745 | 0.9240 |
| 0.174 | 9.15 | 2900 | 0.3121 | 0.8776 | 0.8704 | 0.8740 | 0.9242 |
| 0.1412 | 9.46 | 3000 | 0.3131 | 0.8631 | 0.8755 | 0.8692 | 0.9204 |
| 0.1412 | 9.78 | 3100 | 0.3067 | 0.8715 | 0.8773 | 0.8744 | 0.9233 |
| 0.1412 | 10.09 | 3200 | 0.3021 | 0.8751 | 0.8812 | 0.8782 | 0.9248 |
| 0.1412 | 10.41 | 3300 | 0.3092 | 0.8651 | 0.8808 | 0.8729 | 0.9228 |
| 0.1412 | 10.73 | 3400 | 0.3084 | 0.8776 | 0.8749 | 0.8762 | 0.9237 |
| 0.1254 | 11.04 | 3500 | 0.3156 | 0.8738 | 0.8785 | 0.8761 | 0.9237 |
| 0.1254 | 11.36 | 3600 | 0.3131 | 0.8723 | 0.8818 | 0.8770 | 0.9244 |
| 0.1254 | 11.67 | 3700 | 0.3108 | 0.8778 | 0.8781 | 0.8780 | 0.9250 |
| 0.1254 | 11.99 | 3800 | 0.3097 | 0.8778 | 0.8771 | 0.8775 | 0.9239 |
| 0.1254 | 12.3 | 3900 | 0.3115 | 0.8785 | 0.8801 | 0.8793 | 0.9251 |
| 0.111 | 12.62 | 4000 | 0.3108 | 0.8772 | 0.8799 | 0.8785 | 0.9249 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
akshatpandeyme/DialoGPT-small-ParthivBot | e673574a4cd58286dcb39872bef118cf03b64333 | 2022-07-25T11:06:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | akshatpandeyme | null | akshatpandeyme/DialoGPT-small-ParthivBot | 278 | null | transformers | 3,137 | ---
tags:
- conversational
---
# ParthivBot |
MathiasVS/DialoGPT-small-RickAndMorty | 78e2b308af3adfd0d705f46140710f3ab069e82f | 2021-08-29T11:35:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MathiasVS | null | MathiasVS/DialoGPT-small-RickAndMorty | 277 | null | transformers | 3,138 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
asapp/sew-tiny-100k | 7ff3f0b171e114f8be849557bdd7208f82cf7b41 | 2021-10-26T19:40:45.000Z | [
"pytorch",
"sew",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
] | feature-extraction | false | asapp | null | asapp/sew-tiny-100k | 277 | null | transformers | 3,139 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-tiny
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
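A minimal feature-extraction sketch for this pretrained (not yet fine-tuned) checkpoint. It assumes the repository ships a feature-extractor config loadable via `AutoFeatureExtractor`; the zero waveform is a placeholder for real 16 kHz audio:

```python
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-tiny-100k")
model = SEWModel.from_pretrained("asapp/sew-tiny-100k")

# one second of silence as a stand-in for a real 16 kHz waveform
waveform = torch.zeros(16_000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```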
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`. |
bensuydam/CartmanBot | 08cc5e80f808e781293d5026eaefe8526fdb6984 | 2022-02-16T02:48:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | bensuydam | null | bensuydam/CartmanBot | 277 | null | transformers | 3,140 | ---
tags:
- conversational
---
# GPTCartman |
oliverqq/scibert-uncased-topics | 971f56ff2f269bbefc57162c9248def5c9e36b45 | 2021-05-20T02:13:36.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | oliverqq | null | oliverqq/scibert-uncased-topics | 277 | 1 | transformers | 3,141 | Entry not found |
transfaeries/Twilight-Sparkle-GPT | c91dd543f903cfb71372fc4af407d6d16d4ef1f1 | 2021-09-17T18:53:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | transfaeries | null | transfaeries/Twilight-Sparkle-GPT | 277 | null | transformers | 3,142 | ---
tags:
- conversational
---
# Twilight Model Medium 13 epochs |
kche0138/DialoGPT-medium-DIO | 0fbc05d01caaad3591e76f2e1c159807f6817f2c | 2021-09-02T06:09:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kche0138 | null | kche0138/DialoGPT-medium-DIO | 276 | null | transformers | 3,143 | ---
tags:
- conversational
---
# DIO DialoGPT Model |
nateraw/bert-base-uncased-imdb | 45fafe885ac78d58644337ef105f3c922c054e8c | 2021-05-20T01:19:33.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nateraw | null | nateraw/bert-base-uncased-imdb | 276 | null | transformers | 3,144 | Entry not found |
sudoabrar/DialoGPT-small-dwight | 37d2eb662dc22decd2a54d29d08e104fa299aa8c | 2021-10-01T19:37:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sudoabrar | null | sudoabrar/DialoGPT-small-dwight | 276 | null | transformers | 3,145 | ---
tags:
- conversational
---
# Dwight DialoGPT Model
You can find the code [here](https://github.com/sudo-apt-Abrar/BearsandBeets) |
prprakash/DialoGPT-small-TonyStark | 7a1bb0e766fbe39ebd232d3d5145c550f4986e4d | 2022-06-22T06:01:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | prprakash | null | prprakash/DialoGPT-small-TonyStark | 276 | null | transformers | 3,146 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
DeepChem/ChemBERTa-5M-MTR | a642f2611383af9c37550b805ca0ecc279079914 | 2022-01-20T17:47:34.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | DeepChem | null | DeepChem/ChemBERTa-5M-MTR | 275 | null | transformers | 3,147 | Entry not found |
Kai0857/DialoGPT-small-harrypotter | 1573a7e97dbecb29e7e9650e9c9a50713a9c0ecc | 2021-08-31T05:06:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kai0857 | null | Kai0857/DialoGPT-small-harrypotter | 275 | null | transformers | 3,148 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Kryptone/monikAI-Unstable | efe45cbeefd80853b4579a06dce40f896aefc169 | 2021-10-03T05:18:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kryptone | null | Kryptone/monikAI-Unstable | 275 | null | transformers | 3,149 | ---
tags:
- conversational
---
# MoniKA unstable |
aydin/DialoGPT-medium-michael | 6cd2edc4000a6900cde6156c5e74b5c904767670 | 2021-07-02T06:42:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aydin | null | aydin/DialoGPT-medium-michael | 275 | null | transformers | 3,150 | ---
tags:
- conversational
---
# My Awesome Model |
huggingtweets/elonmusk | 0aa19bde11c196ec43c986ffb1dc16b3f8764186 | 2022-07-30T01:48:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/elonmusk | 275 | 2 | transformers | 3,151 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk</div>
<div style="text-align: center; font-size: 14px;">@elonmusk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk.
| Data | Elon Musk |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 129 |
| Short tweets | 981 |
| Tweets kept | 2090 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1l8bo4vm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/iq9jbfok) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/iq9jbfok/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
indobenchmark/indobert-lite-base-p2 | e65a8f16078d5e4622e1df53204c2acd12e3074f | 2020-12-11T21:45:53.000Z | [
"pytorch",
"tf",
"albert",
"feature-extraction",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] | feature-extraction | false | indobenchmark | null | indobenchmark/indobert-lite-base-p2 | 275 | null | transformers | 3,152 | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Base Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p2")
```
### Extract contextual representation
```python
import torch

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
m-newhauser/distilbert-political-tweets | b7c4530e8c44cf8dcf448a6e5e5e460df33f83bf | 2022-07-07T09:07:44.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"dataset:m-newhauser/senator-tweets",
"transformers",
"generated_from_keras_callback",
"license:lgpl-3.0"
] | text-classification | false | m-newhauser | null | m-newhauser/distilbert-political-tweets | 275 | 4 | transformers | 3,153 | ---
language:
- en
license: lgpl-3.0
library_name: transformers
tags:
- text-classification
- transformers
- pytorch
- generated_from_keras_callback
metrics:
- accuracy
- f1
datasets:
- m-newhauser/senator-tweets
widget:
- text: "This pandemic has shown us clearly the vulgarity of our healthcare system. Highest costs in the world, yet not enough nurses or doctors. Many millions uninsured, while insurance company profits soar. The struggle continues. Healthcare is a human right. Medicare for all."
example_title: "Bernie Sanders (D)"
- text: "Team Biden would rather fund the Ayatollah's Death to America regime than allow Americans to produce energy for our own domestic consumption."
example_title: "Ted Cruz (R)"
---
# distilbert-political-tweets 🗣 🇺🇸
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [m-newhauser/senator-tweets](https://huggingface.co/datasets/m-newhauser/senator-tweets) dataset, which contains all tweets made by United States senators during the first year of the Biden Administration.
It achieves the following results on the evaluation set:
* Accuracy: 0.9076
* F1: 0.9117
## Model description
The goal of this model is to classify short pieces of text as having either Democratic or Republican sentiment. The model was fine-tuned on 99,693 tweets (51.6% Democrat, 48.4% Republican) made by US senators in 2021.
Model accuracy may not hold up on pieces of text longer than a tweet.
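A minimal sketch of running the classifier with the `pipeline` API; the label names returned are whatever is stored in the model config, so inspect `model.config.id2label` rather than assuming them:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m-newhauser/distilbert-political-tweets",
)
print(classifier("Healthcare is a human right. Medicare for all."))
```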
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
- training_precision: float32
- learning_rate = 5e-5
- num_epochs = 5
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
m3hrdadfi/hubert-base-greek-speech-emotion-recognition | 06f53faea1a01f62f4fc8c99e4aa6940b0cfe217 | 2021-06-17T16:05:44.000Z | [
"pytorch",
"hubert",
"el",
"dataset:aesdd",
"transformers",
"audio",
"speech",
"speech-emotion-recognition",
"license:apache-2.0"
] | null | false | m3hrdadfi | null | m3hrdadfi/hubert-base-greek-speech-emotion-recognition | 275 | null | transformers | 3,154 | ---
language: el
datasets:
- aesdd
tags:
- audio
- speech
- speech-emotion-recognition
license: apache-2.0
---
# Emotion Recognition in Greek (el) Speech using HuBERT
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-base-greek-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/disgust.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Emotion': 'anger', 'Score': '0.0%'},
{'Emotion': 'disgust', 'Score': '99.2%'},
{'Emotion': 'fear', 'Score': '0.1%'},
{'Emotion': 'happiness', 'Score': '0.3%'},
{'Emotion': 'sadness', 'Score': '0.5%'}
]
```
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
| Emotions | precision | recall | f1-score | accuracy |
|:---------:|:---------:|:------:|:--------:|:--------:|
| anger | 1.00 | 0.92 | 0.96 | |
| disgust | 0.92 | 1.00 | 0.96 | |
| fear | 1.00 | 0.88 | 0.93 | |
| happiness | 0.96 | 0.92 | 0.94 | |
| sadness | 0.86 | 1.00 | 0.93 | |
| | | | Overall | 0.94 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
Taramiko/Hoshiyo_Kojima | 0694ef06fcba3fef3256718f06b8491a2c7473b4 | 2022-03-02T02:03:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Taramiko | null | Taramiko/Hoshiyo_Kojima | 275 | null | transformers | 3,155 | ---
tags:
- conversational
---
# Hoshiyo Kojima DialoGPT Model |
hfl/chinese-pert-large-mrc | c74f3cbfcd73ef9af91c74607ed3fa39064b799a | 2022-05-05T08:43:53.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"zh",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | hfl | null | hfl/chinese-pert-large-mrc | 275 | 2 | transformers | 3,156 | ---
language:
- zh
license: "apache-2.0"
---
## A Chinese MRC model built on Chinese PERT-large
**Please use `BertForQuestionAnswering` to load this model!**
This is a Chinese machine reading comprehension (MRC) model built on PERT-large and fine-tuned on a mixture of Chinese MRC datasets.
PERT is a pre-trained model based on a permuted language model (PerLM) that learns text semantics in a self-supervised manner without introducing the mask token [MASK]. It yields competitive results in tasks such as reading comprehension and sequence labeling.
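A minimal usage sketch following the note above to load the checkpoint with `BertForQuestionAnswering`; the question and context strings are placeholders:

```python
from transformers import BertTokenizer, BertForQuestionAnswering, pipeline

model_name = "hfl/chinese-pert-large-mrc"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="联合实验室位于哪里?", context="哈工大讯飞联合实验室(HFL)位于中国。"))
```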
Results on Chinese MRC datasets (EM/F1):
(We report the checkpoint that has the best AVG score)
| | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG |
| :-------: | :-----------: | :-------: | :------------------------: | :-------: |
| PERT-large | 73.5/90.8 | 91.2/95.7 | 63.0/79.3 | 75.9/88.6 |
Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
You may also be interested in,
Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
Capreolus/bert-base-msmarco | 779741b8d851512ca133126727ae7f091a2c3d01 | 2021-05-18T17:35:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | false | Capreolus | null | Capreolus/bert-base-msmarco | 274 | null | transformers | 3,157 | # capreolus/bert-base-msmarco
## Model description
BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model; see the [Capreolus BERT-MaxP implementation](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) for a usage example.
This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_bert_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
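A hedged sketch of scoring a query–passage pair for reranking. Using the second logit as the relevance score mirrors common monoBERT-style setups and is an assumption here; consult the Capreolus implementation linked above for the exact input format:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Capreolus/bert-base-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("Capreolus/bert-base-msmarco")

query = "what is the capital of france"       # placeholder query
passage = "Paris is the capital of France."   # placeholder passage
inputs = tokenizer(query, passage, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
# Assumption: treat the probability of the positive class as the relevance score.
print(logits.softmax(dim=-1)[0, 1].item())
```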
|
rsd511/DialoGPT-small-house | 25633299065c348ff40ece4ee01e95da8a705952 | 2022-03-04T21:55:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rsd511 | null | rsd511/DialoGPT-small-house | 274 | null | transformers | 3,158 | ---
tags:
- conversational
---
# House Bot |
Chuah/DialoGPT-small-harrypotter | 5e6146df772de0ba98a8d72e2f7f9cfb91ccee88 | 2021-09-02T09:26:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Chuah | null | Chuah/DialoGPT-small-harrypotter | 273 | null | transformers | 3,159 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Helsinki-NLP/opus-mt-pl-de | da4656379de70bc21e031a5eb53d25e5cc5f66b3 | 2021-09-10T14:01:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pl-de | 273 | null | transformers | 3,160 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-de
* source languages: pl
* target languages: de
* OPUS readme: [pl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.de | 47.8 | 0.665 |
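## Usage
A minimal usage sketch with the standard MarianMT interface; the sample sentence is a placeholder:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dzień dobry, jak się masz?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```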
|
abhishek/autonlp-imdb_sentiment_classification-31154 | 35088dcf759378af4ccbed093534af88ae17e259 | 2021-05-20T12:46:38.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"transformers",
"autonlp"
] | text-classification | false | abhishek | null | abhishek/autonlp-imdb_sentiment_classification-31154 | 273 | null | transformers | 3,161 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 31154
## Validation Metrics
- Loss: 0.19292379915714264
- Accuracy: 0.9395
- Precision: 0.9569557080474111
- Recall: 0.9204
- AUC: 0.9851040399999998
- F1: 0.9383219492302988
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb_sentiment_classification-31154
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb_sentiment_classification-31154", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
textattack/distilbert-base-uncased-rotten-tomatoes | 2f33048a5a243b84f232a8643520d426e46fbd92 | 2020-07-06T16:36:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-rotten-tomatoes | 273 | null | transformers | 3,162 | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8395872420262664, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
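A minimal sketch for running the fine-tuned classifier; the labels may be reported as generic `LABEL_0`/`LABEL_1` unless the config maps them, so check `model.config.id2label` before interpreting them:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/distilbert-base-uncased-rotten-tomatoes",
)
print(classifier("A gripping, beautifully shot film."))
```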
|
batterydata/batterybert-cased-squad-v1 | f420944bc55b1d1d81112df9ca8609c33578693a | 2022-03-05T13:50:54.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"transformers",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | batterydata | null | batterydata/batterybert-cased-squad-v1 | 273 | null | transformers | 3,163 | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BatteryBERT-cased for QA
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 4
base_LM_model = "batterybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.54,
"f1": 89.16,
```
Evaluated on the battery device dataset.
```
"precision": 70.74,
"recall": 84.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
CenIA/albert-base-spanish | 29e7bc60a782e76b887409a47bd81da7077403dc | 2022-04-28T19:55:01.000Z | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | false | CenIA | null | CenIA/albert-base-spanish | 272 | 2 | transformers | 3,164 | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT Base Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
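A minimal fill-mask sketch, assuming the repository ships a tokenizer usable by the `pipeline` API; the example sentence is a placeholder:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="CenIA/albert-base-spanish")
print(fill_mask("Santiago es la capital de [MASK]."))
```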
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0008838834765
- Batch Size: 960
- Warmup ratio: 0.00625
- Warmup steps: 53333.33333
- Goal steps: 8533333.333
- Total steps: 3650000
- Total training time (approx.): 70.4 days.
## Training loss
 |
Helsinki-NLP/opus-mt-es-it | 435897ff1e4d6f205b8364450c20d17be9434a44 | 2021-09-09T21:43:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-it | 272 | null | transformers | 3,165 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-it
* source languages: es
* target languages: it
* OPUS readme: [es-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.it | 55.9 | 0.751 |
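A minimal translation sketch (not part of the original card), using the standard Marian classes from `transformers`; the example sentences are illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ["La vida es bella.", "¿Dónde está la estación de tren?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in generated])
```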
|
MrDuckerino/DialoGPT-medium-Rick | 5e0ecc2a7b6cd3f7083b7a76d7a4525f839418cf | 2021-09-25T12:11:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MrDuckerino | null | MrDuckerino/DialoGPT-medium-Rick | 272 | null | transformers | 3,166 | ---
tags:
- conversational
---
# Rick DialoGPT model |
castorini/tct_colbert-v2-hnp-msmarco-r2 | 12eead8e8e11e4c0c21b1d7fb1bdc2ada8c29da0 | 2021-08-16T16:13:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/tct_colbert-v2-hnp-msmarco-r2 | 272 | null | transformers | 3,167 | This model reproduces a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper:
> Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-CoupledTeachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_.
Specifically, this checkpoint is fine-tuned for MS MARCO-V2 passage ranking, and we use it as our "trained" model for TREC DL 2021 submissions.
The initial checkpoint is from a previous one [tct_colbert-v2-hnp-msmarco](https://huggingface.co/castorini/tct_colbert-v2-hnp-msmarco) trained on [MS MARCO](https://github.com/microsoft/MSMARCO-Passage-Ranking).
For fine-tuning, we construct our training data for MS MARCO-V2 passage ranking using this [script](https://github.com/castorini/pyserini/blob/master/scripts/msmarco_v2/generate_train_triplet.py).
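The checkpoint itself is a standard BERT encoder exposed for feature extraction; a minimal loading sketch is shown below. The exact query/passage encoding conventions used by TCT-ColBERT (input prefixes and pooling) are implemented in [Pyserini](https://github.com/castorini/pyserini), which should be consulted for faithful reproduction; the mean pooling here is only illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "castorini/tct_colbert-v2-hnp-msmarco-r2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("what is a lithium ion battery", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Illustrative only: average the token embeddings into a single dense vector
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```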
|
lucio/xls-r-uzbek-cv8 | 0741dd70064b314d215e6d91a644eae924df5472 | 2022-03-23T18:25:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"uz",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lucio | null | lucio/xls-r-uzbek-cv8 | 272 | 1 | transformers | 3,168 | ---
language:
- uz
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M Uzbek CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: uz
metrics:
- name: Test WER (with LM)
type: wer
value: 15.065
- name: Test CER (with LM)
type: cer
value: 3.077
- name: Test WER (no LM)
type: wer
value: 32.88
- name: Test CER (no LM)
type: cer
value: 6.53
---
# XLS-R-300M Uzbek CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UZ dataset.
It achieves the following results on the validation set:
- Loss: 0.3063
- Wer: 0.3852
- Cer: 0.0777
## Model description
For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
The model vocabulary consists of the [Modern Latin alphabet for Uzbek](https://en.wikipedia.org/wiki/Uzbek_alphabet), with punctuation removed.
Note that the characters <‘> and <’> do not count as punctuation, as <‘> modifies \<o\> and \<g\>, and <’> indicates the glottal stop or a long vowel.
The decoder uses a kenlm language model built on common_voice text.
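A minimal transcription sketch (not part of the original card) using the `transformers` ASR pipeline; whether the kenlm language model is applied depends on the decoder files shipped with the repository and on having `pyctcdecode`/`kenlm` installed. The audio path is a placeholder for a 16 kHz mono recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lucio/xls-r-uzbek-cv8")
print(asr("audio.wav"))  # "audio.wav" is a placeholder path
```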
## Intended uses & limitations
This model is expected to be of some utility for low-fidelity use cases such as:
- Draft video captions
- Indexing of recorded broadcasts
The model is not reliable enough to use as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of any of the contributors to the Common Voice dataset nor any other speakers.
## Training and evaluation data
50% of the official Common Voice `train` split was used as training data and 50% of the official `dev` split was used as validation data. The full `test` set was used for final evaluation of the model without LM, while the model with LM was evaluated only on 500 examples from the `test` set.
The kenlm language model was compiled from the target sentences of the train + other dataset splits.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.1401 | 3.25 | 500 | 3.1146 | 1.0 | 1.0 |
| 2.7484 | 6.49 | 1000 | 2.2842 | 1.0065 | 0.7069 |
| 1.0899 | 9.74 | 1500 | 0.5414 | 0.6125 | 0.1351 |
| 0.9465 | 12.99 | 2000 | 0.4566 | 0.5635 | 0.1223 |
| 0.8771 | 16.23 | 2500 | 0.4212 | 0.5366 | 0.1161 |
| 0.8346 | 19.48 | 3000 | 0.3994 | 0.5144 | 0.1102 |
| 0.8127 | 22.73 | 3500 | 0.3819 | 0.4944 | 0.1051 |
| 0.7833 | 25.97 | 4000 | 0.3705 | 0.4798 | 0.1011 |
| 0.7603 | 29.22 | 4500 | 0.3661 | 0.4704 | 0.0992 |
| 0.7424 | 32.47 | 5000 | 0.3529 | 0.4577 | 0.0957 |
| 0.7251 | 35.71 | 5500 | 0.3410 | 0.4473 | 0.0928 |
| 0.7106 | 38.96 | 6000 | 0.3401 | 0.4428 | 0.0919 |
| 0.7027 | 42.21 | 6500 | 0.3355 | 0.4353 | 0.0905 |
| 0.6927 | 45.45 | 7000 | 0.3308 | 0.4296 | 0.0885 |
| 0.6828 | 48.7 | 7500 | 0.3246 | 0.4204 | 0.0863 |
| 0.6706 | 51.95 | 8000 | 0.3250 | 0.4233 | 0.0868 |
| 0.6629 | 55.19 | 8500 | 0.3264 | 0.4159 | 0.0849 |
| 0.6556 | 58.44 | 9000 | 0.3213 | 0.4100 | 0.0835 |
| 0.6484 | 61.69 | 9500 | 0.3182 | 0.4124 | 0.0837 |
| 0.6407 | 64.93 | 10000 | 0.3171 | 0.4050 | 0.0825 |
| 0.6375 | 68.18 | 10500 | 0.3150 | 0.4039 | 0.0822 |
| 0.6363 | 71.43 | 11000 | 0.3129 | 0.3991 | 0.0810 |
| 0.6307 | 74.67 | 11500 | 0.3114 | 0.3986 | 0.0807 |
| 0.6232 | 77.92 | 12000 | 0.3103 | 0.3895 | 0.0790 |
| 0.6216 | 81.17 | 12500 | 0.3086 | 0.3891 | 0.0790 |
| 0.6174 | 84.41 | 13000 | 0.3082 | 0.3881 | 0.0785 |
| 0.6196 | 87.66 | 13500 | 0.3059 | 0.3875 | 0.0782 |
| 0.6174 | 90.91 | 14000 | 0.3084 | 0.3862 | 0.0780 |
| 0.6169 | 94.16 | 14500 | 0.3070 | 0.3860 | 0.0779 |
| 0.6166 | 97.4 | 15000 | 0.3066 | 0.3855 | 0.0778 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
smallbenchnlp/roberta-small | d5facd8e51c8b4e91bfec551f9d5966e503ec1ab | 2021-10-05T04:03:28.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smallbenchnlp | null | smallbenchnlp/roberta-small | 272 | 1 | transformers | 3,169 | Small-Bench NLP is a benchmark for small efficient neural language models trained on a single GPU. |
doc2query/msmarco-14langs-mt5-base-v1 | a6945f83a59e1e1c7e56846e9b94343659124838 | 2022-05-02T20:12:45.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"ar",
"zh",
"nl",
"fr",
"de",
"hi",
"in",
"it",
"ja",
"pt",
"ru",
"es",
"vi",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-14langs-mt5-base-v1 | 272 | 3 | transformers | 3,170 | ---
language:
- en
- ar
- zh
- nl
- fr
- de
- hi
- in
- it
- ja
- pt
- ru
- es
- vi
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
license: apache-2.0
---
# doc2query/msmarco-14langs-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It was trained on all 14 languages of [mMARCO dataset](https://github.com/unicamp-dl/mMARCO), i.e. you can input a passage in any of the 14 languages, and it will generate a query in the same language.
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-14langs-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 525k training steps on all 14 languages from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
The training examples were (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
fujuta/DialoGPT-medium-HermioneGrander | f023ae8b21cc2b2419b26c35336dd96cf039bd2d | 2022-05-25T01:54:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | fujuta | null | fujuta/DialoGPT-medium-HermioneGrander | 272 | null | transformers | 3,171 | ---
tags:
- conversational
--- |
Geotrend/bert-base-th-cased | 607c7b08e70c5ec7b2e3e013f394f0743dd39ca3 | 2021-05-18T20:11:25.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"th",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-th-cased | 271 | 1 | transformers | 3,172 | ---
language: th
datasets: wikipedia
license: apache-2.0
---
# bert-base-th-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-th-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-th-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
MAUtastic/DialoGPT-medium-RickandMortyBot | 063f226ac8853c3fbd90cdc45b98c88c2202e041 | 2021-09-27T17:24:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MAUtastic | null | MAUtastic/DialoGPT-medium-RickandMortyBot | 271 | null | transformers | 3,173 | ---
tags:
- conversational
---
# Rick Morty DialoGPT Model |
Ninja5000/DialoGPT-medium-HarryPotter | 26e06380418ba54fec7b56d942afe36710d65a77 | 2022-02-20T14:51:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ninja5000 | null | Ninja5000/DialoGPT-medium-HarryPotter | 271 | null | transformers | 3,174 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
TransQuest/monotransquest-da-en_any | f15603a61d3417f071401c0478f07460d29175dd | 2021-06-03T19:01:53.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-multilingual",
"transformers",
"Quality Estimation",
"monotransquest",
"DA",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-da-en_any | 271 | null | transformers | 3,175 | ---
language: en-multilingual
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_any", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
clip-italian/clip-italian | ec24f9388c45cbc404a12d94038c623425f99a31 | 2021-11-30T08:32:46.000Z | [
"pytorch",
"jax",
"vision-text-dual-encoder",
"feature-extraction",
"it",
"dataset:wit",
"dataset:ctl/conceptualCaptions",
"dataset:mscoco-it",
"arxiv:2103.01913",
"arxiv:2103.00020",
"transformers",
"italian",
"bert",
"vit",
"vision"
] | feature-extraction | false | clip-italian | null | clip-italian/clip-italian | 271 | 8 | transformers | 3,176 | ---
language: it
license:
datasets:
- wit
- ctl/conceptualCaptions
- mscoco-it
tags:
- italian
- bert
- vit
- vision
---
# Italian CLIP
With a few tricks, we have been able to fine-tune a competitive Italian CLIP model with **only 1.4 million** training samples. Our Italian CLIP model is built upon the [Italian BERT](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) model provided by [dbmdz](https://huggingface.co/dbmdz) and the OpenAI [vision transformer](https://huggingface.co/openai/clip-vit-base-patch32).
Do you want to test our model right away? We got you covered! You just need to head to our [demo application](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
The demo also contains all the details of the project, from training tricks to our most impressive results, and much more!
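For programmatic use, here is a minimal sketch (not from the original card): the checkpoint is tagged as a `vision-text-dual-encoder` model, so it should load with the generic `VisionTextDualEncoderModel` class from `transformers`; that the repository also ships a compatible tokenizer is an assumption, and the captions are illustrative.
```python
import torch
from transformers import AutoTokenizer, VisionTextDualEncoderModel

model = VisionTextDualEncoderModel.from_pretrained("clip-italian/clip-italian")
tokenizer = AutoTokenizer.from_pretrained("clip-italian/clip-italian")

# Embed a couple of Italian captions into the shared text-image space
texts = ["una foto di un gatto", "una foto di un cane"]
inputs = tokenizer(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    text_embeds = model.get_text_features(**inputs)
print(text_embeds.shape)
```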
# Training data
We considered four main sources of data:
+ [WIT](https://github.com/google-research-datasets/wit) is an image-caption dataset collected from Wikipedia (see,
[Srinivasan et al., 2021](https://arxiv.org/pdf/2103.01913.pdf)).
+ [MSCOCO-IT](https://github.com/crux82/mscoco-it). This image-caption dataset comes from the work by [Scaiella et al., 2019](http://www.ai-lc.it/IJCoL/v5n2/IJCOL_5_2_3___scaiella_et_al.pdf).
+ [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/). This image-caption dataset comes from
the work by [Sharma et al., 2018](https://aclanthology.org/P18-1238.pdf).
+ [La Foto del Giorno](https://www.ilpost.it/foto-del-giorno/). This image-caption dataset is collected from [Il Post](https://www.ilpost.it/), a prominent Italian online newspaper.
We used better data augmentation, strategic training choices (we have way less data than the original CLIP paper), and backbone-freezing pre-training. For all the details on that, please refer to our [demo](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
# Experiments
## Quantitative Evaluation
To better understand how well our clip-italian model works, we ran an experimental evaluation. Since this is the first CLIP-based model in Italian, we used the multilingual CLIP model as a comparison baseline.
### mCLIP
The multilingual CLIP (henceforth, mCLIP) is a model introduced by [Nils Reimers](https://www.sbert.net/docs/pretrained_models.html) in his
[sentence-transformer](https://www.sbert.net/index.html) library. mCLIP is based on a multilingual encoder
that was created through multilingual knowledge distillation (see [Reimers et al., 2020](https://aclanthology.org/2020.emnlp-main.365/)).
### Tasks
We selected two different tasks:
+ image-retrieval
+ zero-shot classification
### Reproducibility
Both experiments should be very easy to replicate; we share the two Colab notebooks we used to compute the results:
+ [Image Retrieval](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing)
+ [ImageNet Zero Shot Evaluation](https://colab.research.google.com/drive/1zfWeVWY79XXH63Ci-pk8xxx3Vu_RRgW-?usp=sharing)
### Image Retrieval
This experiment is run against the MSCOCO-IT validation set (which we did not use in training). Given a caption as input, we search for the most similar image in the MSCOCO-IT validation set. As evaluation metric we use MRR@K.
| MRR | CLIP-Italian | mCLIP |
| --------------- | ------------ |-------|
| MRR@1 | **0.3797** | 0.2874|
| MRR@5 | **0.5039** | 0.3957|
| MRR@10 | **0.5204** | 0.4129|
It is true that we used MSCOCO-IT in training, and this might give us an advantage. However, the original CLIP model was trained on 400 million images (and some of them were probably from MSCOCO).
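For reference, the MRR@K metric used above can be computed with a small helper like the one below (an illustrative snippet, not taken from the evaluation notebooks):
```python
def mrr_at_k(ranks, k):
    """ranks: 1-based rank of the correct image for each caption (None if not retrieved)."""
    total = 0.0
    for rank in ranks:
        if rank is not None and rank <= k:
            total += 1.0 / rank
    return total / len(ranks)

print(mrr_at_k([1, 3, None, 2], k=5))  # ≈ 0.458
```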
### Zero-shot image classification
This experiment replicates the original one run by OpenAI on zero-shot image classification on ImageNet.
To do this, we used DeepL to translate the ImageNet image labels. We evaluate the models by computing accuracy at different levels.
| Accuracy | CLIP-Italian | mCLIP |
| --------------- | ------------ |-------|
| Accuracy@1 | **22.11** | 20.15 |
| Accuracy@5 | **43.69** | 36.57 |
| Accuracy@10 | **52.55** | 42.91 |
| Accuracy@100 | **81.08** | 67.11 |
Our results confirm that CLIP-Italian is very competitive and beats mCLIP on the two tasks we tested. Note, however, that our results are lower than those reported in the original OpenAI paper (see [Radford et al., 2021](https://arxiv.org/abs/2103.00020)). Since our results are in line with those obtained by mCLIP, we think the translated image labels might have had an impact on the final scores.
# Team members
- Federico Bianchi ([vinid](https://huggingface.co/vinid))
- Raphael Pisoni ([4rtemi5](https://huggingface.co/4rtemi5))
- Giuseppe Attanasio ([g8a9](https://huggingface.co/g8a9))
- Silvia Terragni ([silviatti](https://huggingface.co/silviatti))
- Dario Balestri ([D3Reo](https://huggingface.co/D3Reo))
- Gabriele Sarti ([gsarti](https://huggingface.co/gsarti))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet)) |
minwhoo/bart-base-negative-claim-generation | 06fb04ad75c263b51c57e0fee9312a5570aa6788 | 2021-10-07T04:24:44.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wikifactcheck",
"arxiv:2109.15107",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | minwhoo | null | minwhoo/bart-base-negative-claim-generation | 271 | 2 | transformers | 3,177 | ---
language:
- en
tags:
- text2text-generation
license: mit
datasets:
- wikifactcheck
widget:
- text: "Little Miss Sunshine was filmed over 30 days."
---
# BART base negative claim generation model
This is a BART-based model fine-tuned for negative claim generation. It is used in the data augmentation process described in the paper [CrossAug: A Contrastive Data Augmentation Method for Debiasing Fact Verification Models](https://arxiv.org/abs/2109.15107). The model has been fine-tuned using the parallel and opposing claims from the WikiFactCheck-English dataset.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'minwhoo/bart-base-negative-claim-generation'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.to('cuda' if torch.cuda.is_available() else 'cpu')
examples = [
"Little Miss Sunshine was filmed over 30 days.",
"Magic Johnson did not play for the Lakers.",
"Claire Danes is wedded to an actor from England."
]
batch = tokenizer(examples, max_length=1024, padding=True, truncation=True, return_tensors="pt")
out = model.generate(batch['input_ids'].to(model.device), num_beams=5)
negative_examples = tokenizer.batch_decode(out, skip_special_tokens=True)
print(negative_examples)
# ['Little Miss Sunshine was filmed less than 3 days.', 'Magic Johnson played for the Lakers.', 'Claire Danes is married to an actor from France.']
```
## Citation
```
@inproceedings{lee2021crossaug,
title={CrossAug: A Contrastive Data Augmentation Method for Debiasing Fact Verification Models},
author={Minwoo Lee and Seungpil Won and Juae Kim and Hwanhee Lee and Cheoneum Park and Kyomin Jung},
booktitle={Proceedings of the 30th ACM International Conference on Information & Knowledge Management},
publisher={Association for Computing Machinery},
series={CIKM '21},
year={2021}
}
``` |
sdadas/polish-distilroberta | c37be24a952ac7184b7f1f2e067dbfe75887a7b4 | 2022-02-19T10:29:04.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:lgpl-3.0",
"autotrain_compatible"
] | fill-mask | false | sdadas | null | sdadas/polish-distilroberta | 271 | 1 | transformers | 3,178 | ---
license: lgpl-3.0
---
|
patrickvonplaten/bart-large-fp32 | 1840e28a9a0c1b29836aa215086f2faa14efd6c6 | 2022-04-13T09:00:04.000Z | [
"pytorch",
"jax",
"bart",
"feature-extraction",
"en",
"arxiv:1910.13461",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | patrickvonplaten | null | patrickvonplaten/bart-large-fp32 | 271 | null | transformers | 3,179 | ---
license: apache-2.0
language: en
---
**NOTE: This is the FP32 version of [Facebook's official bart-large](https://huggingface.co/facebook/bart-large).**
# BART (large-sized model)
BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).
Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
webshop/il-choice-bert-image_0 | 3339495679826876c8a1442e20d904d567c0587d | 2022-06-16T01:17:58.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | webshop | null | webshop/il-choice-bert-image_0 | 271 | null | transformers | 3,180 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | cf6e30c62f43b9b2506e5cc4ee34eacda51d8845 | 2021-10-17T11:17:23.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | 270 | 1 | transformers | 3,181 | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID Madar Corpus26 Model
## Model description
**CAMeLBERT-Mix DID Madar Corpus26 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 26](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar26')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.8751305937767029},
{'label': 'DOH', 'score': 0.9867215156555176}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Soapsy/DialoGPT-mid-cartman | c6a54d13ceaddeb098bfd853493cec2610205e54 | 2021-08-29T04:52:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Soapsy | null | Soapsy/DialoGPT-mid-cartman | 270 | null | transformers | 3,182 | ---
tags:
- conversational
---
# Cartman DialoGPT Model |
anweasha/DialoGPT-small-Chandler | cba34e787e6aeba8a4976c967650903784aee277 | 2022-02-12T07:47:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | anweasha | null | anweasha/DialoGPT-small-Chandler | 270 | null | transformers | 3,183 | ---
tags:
- conversational
---
# Chandler DialoGPT Model |
arch0345/pocobot | 1236458e880a4c4851ab3fc42348094ade4d2ea2 | 2021-11-27T07:18:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | arch0345 | null | arch0345/pocobot | 270 | null | transformers | 3,184 | ---
tags:
- conversational
---
# pocobot |
castorini/unicoil-msmarco-passage | a9379ff729899cf1255960e604496c1a638346ce | 2021-07-13T22:28:03.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/unicoil-msmarco-passage | 270 | 1 | transformers | 3,185 | Entry not found |
sampathkethineedi/industry-classification-api | 126bd5e2e52fbca1fbb7f112b2c40679a494ed5a | 2021-05-19T01:29:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"industry tags",
"buisiness description",
"multi-label",
"classification",
"inference"
] | text-classification | false | sampathkethineedi | null | sampathkethineedi/industry-classification-api | 270 | 2 | transformers | 3,186 | ---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
widget:
- text: "3rd Rock Multimedia Limited is an India-based event management company. The Company conducts film promotions, international events, corporate events and cultural events. The Company's entertainment properties include 3rd Rock Fashion Fiesta and 3rd Rock Calendar. The Company's association with various events in Mumbai includes Bryan Adam's Live in Concert, Michael Learns to Rock (MLTR) Eternity Concert, 3rd Rock's Calendar Launch 2011-2012, Airtel I Phone 4 Launch and ISPL Cricket Tournament 2012."
- text: "Stellar Capital Services Limited is an India-based non-banking financial company. The Company is mainly engaged in the business of providing loans and advances and investing in shares, both quoted and unquoted. The Company's segments are trading in share and securities, and advancing of loans. The trading in share and securities segment includes trading in quoted equity shares, mutual funds, bonds, futures and options, and currency. The Company's financial services include inter corporate deposits, financial consultancy, retail initial public offering (IPO) funding, loan against property, management consultancy, personal loans and unsecured loans."
- text: "Chemcrux Enterprises Ltd is a manufacturer of intermediates for bulk drugs, and dyes and pigments. The Company's products include 2 Chloro Benzoic Acid; 3 Chloro Benzoic Acid; 4 Chloro Benzoic Acid; 4 Nitro Benzoic Acid; 2,4 Dichloro Benzoic Acid; 4 Chloro 3 Nitro Benzoic Acid; 2 Chloro 5 Nitro Benzoic Acid; Meta Nitro Benzoic Acid; Lassamide, and Meta Chloro Per Benzoic Acid. The Company also offers various products on custom requirements, including Aceturic Acid; Meta Chloro Benzoyl Chloride; 3-Nitro-4-Methoxy Benzoic Acid; 2 Amino 5 Sulfonamide Benzoic Acid; 3,4 Dichloro Benzoic Acid; 5-Nitro Salycylic Acid, and 4-Chloro Benzoic Acid -3-Sulfonamide. The Company's plant has a capacity of 120 metric tons per month. The Company exports to Europe, Japan, the Middle East and East Africa. It is engaged in development and execution of various processes, such as High Pressure Oxidation, Nitration and Chloro Sulfonation."
tags:
- bert
- pytorch
- text-classification
- industry tags
- buisiness description
- multi-label
- classification
- inference
license: "mit"
---
# industry-classification-api
## Model description
BERT Model to classify a business description into one of **62 industry tags**.
Trained on 7000 samples of Business Descriptions and associated labels of companies in India.
## How to use
PyTorch only
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification")
model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification")
industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.")
'''Output'''
[{'label': 'Consumer Finance', 'score': 0.9841355681419373}]
```
## Limitations and bias
Training data is only for Indian companies
|
Leviii03/Dialogpt-small-Jake99 | 273d5668183996e54d34638940ea1c19655044a0 | 2021-08-31T11:13:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Leviii03 | null | Leviii03/Dialogpt-small-Jake99 | 269 | null | transformers | 3,187 | ---
tags:
- conversational
---
# Jake99 DialoGPT model |
patrickvonplaten/longformer2roberta-cnn_dailymail-fp16 | ac2a9257b077a75616e2be14912709e7901f5e7b | 2020-12-11T21:59:19.000Z | [
"pytorch",
"encoder_decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | patrickvonplaten | null | patrickvonplaten/longformer2roberta-cnn_dailymail-fp16 | 269 | 3 | transformers | 3,188 | # Longformer2Roberta Summarization with 🤗 EncoderDecoder Framework
This model is a Longformer2Roberta model fine-tuned on summarization.
Longformer2Roberta is an `EncoderDecoderModel`, meaning that the encoder is an `allenai/longformer-base-4096` model and the decoder is a `roberta-base` model. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:
```python
longformer2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
```
The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, `longformer2roberta` is fine-tuned on the `CNN/Daily Mail` dataset and the resulting model
`longformer2roberta-cnn_dailymail-fp16` is uploaded here.
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.
The model can be used as follows:
```python
from transformers import LongformerTokenizer, EncoderDecoderModel
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. 
Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. 
Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# James Holmes, 27, is accused of opening fire on a Colorado theater.
# He was a doctoral student at University of Colorado.
# Holmes says he was suffering "a psychotic episode" at the time of the shooting.
# Prosecutors won't say whether Holmes was barred from campus.
```
Such an article has a length of > 2000 tokens, which means that it cannot be handled correctly by BERT or RoBERTa encoders.
## Training script:
**IMPORTANT**: In order for this code to work, make sure you check out the branch
[more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts
the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840.
The following code shows the complete training script that was used to fine-tune `longformer2roberta-cnn_dailymail-fp16` for reproducibility. The training lasted ~90h on a standard GPU.
```python
#!/usr/bin/env python3
import nlp
import logging
from transformers import LongformerTokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
# load train and validation data
train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")
# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=0)
# enable gradient checkpointing for longformer encoder
model.encoder.config.gradient_checkpointing = True
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4
encoder_length = 2048
decoder_length = 128
batch_size = 16
# map data correctly
def map_to_encoder_decoder_inputs(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at Longformer encoder length of 2048
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length)
# force summarization <= 128
outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
# set 128 tokens to global attention
batch["global_attention_mask"] = [[1 if i < 128 else 0 for i in range(sequence_length)] for sequence_length in len(inputs.input_ids) * [encoder_length]]
batch["decoder_input_ids"] = outputs.input_ids
batch["labels"] = outputs.input_ids.copy()
# mask loss for padding
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
]
batch["decoder_attention_mask"] = outputs.attention_mask
assert all([len(x) == encoder_length for x in inputs.input_ids])
assert all([len(x) == decoder_length for x in outputs.input_ids])
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
# all unnecessary tokens are removed
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.eos_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
# make train dataset ready
train_dataset = train_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
train_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# same for validation dataset
val_dataset = val_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
type="torch", columns=["input_ids", "global_attention_mask", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
output_dir="./",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_from_generate=True,
evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=1000,
save_steps=1000,
eval_steps=1000,
overwrite_output_dir=True,
warmup_steps=2000,
save_total_limit=3,
fp16=True,
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
# start training
trainer.train()
```
## Evaluation
The following script evaluates the model on the test set of
CNN/Daily Mail.
```python
#!/usr/bin/env python3
import nlp
import torch
from transformers import LongformerTokenizer, EncoderDecoderModel
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
model.to("cuda")
test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test")
batch_size = 32
encoder_length = 2048
decoder_length = 128
# map data correctly
def generate_summary(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at the Longformer encoder length of 2048
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length, return_tensors="pt")
input_ids = inputs.input_ids.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
global_attention_mask = torch.zeros_like(attention_mask)
global_attention_mask[:, :decoder_length] = 1
outputs = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
# all special tokens will be removed
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
batch["pred"] = output_str
return batch
results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])
# load rouge for validation
rouge = nlp.load_metric("rouge")
pred_str = results["pred"]
label_str = results["highlights"]
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
print(rouge_output)
```
The obtained results should be:
| - | Rouge2 - mid - precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 12.39 | 15.05 | **13.21** |
**Note** This model was trained to show how Longformer can be used as an encoder model in an EncoderDecoder setup.
Better results are obtained for datasets with much longer inputs.
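For a quick single-example test, here is a minimal inference sketch mirroring the evaluation script above (the article string is a placeholder):
```python
import torch
from transformers import LongformerTokenizer, EncoderDecoderModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")

article = "..."  # any long news article (placeholder)

inputs = tokenizer(article, padding="max_length", truncation=True, max_length=2048, return_tensors="pt")
# global attention on the first 128 tokens, local attention everywhere else
global_attention_mask = torch.zeros_like(inputs.attention_mask)
global_attention_mask[:, :128] = 1

output_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    global_attention_mask=global_attention_mask,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```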
|
textattack/roberta-base-imdb | 006b34eb086e2f79774853f5f3d2a33ff86e04fa | 2021-05-20T22:16:19.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/roberta-base-imdb | 269 | null | transformers | 3,189 | ## TextAttack Model Card
This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the imdb dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.91436, as measured by the
eval set accuracy, found after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
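A minimal inference sketch (the label names `LABEL_0`/`LABEL_1` below are the library defaults; their mapping to negative/positive is not documented in this card):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("textattack/roberta-base-imdb")
model = AutoModelForSequenceClassification.from_pretrained("textattack/roberta-base-imdb")

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("A gripping film with terrific performances."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}] -- check the label mapping before relying on it
```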
|
pollner/dnabertregressor | 9fc0276930efe130e85b12f58a67e7d9f93d0b04 | 2022-07-07T08:17:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | pollner | null | pollner/dnabertregressor | 269 | null | transformers | 3,190 | ---
tags:
- generated_from_trainer
model-index:
- name: dnabertregressor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dnabertregressor
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1368
- Mae: 0.0812
- R2: 0.7815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 402 | 0.2486 | 0.1346 | 0.4418 |
| 0.3662 | 2.0 | 804 | 0.1716 | 0.0969 | 0.7091 |
| 0.1746 | 3.0 | 1206 | 0.1509 | 0.0884 | 0.7573 |
| 0.1305 | 4.0 | 1608 | 0.1443 | 0.0850 | 0.7752 |
| 0.1088 | 5.0 | 2010 | 0.1403 | 0.0830 | 0.7740 |
| 0.1088 | 6.0 | 2412 | 0.1368 | 0.0812 | 0.7815 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BW/TEST | e380b220ceb1b4e93639abd969cbaf9dc3446564 | 2021-09-02T21:47:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BW | null | BW/TEST | 268 | null | transformers | 3,191 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
deepset/bert-medium-squad2-distilled | 39f285470fb8db38d2528fa514188955d3169493 | 2022-07-26T08:32:02.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"exbert",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | deepset | null | deepset/bert-medium-squad2-distilled | 268 | 1 | transformers | 3,192 | ---
language: en
datasets:
- squad_v2
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
model-index:
- name: deepset/bert-medium-squad2-distilled
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 69.8231
verified: true
- name: F1
type: f1
value: 72.9232
verified: true
---
## Overview
**Language model:** deepset/bert-medium-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021
## Details
- haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.
## Hyperparameters
```
batch_size = 6
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 5
distillation_loss_weight = 1
```
## Performance
```
"exact": 68.6431398972458
"f1": 72.7637083790805
```
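A minimal usage sketch with the 🤗 Transformers question-answering pipeline (the question/context pair below is made up):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/bert-medium-squad2-distilled",
    tokenizer="deepset/bert-medium-squad2-distilled",
)

result = qa(
    question="Why is model distillation useful?",
    context="Distillation transfers knowledge from a large teacher model to a smaller, faster student model.",
)
print(result)  # answer span, confidence score and character offsets
```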
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
rhollings/DialoGPT_small_steverogers | 3b76da6bab7a8240d51fdfe5905f095c0cfa9355 | 2022-02-10T14:45:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rhollings | null | rhollings/DialoGPT_small_steverogers | 268 | null | transformers | 3,193 | ---
tags:
- conversational
---
# Cpt Rogers DialoGPT Model |
DemocracyStudio/generate_nft_content | 72c626ede502840d0d24354f4f9539281e19e923 | 2022-06-15T19:40:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | DemocracyStudio | null | DemocracyStudio/generate_nft_content | 268 | null | transformers | 3,194 | # Controllable text generation for the marketing content of NFTs
This repository contains all the information, code and datasets of the "Controllable text generation for the marketing content of NFTs" Transformers model, which started as a group project in the Machine Learning degree program at [opencampus.sh](https://opencampus.sh).
You can either clone this repository and run the app.py file locally, or use the app directly in your browser from the [dedicated huggingface space](https://huggingface.co/spaces/DemocracyStudio/generate_nft_content/). The first release is from June 15th, 2022; further improvements are expected to come.
### Project Description:
While text generation (or natural language generation) refers to computer-generated texts of human-written quality, controllable text generation aims to constrain the generated text by incorporating some pre-specified keywords as manual input.
Since the value of NFTs relies heavily on community engagement and the capacity to partner with influencers, marketing NFTs demands a high production capacity of easily customizable turnkey articles, which makes computer-generated marketing content especially appealing.
The pitch deck of the project is available [here](https://docs.google.com/presentation/d/1G58GxLDBLTdoXnAwbt_2afVEsp8g25vSVbgN1FHDUmM/edit?usp=sharing).
### Datasets:
[Medium.com](https://medium.com/) is undoubtedly a major media platform for content marketing. I used Selenium to collect about 4,000 human-written texts matching the queries #Nft, #Nftart, #Nftartist, #Nft Collectibles, #Nft Marketplace, and #opensea. The resulting cleaned dataset is available in the dataset folder. It has been stripped of URLs and digits and filtered to remove texts with negative or neutral sentiment, so that the model only generates enthusiastic content about NFTs.
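A rough sketch of the kind of cleaning described above (regex removal of URLs and digits plus a sentiment filter; the exact pipeline used for the published dataset is not part of this card):
```python
import re
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

def clean(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)  # strip urls
    text = re.sub(r"\d+", "", text)           # strip digits
    return text.strip()

texts = ["Minting my first NFT at https://example.com was an amazing experience!"]  # placeholder
cleaned = [clean(t) for t in texts]
# keep only texts classified as positive (crude character truncation as a length guard)
positive = [t for t in cleaned if sentiment(t[:512])[0]["label"] == "POSITIVE"]
print(positive)
```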
### Literature:
2021
- [Exploring Transformers in Natural Language Generation: GPT, BERT, and XLNet](https://paperswithcode.com/paper/exploring-transformers-in-natural-language)
- [Parallel Refinements for Lexically Constrained Text Generation with BART](https://paperswithcode.com/paper/parallel-refinements-for-lexically)
- [BARTSCORE: Evaluating Generated Text as Text Generation](https://paperswithcode.com/paper/bartscore-evaluating-generated-text-as-text)
- [Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation](https://paperswithcode.com/paper/neural-rule-execution-tracking-machine-for)
2020
- [The survey: Text generation models in deep learning](https://www.researchgate.net/publication/340622598_The_Survey_Text_Generation_Models_in_Deep_Learning)
- [Modern methods for text generation](https://paperswithcode.com/paper/modern-methods-for-text-generation)
- [PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation](https://paperswithcode.com/paper/pair-planning-and-iterative-refinement-in-pre)
A video recording of the literature review is available [here](https://youtu.be/ffOX3D_dMYY).
|
cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token | ca29945177d43b09e9fb6cdf5f2ac74dc9ab2625 | 2021-05-24T09:59:29.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"arxiv:2010.11784",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token | 267 | null | transformers | 3,195 | ---
language: en
tags:
- biomedical
- lexical-semantics
datasets:
- UMLS
---

**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### SapBERT-PubMedBERT
SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model. Please use the mean-pooling of the output as the representation.
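A minimal sketch of extracting mean-pooled entity representations (masked mean over the token embeddings; not an official snippet from the authors):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token")

names = ["covid-19", "coronavirus infection"]
inputs = tokenizer(names, padding=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# mean-pool over real tokens only, using the attention mask
mask = inputs.attention_mask.unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, 768)
```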
### Citation
```bibtex
@inproceedings{liu-etal-2021-self,
title = "Self-Alignment Pretraining for Biomedical Entity Representations",
author = "Liu, Fangyu and
Shareghi, Ehsan and
Meng, Zaiqiao and
Basaldella, Marco and
Collier, Nigel",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
pages = "4228--4238",
abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```
|
ehdwns1516/bert-base-uncased_SWAG | 41f1703c5e32cea8ad48180e4520184daa9677a7 | 2021-08-05T09:49:18.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | ehdwns1516 | null | ehdwns1516/bert-base-uncased_SWAG | 267 | null | transformers | 3,196 | # ehdwns1516/bert-base-uncased_SWAG
* This model has been fine-tuned on the [SWAG dataset](https://huggingface.co/datasets/swag).
* Sentence Inference Multiple Choice DEMO: [Ainize DEMO](https://main-sentence-inference-multiple-choice-ehdwns1516.endpoint.ainize.ai/)
* Sentence Inference Multiple Choice API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/sentence_inference_multiple_choice)
## Overview
Language model: [bert-base-uncased](https://huggingface.co/bert-base-uncased)
Language: English
Training data: [SWAG dataset](https://huggingface.co/datasets/swag)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/Multiple_choice_SWAG_finetunning)
## Usage
## In Transformers
```
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/bert-base-uncased_SWAG")
model = AutoModelForMultipleChoice.from_pretrained("ehdwns1516/bert-base-uncased_SWAG")

def run_model(candidates_count, context: str, candidates: list[str]):
    assert len(candidates) == candidates_count, "you need " + str(candidates_count) + " candidates"
    choices_inputs = []
    for c in candidates:
        text_a = ""  # empty context
        text_b = context + " " + c
        inputs = tokenizer(
            text_a,
            text_b,
            add_special_tokens=True,
            max_length=128,
            padding="max_length",
            truncation=True,
            return_overflowing_tokens=True,
        )
        choices_inputs.append(inputs)

    # shape: (1, num_choices, seq_len) -- the model expects a batch dimension
    input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]).unsqueeze(0)
    output = model(input_ids=input_ids)

    return {"result": candidates[torch.argmax(output.logits).item()]}

items = list()
count = 4  # number of candidate sentences
context = "your context"

for i in range(int(count)):
    items.append("sentence")

result = run_model(count, context, items)
```
|
larskjeldgaard/senda | 8de8d068b2cae362622ac8b6b563fa9c585c5520 | 2021-05-19T21:20:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"da",
"transformers",
"danish",
"sentiment",
"polarity",
"license:cc-by-4.0"
] | text-classification | false | larskjeldgaard | null | larskjeldgaard/senda | 267 | null | transformers | 3,197 | ---
language: da
tags:
- danish
- bert
- sentiment
- polarity
license: cc-by-4.0
widget:
- text: "Sikke en dejlig dag det er i dag"
---
# Danish BERT fine-tuned for Sentiment Analysis (Polarity)
This model detects the polarity ('positive', 'neutral', 'negative') of Danish texts.
It is trained and tested on Tweets annotated by [Alexandra Institute](https://github.com/alexandrainst).
Here is an example on how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("larskjeldgaard/senda")
model = AutoModelForSequenceClassification.from_pretrained("larskjeldgaard/senda")
# create 'senda' sentiment analysis pipeline
senda_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
senda_pipeline("Sikke en dejlig dag det er i dag")
```
|
samitizerxu/wav2vec2-xls-r-300m-zh-CN | 5b34f048b023c51f9042f634269e0b799c0d8785 | 2022-03-23T18:26:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh-CN",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"zh",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | samitizerxu | null | samitizerxu/wav2vec2-xls-r-300m-zh-CN | 267 | 1 | transformers | 3,198 | ---
language:
- zh-CN
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- zh
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-zh-CN
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: zh-CN
metrics:
- name: Test WER
type: wer
value: 80
- name: Test CER
type: cer
value: 40.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-CN
metrics:
- name: Test CER
type: cer
value: 69.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-CN
metrics:
- name: Test CER
type: cer
value: 43.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-zh-CN
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-CN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8828
- Wer: 2.0604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 60.2112 | 0.74 | 500 | 64.8189 | 1.0 |
| 8.1128 | 1.48 | 1000 | 6.8997 | 1.0 |
| 6.0492 | 2.22 | 1500 | 5.9677 | 1.9495 |
| 5.9326 | 2.95 | 2000 | 5.8845 | 1.4092 |
| 5.8763 | 3.69 | 2500 | 5.8460 | 1.6126 |
| 5.7888 | 4.43 | 3000 | 5.7545 | 2.2034 |
| 5.735 | 5.17 | 3500 | 5.6777 | 2.3350 |
| 5.6861 | 5.91 | 4000 | 5.5179 | 2.2232 |
| 5.381 | 6.65 | 4500 | 5.1420 | 2.1816 |
| 4.625 | 7.39 | 5000 | 3.9020 | 2.0722 |
| 4.214 | 8.12 | 5500 | 3.3394 | 2.1430 |
| 3.8992 | 8.86 | 6000 | 2.9085 | 2.1534 |
| 3.6481 | 9.6 | 6500 | 2.6208 | 2.3538 |
| 3.4658 | 10.34 | 7000 | 2.3172 | 2.2271 |
| 3.257 | 11.08 | 7500 | 2.0916 | 2.1351 |
| 3.1294 | 11.82 | 8000 | 1.8954 | 2.2133 |
| 3.0266 | 12.56 | 8500 | 1.7673 | 2.0896 |
| 2.9451 | 13.29 | 9000 | 1.6659 | 2.1381 |
| 2.8802 | 14.03 | 9500 | 1.5637 | 2.1969 |
| 2.78 | 14.77 | 10000 | 1.4921 | 2.2335 |
| 2.7049 | 15.51 | 10500 | 1.4132 | 2.2217 |
| 2.6768 | 16.25 | 11000 | 1.3667 | 2.2232 |
| 2.6358 | 16.99 | 11500 | 1.3111 | 2.1286 |
| 2.5802 | 17.72 | 12000 | 1.2679 | 2.1430 |
| 2.5012 | 18.46 | 12500 | 1.2365 | 2.1153 |
| 2.458 | 19.2 | 13000 | 1.2118 | 2.1573 |
| 2.4433 | 19.94 | 13500 | 1.1992 | 2.1336 |
| 2.438 | 20.68 | 14000 | 1.1803 | 2.1509 |
| 2.418 | 21.42 | 14500 | 1.1601 | 2.1232 |
| 2.3322 | 22.16 | 15000 | 1.1418 | 2.1930 |
| 2.3387 | 22.89 | 15500 | 1.1172 | 2.2464 |
| 2.3349 | 23.63 | 16000 | 1.1144 | 2.1856 |
| 2.291 | 24.37 | 16500 | 1.1018 | 2.1930 |
| 2.2766 | 25.11 | 17000 | 1.0883 | 2.1762 |
| 2.2534 | 25.85 | 17500 | 1.0744 | 2.1875 |
| 2.2393 | 26.59 | 18000 | 1.0561 | 2.1846 |
| 2.2085 | 27.33 | 18500 | 1.0466 | 2.1445 |
| 2.1966 | 28.06 | 19000 | 1.0382 | 2.1089 |
| 2.1794 | 28.8 | 19500 | 1.0264 | 1.9861 |
| 2.1423 | 29.54 | 20000 | 1.0246 | 1.9678 |
| 2.1649 | 30.28 | 20500 | 0.9982 | 2.0005 |
| 2.143 | 31.02 | 21000 | 0.9985 | 2.0450 |
| 2.1338 | 31.76 | 21500 | 0.9932 | 2.0025 |
| 2.1076 | 32.5 | 22000 | 0.9903 | 2.0505 |
| 2.0519 | 33.23 | 22500 | 0.9834 | 2.0737 |
| 2.0534 | 33.97 | 23000 | 0.9756 | 2.0247 |
| 2.0121 | 34.71 | 23500 | 0.9688 | 2.1440 |
| 2.0161 | 35.45 | 24000 | 0.9582 | 2.1232 |
| 2.0178 | 36.19 | 24500 | 0.9480 | 2.0896 |
| 2.0154 | 36.93 | 25000 | 0.9483 | 2.0787 |
| 1.9966 | 37.67 | 25500 | 0.9406 | 2.0297 |
| 1.9753 | 38.4 | 26000 | 0.9419 | 2.0346 |
| 1.9524 | 39.14 | 26500 | 0.9274 | 2.0698 |
| 1.9427 | 39.88 | 27000 | 0.9233 | 2.0787 |
| 1.9258 | 40.62 | 27500 | 0.9182 | 2.0529 |
| 1.9031 | 41.36 | 28000 | 0.9150 | 2.0787 |
| 1.9297 | 42.1 | 28500 | 0.9040 | 2.0505 |
| 1.9041 | 42.84 | 29000 | 0.9009 | 2.0579 |
| 1.8929 | 43.57 | 29500 | 0.8968 | 2.0327 |
| 1.9077 | 44.31 | 30000 | 0.8954 | 2.0619 |
| 1.8504 | 45.05 | 30500 | 0.8922 | 2.0737 |
| 1.8732 | 45.79 | 31000 | 0.8898 | 2.0683 |
| 1.877 | 46.53 | 31500 | 0.8849 | 2.0589 |
| 1.8587 | 47.27 | 32000 | 0.8843 | 2.0450 |
| 1.8236 | 48.01 | 32500 | 0.8810 | 2.0554 |
| 1.8392 | 48.74 | 33000 | 0.8820 | 2.0574 |
| 1.8428 | 49.48 | 33500 | 0.8816 | 2.0668 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-zh-CN --dataset mozilla-foundation/common_voice_7_0 --config zh-CN --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-zh-CN --dataset speech-recognition-community-v2/dev_data --config zh-CN --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` |
tscholak/2jrayxos | cabd69ed524275a74d36044ddc49f2aca55aaabe | 2022-01-10T21:50:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:cosql",
"dataset:spider",
"arxiv:2109.05093",
"transformers",
"text2sql",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | tscholak | null | tscholak/2jrayxos | 267 | null | transformers | 3,199 | ---
language:
- en
thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab"
tags:
- text2sql
widget:
- "And the concert named Auditions? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : sing er_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name ( Super bootcamp, Auditions ), theme, stadium_id, year | singer_in_concert : concert_id, singer_id || Which year did the concert Super bootcamp happen in? | Find the name and location of the stadiums which some concerts happened in the years of both 2014 and 2015."
- "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id"
license: "apache-2.0"
datasets:
- cosql
- spider
metrics:
- cosql
---
## tscholak/2jrayxos
Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [t5.1.1.lm100k.large](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k).
### Training Data
The model has been fine-tuned on the 2,164 training dialogues in the [CoSQL SQL-grounded dialogue state tracking dataset](https://yale-lily.github.io/cosql) and the 7,000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves both CoSQL's zero-shot text-to-SQL dialogue state tracking task and Spider's zero-shot text-to-SQL translation task. Zero-shot means that the model can generalize to unseen SQL databases.
### Training Objective
This model was initialized with [t5.1.1.lm100k.large](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) and fine-tuned with the text-to-text generation objective.
A question is always grounded in both a database schema and the preceding questions in the dialogue. The model is trained to predict the SQL query that would be used to answer the user's current natural language question. The input to the model is composed of the user's current question, the database identifier, a list of tables and their columns, and a sequence of previous questions in reverse chronological order.
```
[current question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... || [previous question] | ... | [first question]
```
The sequence of previous questions is separated by `||` from the linearized schema. In the absence of previous questions (for example, for the first question in a dialogue or for Spider questions), this separator is omitted.
The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's current question in the dialog.
```
[db_id] | [sql]
```
### Performance
Out of the box, this model achieves 52.5 % question match accuracy on the CoSQL development set.
Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **54.2 %** question match accuracy on the CoSQL development set.
### Usage
Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model.
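For a quick, unconstrained test without PICARD, here is a rough sketch (the input string follows the serialization format above; the schema and question are taken from the widget examples):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tscholak/2jrayxos")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/2jrayxos")

question = "How many singers do we have?"
schema = "concert_singer | singer : singer_id, name, country, song_name, song_release_year, age, is_male"
input_ids = tokenizer(f"{question} | {schema}", return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# expected form: "concert_singer | select count(*) from singer"
```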
### References
1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093)
2. [Official PICARD code](https://github.com/ElementAI/picard)
### Citation
```bibtex
@inproceedings{Scholak2021:PICARD,
author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau},
title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.779",
pages = "9895--9901",
}
``` |