| modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38, ⌀) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, ⌀) | likes (float64, 0-712, ⌀) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
uw-madison/nystromformer-512 | 405ccb83538dd58b90104d20af1a08d901c103cd | 2022-01-11T14:13:39.000Z | [
"pytorch",
"nystromformer",
"fill-mask",
"arxiv:2102.03902",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uw-madison | null | uw-madison/nystromformer-512 | 418 | null | transformers | 2,500 | # Nyströmformer
Nyströmformer model for masked language modeling (MLM) pretrained on BookCorpus and English Wikipedia for sequence length 512.
## About Nyströmformer
The Nyströmformer model was proposed in [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences — a topic being actively studied in the community. To address this limitation, we propose Nyströmformer — a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL.
## Usage
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uw-madison/nystromformer-512')
>>> unmasker("Paris is the [MASK] of France.")
[{'score': 0.829957902431488,
'token': 1030,
'token_str': 'capital',
'sequence': 'paris is the capital of france.'},
{'score': 0.022157637402415276,
'token': 16081,
'token_str': 'birthplace',
'sequence': 'paris is the birthplace of france.'},
{'score': 0.01904447190463543,
'token': 197,
'token_str': 'name',
'sequence': 'paris is the name of france.'},
{'score': 0.017583081498742104,
'token': 1107,
'token_str': 'kingdom',
'sequence': 'paris is the kingdom of france.'},
{'score': 0.005948934704065323,
'token': 148,
'token_str': 'city',
'sequence': 'paris is the city of france.'}]
``` |
valurank/distilroberta-hatespeech | b15ea666a2455171fddf83640a469d74c73cb2fe | 2022-06-08T20:30:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-hatespeech | 418 | null | transformers | 2,501 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-hatespeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-hatespeech
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3619
- Acc: 0.8423
## Model description
More information needed
## Intended uses & limitations
More information needed
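Pending the author's documentation, here is a minimal inference sketch (not part of the auto-generated card); it assumes the standard `transformers` text-classification pipeline, and the label strings returned depend on this checkpoint's `id2label` config:

```python
from transformers import pipeline

# Hypothetical usage sketch; the exact label names come from the model's config.
classifier = pipeline("text-classification", model="valurank/distilroberta-hatespeech")
print(classifier("I hope you have a wonderful day."))
```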
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3096 | 1.0 | 4021 | 0.3375 | 0.8540 |
| 0.3711 | 2.0 | 8042 | 0.3305 | 0.8574 |
| 0.322 | 3.0 | 12063 | 0.3398 | 0.8534 |
| 0.3197 | 4.0 | 16084 | 0.3444 | 0.8504 |
| 0.3332 | 5.0 | 20105 | 0.3619 | 0.8423 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ccdv/lsg-bart-base-4096-multinews | 976609dd1abc5f229a55938023010466025b43ff | 2022-07-25T05:31:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:multi_news",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-4096-multinews | 418 | null | transformers | 2,502 | ---
language:
- en
tags:
- summarization
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-multinews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file; you need to add `trust_remote_code=True`**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-multinews", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-multinews", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-multinews
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [multi_news default](https://huggingface.co/datasets/multi_news) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 47.10 | 18.94 | 25.22 | 43.13 |
| 4096 | Local | 128 | 0 | 384 | 46.73 | 18.79 | 25.13 | 42.76 |
| 4096 | Pooling | 128 | 4 | 644 | 46.83 | 18.87 | 25.23 | 42.86 |
| 4096 | Stride | 128 | 4 | 644 | 46.83 | 18.68 | 24.98 | 42.88 |
| 4096 | Block Stride | 128 | 4 | 644 | 46.83 | 18.72 | 25.06 | 42.88 |
| 4096 | Norm | 128 | 4 | 644 | 46.74 | 18.60 | 24.93 | 42.79 |
| 4096 | LSH | 128 | 4 | 644 | 46.74 | 18.82 | 25.19 | 42.77 |
With a smaller block size (lower resources):
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 45.61 | 17.91 | 24.54 | 41.65 |
| 4096 | Local | 32 | 0 | 96 | 43.50 | 16.36 | 23.45 | 39.61 |
| 4096 | Pooling | 32 | 4 | 160 | 44.77 | 17.31 | 24.16 | 40.86 |
| 4096 | Stride | 32 | 4 | 160 | 45.29 | 17.81 | 24.45 | 41.40 |
| 4096 | Block Stride | 32 | 4 | 160 | 45.39 | 17.86 | 24.51 | 41.43 |
| 4096 | Norm | 32 | 4 | 160 | 44.65 | 17.25 | 24.09 | 40.76 |
| 4096 | LSH | 32 | 4 | 160 | 44.44 | 17.20 | 24.00 | 40.57 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only), and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12.0
### Generate hyperparameters
The following hyperparameters were used during generation (see the sketch after this list for how they map onto a `generate()` call):
- dataset_name: multi_news
- dataset_config_name: default
- eval_batch_size: 8
- eval_samples: 5622
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 320
- min_length: 32
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
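These settings map onto a `generate()` call roughly as follows (not from the original card); the sketch reuses the `model`, `tokenizer` and `text` variables from the snippet at the top of the card, and the input length cap is an assumption:

```python
# Sketch only: the evaluation-time generation hyperparameters expressed as a generate() call.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(
    **inputs,
    num_beams=5,
    max_length=320,
    min_length=32,
    length_penalty=2.0,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```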
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
mrm8488/bert-multi-cased-finedtuned-xquad-tydiqa-goldp | 759a6ee94fec457ac24857f387ec986bafe7e79e | 2021-05-20T00:27:53.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"multilingual",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/bert-multi-cased-finedtuned-xquad-tydiqa-goldp | 417 | 3 | transformers | 2,503 | ---
language: multilingual
thumbnail:
---
# A fine-tuned model on GoldP task from Tydi QA dataset
This model is based on [bert-multi-cased-finetuned-xquadv1](https://huggingface.co/mrm8488/bert-multi-cased-finetuned-xquadv1) and fine-tuned on the [Tydi QA](https://github.com/google-research-datasets/tydiqa) dataset for the Gold Passage task [(GoldP)](https://github.com/google-research-datasets/tydiqa#the-tasks).
## Details of the language model
The base language model [(bert-multi-cased-finetuned-xquadv1)](https://huggingface.co/mrm8488/bert-multi-cased-finetuned-xquadv1) is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) for the **Q&A** downstream task
## Details of the Tydi QA dataset
TyDi QA contains 200k human-annotated question-answer pairs in 11 Typologically Diverse languages, written without seeing the answer and without the use of translation, and is designed for the **training and evaluation** of automatic question answering systems. This repository provides evaluation code and a baseline system for the dataset. https://ai.google.com/research/tydiqa
## Details of the downstream task (Gold Passage or GoldP aka the secondary task)
Given a passage that is guaranteed to contain the answer, predict the single contiguous span of characters that answers the question. The gold passage task differs from the [primary task](https://github.com/google-research-datasets/tydiqa/blob/master/README.md#the-tasks) in several ways:
* only the gold answer passage is provided rather than the entire Wikipedia article;
* unanswerable questions have been discarded, similar to MLQA and XQuAD;
* we evaluate with the SQuAD 1.1 metrics like XQuAD; and
* Thai and Japanese are removed since the lack of whitespace breaks some tools.
## Model training
The model was fine-tuned on a Tesla P100 GPU and 25GB of RAM.
The script is the following:
```python
python run_squad.py \
--model_type bert \
--model_name_or_path mrm8488/bert-multi-cased-finetuned-xquadv1 \
--do_train \
--do_eval \
--train_file /content/dataset/train.json \
--predict_file /content/dataset/dev.json \
--per_gpu_train_batch_size 24 \
--per_gpu_eval_batch_size 24 \
--learning_rate 3e-5 \
--num_train_epochs 2.5 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--overwrite_output_dir \
--save_steps 5000 \
--threads 40
```
## Global Results (dev set):
| Metric | # Value |
| --------- | ----------- |
| **Exact** | **71.06** |
| **F1** | **82.16** |
## Specific Results (per language):
| Language | # Samples | # Exact | # F1 |
| --------- | ----------- |--------| ------ |
| Arabic | 1314 | 73.29 | 84.72 |
| Bengali | 180 | 64.60 | 77.84 |
| English | 654 | 72.12 | 82.24 |
| Finnish | 1031 | 70.14 | 80.36 |
| Indonesian| 773 | 77.25 | 86.36 |
| Korean | 414 | 68.92 | 70.95 |
| Russian | 1079 | 62.65 | 78.55 |
| Swahili | 596 | 80.11 | 86.18 |
| Telugu | 874 | 71.00 | 84.24 |
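A minimal inference sketch (not part of the original card), using the `question-answering` pipeline; the question and context are purely illustrative:

```python
from transformers import pipeline

# Hypothetical example; the model is multilingual, so non-English questions/contexts also work.
qa = pipeline(
    "question-answering",
    model="mrm8488/bert-multi-cased-finedtuned-xquad-tydiqa-goldp",
)
result = qa(
    question="Who proposed the theory of relativity?",
    context="Albert Einstein proposed the theory of relativity in the early 20th century.",
)
print(result["answer"], result["score"])
```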
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
valurank/distilroberta-offensive | 1ece43c97b555234c1a6bd96b04b157cb5b4daa5 | 2022-06-08T20:31:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-offensive | 417 | null | transformers | 2,504 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-offensive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-offensive
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4526
- Acc: 0.8975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2321 | 1.0 | 1030 | 0.2404 | 0.9044 |
| 0.2539 | 2.0 | 2060 | 0.2139 | 0.9098 |
| 0.1997 | 3.0 | 3090 | 0.2561 | 0.9090 |
| 0.1663 | 4.0 | 4120 | 0.2409 | 0.9030 |
| 0.1515 | 5.0 | 5150 | 0.3000 | 0.9055 |
| 0.1035 | 6.0 | 6180 | 0.4170 | 0.9027 |
| 0.0466 | 7.0 | 7210 | 0.4526 | 0.8975 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
danielhou13/longformer-finetuned_papers_v2 | 4ee6b71b784563504145329e17dbf615a18e69d6 | 2022-06-28T18:10:53.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | danielhou13 | null | danielhou13/longformer-finetuned_papers_v2 | 417 | null | transformers | 2,505 | Entry not found |
google/roberta2roberta_L-24_bbc | ea245d42ee17b2ade5e6197cee59e5060be120e4 | 2020-12-11T21:43:05.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:xsum",
"arxiv:1907.12461",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | google | null | google/roberta2roberta_L-24_bbc | 416 | 1 | transformers | 2,506 | ---
language: en
license: apache-2.0
datasets:
- xsum
tags:
- summarization
---
# Roberta2Roberta_L-24_bbc EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_bbc/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on extreme summarization on the BBC XSum dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for extreme summarization, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_bbc")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_bbc")
article = """The problem is affecting people using the older
versions of the PlayStation 3, called the "Fat"
model.The problem isn't affecting the newer PS3
Slim systems that have been on sale since
September last year.Sony have also said they are
aiming to have the problem fixed shortly but is
advising some users to avoid using their console
for the time being."We hope to resolve this
problem within the next 24 hours," a statement
reads. "In the meantime, if you have a model other
than the new slim PS3, we advise that you do not
use your PS3 system, as doing so may result in
errors in some functionality, such as recording
obtained trophies, and not being able to restore
certain data."We believe we have identified that
this problem is being caused by a bug in the clock
functionality incorporated in the system."The
PlayStation Network is used by millions of people
around the world.It allows users to play their
friends at games like Fifa over the internet and
also do things like download software or visit
online stores."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Some Sony PlayStation gamers are being advised to stay away from the network because of a problem with the PlayStation 3 network.
```
|
hakurei/litv2-6B-rev2 | 5de12979db26d931ddf3cce2757a5dcb18354ec4 | 2022-06-17T04:51:39.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | hakurei | null | hakurei/litv2-6B-rev2 | 416 | null | transformers | 2,507 | https://wandb.ai/haruu/mesh-transformer-jax/runs/3u6yrxfv |
NTQAI/wav2vec2-large-japanese | cce86173b9b0eceb0fa3f9172b1fe5c04647bdeb | 2021-07-12T04:12:29.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"model-index"
] | automatic-speech-recognition | false | NTQAI | null | NTQAI/wav2vec2-large-japanese | 415 | 4 | transformers | 2,508 | ---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
model-index:
- name: Wav2Vec2 Japanese by NTQAI
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ja
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 81.3
- name: Test CER
type: cer
value: 21.9
---
# Wav2Vec2-Large-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut), [TEDxJP](https://github.com/laboroai/TEDxJP-10K) and some other data. This model was trained on public data only. If you want a model trained on more than 600 hours of data with higher accuracy, please contact [email protected]
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ja"
MODEL_ID = "NTQAI/wav2vec2-large-japanese"
SAMPLES = 3
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| 祖母は、おおむね機嫌よく、サイコロをころがしている。 | 祖母思い切れを最布ロぼがしている |
| 財布をなくしたので、交番へ行きます。 | 財布をなく時間ので交番でへ行きます |
| 飲み屋のおやじ、旅館の主人、医者をはじめ、交際のある人にきいてまわったら、みんな、私より収入が多いはずなのに、税金は安い。 | ロみ屋のおやし旅館の主人に医をはめ交載のあの人に聞いて回ったらみんな私より収入が多い発ずなのに請金は安い |
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ja"
MODEL_ID = "NTQAI/wav2vec2-large-japanese"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| NTQAI/wav2vec2-large-japanese | **73.10%** | **18.15%** |
| vumichien/wav2vec2-large-xlsr-japanese | 1108.86% | 23.40% |
| qqhann/w2v_hf_jsut_xlsr53 | 1012.18% | 70.77% |
|
bond005/wav2vec2-large-ru-golos | 414e4b3c42f74aa71392573715f6939154a8fb50 | 2022-06-21T20:08:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:SberDevices/Golos",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bond005 | null | bond005/wav2vec2-large-ru-golos | 415 | null | transformers | 2,509 | ---
language: ru
datasets:
- SberDevices/Golos
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Russian by Ivan Bondarenko
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (crowd)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 9.85
- name: Test CER
type: cer
value: 2.36
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (farfield)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 20.64
- name: Test CER
type: cer
value: 5.55
---
# Wav2Vec2-Large-Ru-Golos
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Russian using the [Sberdevices Golos](https://huggingface.co/datasets/SberDevices/Golos).
When using this model, make sure that your speech input is sampled at 16kHz.
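A minimal transcription sketch (not part of the original card), following the usual `Wav2Vec2ForCTC` recipe; the audio path is a placeholder and resampling to 16 kHz is assumed:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "bond005/wav2vec2-large-ru-golos"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Load a Russian speech file (placeholder path) and resample it to 16 kHz.
speech, _ = librosa.load("audio.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```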
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{bondarenko2022wav2vec2-large-ru-golos,
title={XLSR Wav2Vec2 Russian by Ivan Bondarenko},
author={Bondarenko, Ivan},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/bond005/wav2vec2-large-ru-golos}},
year={2022}
}
``` |
Davlan/xlm-roberta-base-finetuned-amharic | 238236a3b2b9ba6ff7bb7af888d614345e8d8842 | 2021-06-05T20:37:25.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"am",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-amharic | 414 | null | transformers | 2,510 |
---
language: am
datasets:
---
# xlm-roberta-base-finetuned-amharic
## Model description
**xlm-roberta-base-finetuned-amharic** is an **Amharic RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Amharic language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Amharic corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን <mask> መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | am_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 70.96 | 77.97
### BibTeX entry and citation info
By David Adelani
```
```
|
cambridgeltl/mirror-bert-base-uncased-sentence | 3d330e362b17625bc507197dad8cd0e9b242f25e | 2021-09-19T22:47:28.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08027",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/mirror-bert-base-uncased-sentence | 414 | null | transformers | 2,511 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
---
### cambridgeltl/mirror-bert-base-uncased-sentence
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf). Trained with unlabelled raw sentences, using [bert-base-uncased](https://huggingface.co/bert-base-uncased) as the base model. Please use mean-pooling over *all tokens* (including padded ones) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
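A minimal encoding sketch (not part of the original card) that follows the instruction above to mean-pool over *all* tokens, including padded ones; the example sentences are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/mirror-bert-base-uncased-sentence")
model = AutoModel.from_pretrained("cambridgeltl/mirror-bert-base-uncased-sentence")

sentences = ["A small cat sits on the mat.", "A kitten is sitting on a rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Mean-pool over all tokens (including padding), as the card requests.
embeddings = token_embeddings.mean(dim=1)
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```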
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
kykim/bertshared-kor-base | 8ea27677d006a547aee2a7753d1ff18a0346860c | 2021-02-23T11:49:50.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"ko",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kykim | null | kykim/bertshared-kor-base | 414 | 1 | transformers | 2,512 | ---
language: ko
---
# Bert base model for Korean
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
# only for pytorch in transformers
from transformers import BertTokenizerFast, EncoderDecoderModel
tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")
``` |
valurank/distilroberta-proppy | 4740f6aacb2cee756f792793c86736b845ea2650 | 2022-06-08T20:38:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-proppy | 414 | null | transformers | 2,513 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-proppy
results: []
---
# distilroberta-proppy
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the proppy corpus.
It achieves the following results on the evaluation set:
- Loss: 0.1838
- Acc: 0.9269
## Training and evaluation data
The training data is the [proppy corpus](https://zenodo.org/record/3271522). See [Proppy: Organizing the News
Based on Their Propagandistic Content](https://propaganda.qcri.org/papers/elsarticle-template.pdf) for details.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3179 | 1.0 | 732 | 0.2032 | 0.9146 |
| 0.2933 | 2.0 | 1464 | 0.2026 | 0.9206 |
| 0.2938 | 3.0 | 2196 | 0.1849 | 0.9252 |
| 0.3429 | 4.0 | 2928 | 0.1983 | 0.9221 |
| 0.2608 | 5.0 | 3660 | 0.2310 | 0.9106 |
| 0.2562 | 6.0 | 4392 | 0.1826 | 0.9270 |
| 0.2785 | 7.0 | 5124 | 0.1954 | 0.9228 |
| 0.307 | 8.0 | 5856 | 0.2056 | 0.9200 |
| 0.28 | 9.0 | 6588 | 0.1843 | 0.9259 |
| 0.2794 | 10.0 | 7320 | 0.1782 | 0.9299 |
| 0.2868 | 11.0 | 8052 | 0.1907 | 0.9242 |
| 0.2789 | 12.0 | 8784 | 0.2031 | 0.9216 |
| 0.2827 | 13.0 | 9516 | 0.1976 | 0.9229 |
| 0.2795 | 14.0 | 10248 | 0.1866 | 0.9255 |
| 0.2895 | 15.0 | 10980 | 0.1838 | 0.9269 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
allenai/PRIMERA-arxiv | 179e7c74d592955689b69a91f033ec4212eba52e | 2022-06-01T22:30:16.000Z | [
"pytorch",
"led",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/PRIMERA-arxiv | 414 | null | transformers | 2,514 | ---
license: apache-2.0
---
HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).
The original code can be found [here](https://github.com/allenai/PRIMER). You can find the script and notebook to train/evaluate the model in the original github repo.
* Note: due to differences between the implementations of the original Longformer and the Hugging Face LED model, the results of the converted models are slightly different. We ran a sanity check on both fine-tuned and non-fine-tuned models on the **MultiNews dataset**, and show the results below:
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | ----------- |----------- |----------- |
| PRIMERA | 42.0 | 13.6 | 20.8|
| PRIMERA-hf | 41.7 |13.6 | 20.5|
| PRIMERA(finetuned) | 49.9 | 21.1 | 25.9|
| PRIMERA-hf(finetuned) | 49.9 | 20.9 | 25.8|
You can use it as follows:
```
from transformers import (
AutoTokenizer,
LEDConfig,
LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config=LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
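# Hedged addition (not from the original card): a sketch of summarization with the loaded model.
# The document text and generation settings below are illustrative placeholders.
document = "Replace this with the (concatenated) documents you want to summarize."
inputs = tokenizer(document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))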
``` |
eleldar/language-detection | 735cc09f01b8b4d831228e0fa7640464e56a022e | 2022-05-24T10:06:00.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:1911.02116",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | eleldar | null | eleldar/language-detection | 414 | 1 | transformers | 2,515 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-language-detection
results: []
---
# Clone from [xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection)
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset.
## Model description
This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output).
For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.
## Intended uses & limitations
You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages:
`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`
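A minimal detection sketch (not part of the original card), using the `text-classification` pipeline; the example sentences are illustrative:

```python
from transformers import pipeline

# Each prediction is a language code such as 'en' or 'it', with a confidence score.
detector = pipeline("text-classification", model="eleldar/language-detection")
print(detector(["Brevity is the soul of wit.", "Amor, ch'a nullo amato amar perdona."]))
```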
## Training and evaluation data
The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets contain 10k each. The average accuracy on the test set is **99.6%** (this matches the average macro/weighted F1-score, since the test set is perfectly balanced). A more detailed evaluation is provided in the following table.
| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar |0.998 |0.996 |0.997 |500 |
|bg |0.998 |0.964 |0.981 |500 |
|de |0.998 |0.996 |0.997 |500 |
|el |0.996 |1.000 |0.998 |500 |
|en |1.000 |1.000 |1.000 |500 |
|es |0.967 |1.000 |0.983 |500 |
|fr |1.000 |1.000 |1.000 |500 |
|hi |0.994 |0.992 |0.993 |500 |
|it |1.000 |0.992 |0.996 |500 |
|ja |0.996 |0.996 |0.996 |500 |
|nl |1.000 |1.000 |1.000 |500 |
|pl |1.000 |1.000 |1.000 |500 |
|pt |0.988 |1.000 |0.994 |500 |
|ru |1.000 |0.994 |0.997 |500 |
|sw |1.000 |1.000 |1.000 |500 |
|th |1.000 |0.998 |0.999 |500 |
|tr |0.994 |0.992 |0.993 |500 |
|ur |1.000 |1.000 |1.000 |500 |
|vi |0.992 |1.000 |0.996 |500 |
|zh |1.000 |1.000 |1.000 |500 |
### Benchmarks
As a baseline to compare `xlm-roberta-base-language-detection` against, we have used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we have used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided by the table below.
| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar |0.990 |0.970 |0.980 |500 |
|bg |0.998 |0.964 |0.981 |500 |
|de |0.992 |0.944 |0.967 |500 |
|el |1.000 |0.998 |0.999 |500 |
|en |1.000 |1.000 |1.000 |500 |
|es |1.000 |0.968 |0.984 |500 |
|fr |0.996 |1.000 |0.998 |500 |
|hi |0.949 |0.976 |0.963 |500 |
|it |0.990 |0.980 |0.985 |500 |
|ja |0.927 |0.988 |0.956 |500 |
|nl |0.980 |1.000 |0.990 |500 |
|pl |0.986 |0.996 |0.991 |500 |
|pt |0.950 |0.996 |0.973 |500 |
|ru |0.996 |0.974 |0.985 |500 |
|sw |1.000 |1.000 |1.000 |500 |
|th |1.000 |0.996 |0.998 |500 |
|tr |0.990 |0.968 |0.979 |500 |
|ur |0.998 |0.996 |0.997 |500 |
|vi |0.971 |0.990 |0.980 |500 |
|zh |1.000 |1.000 |1.000 |500 |
## Training procedure
Fine-tuning was done via the `Trainer` API.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
The validation results on the `valid` split of the Language Identification dataset are summarised here below.
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2492 | 1.0 | 1094 | 0.0149 | 0.9969 | 0.9969 |
| 0.0101 | 2.0 | 2188 | 0.0103 | 0.9977 | 0.9977 |
In short, it achieves the following results on the validation set:
- Loss: 0.0101
- Accuracy: 0.9977
- F1: 0.9977
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | e1ef95ab7a037950f3a606b9a56760cf94701d3d | 2021-09-23T15:48:54.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | 413 | 1 | asteroid | 2,516 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
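As an illustrative sketch (not part of the original card), the checkpoint can typically be loaded through Asteroid's pretrained-model interface; the exact method names and the audio path are assumptions:

```python
# Hedged sketch: assumes Asteroid's BaseModel.from_pretrained / separate interface.
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepclean_16k")

# Separate a 16 kHz two-speaker mixture (placeholder path); the estimated
# sources are written next to the input file.
model.separate("mixture_16k.wav")
```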
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set:
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. |
ckiplab/albert-tiny-chinese-pos | 0759335eb37766726e12e173bd422237b70ad377 | 2022-05-10T03:28:11.000Z | [
"pytorch",
"albert",
"token-classification",
"zh",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | token-classification | false | ckiplab | null | ckiplab/albert-tiny-chinese-pos | 413 | null | transformers | 2,517 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributers
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-pos')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
facebook/maskformer-swin-small-coco | 3b42291baa14175ec3b47525de9c7e61f72c3ab7 | 2022-04-04T16:01:56.000Z | [
"pytorch",
"maskformer",
"dataset:coco",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-small-coco | 413 | null | transformers | 2,518 | ---
license: apache-2.0
tags:
- vision
- image-segmentatiom
datasets:
- coco
---
# MaskFormer
MaskFormer model trained on COCO. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
rjbownes/Magic-The-Generating | 07237dc6dfab8654be4ffd6e067e4671726420b5 | 2021-05-23T12:17:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rjbownes | null | rjbownes/Magic-The-Generating | 412 | null | transformers | 2,519 | ---
widget:
- text: "Even the Dwarves"
- text: "The secrets of"
---
# Model name
Magic The Generating
## Model description
This is a fine-tuned GPT-2 model trained on a corpus of all available English-language Magic the Gathering card flavour texts.
## Intended uses & limitations
This is intended only for use in generating new, novel, and sometimes surprising, MtG like flavour texts.
#### How to use
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("rjbownes/Magic-The-Generating")
model = GPT2LMHeadModel.from_pretrained("rjbownes/Magic-The-Generating")
```
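A short generation sketch (not in the original card), reusing the tokenizer and model loaded above; the prompt and sampling settings are illustrative:

```python
# Hypothetical sampling settings; adjust to taste.
prompt = "Even the Dwarves"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    do_sample=True,
    max_length=60,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```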
#### Limitations and bias
The training corpus was surprisingly small, only ~29000 cards; I had suspected there were more. This might mean there is a real limit to the number of entirely original strings this will generate.
This is also based only on the 117M-parameter GPT-2, so retraining with the medium, large or XL models is an obvious upgrade. Despite this, the outputs I tested were very convincing!
## Training data
The data was 29222 MtG card flavour texts. The model was based on the "gpt2" pretrained transformer: https://huggingface.co/gpt2.
## Training procedure
Only English language MtG flavour texts were scraped from the [Scryfall](https://scryfall.com/) API. Empty strings and any non-UTF-8 encoded tokens were removed leaving 29222 entries.
This was trained using google Colab with a T4 instance. 4 epochs, adamW optimizer with default parameters and a batch size of 32. Token embedding lengths were capped at 98 tokens as this was the longest string and an attention mask was added to the training model to ignore all padding tokens.
## Eval results
Average Training Loss: 0.44866578806635815.
Validation loss: 0.5606984243444775.
Sample model outputs:
1. "Every branch a crossroads, every vine a swift steed."
—Gwendlyn Di Corci
2. "The secrets of this world will tell their masters where to strike if need be."
—Noyan Dar, Tazeem roilmage
3. "The secrets of nature are expensive. You'd be better off just to have more freedom."
4. "Even the Dwarves knew to leave some stones unturned."
5. "The wise always keep an ear open to the whispers of power."
### BibTeX entry and citation info
```bibtex
@article{BownesLM,
title={Fine Tuning GPT-2 for Magic the Gathering flavour text generation.},
author={Richard J. Bownes},
journal={Medium},
year={2020}
}
```
|
valurank/distilroberta-propaganda-2class | 7f136a28a2485f117da9b2dd5612f0f2a506a07b | 2022-06-08T20:39:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-propaganda-2class | 412 | 1 | transformers | 2,520 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-propaganda-2class
results: []
---
# distilroberta-propaganda-2class
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the QCRI propaganda dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5087
- Acc: 0.7424
## Training and evaluation data
Training data is the 19-class QCRI propaganda data, with all propaganda classes collapsed to a single catch-all 'prop' class.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5737 | 1.0 | 493 | 0.5998 | 0.6515 |
| 0.4954 | 2.0 | 986 | 0.5530 | 0.7080 |
| 0.4774 | 3.0 | 1479 | 0.5331 | 0.7258 |
| 0.4846 | 4.0 | 1972 | 0.5247 | 0.7339 |
| 0.4749 | 5.0 | 2465 | 0.5392 | 0.7199 |
| 0.502 | 6.0 | 2958 | 0.5124 | 0.7466 |
| 0.457 | 7.0 | 3451 | 0.5167 | 0.7432 |
| 0.4899 | 8.0 | 3944 | 0.5160 | 0.7428 |
| 0.4833 | 9.0 | 4437 | 0.5280 | 0.7339 |
| 0.5114 | 10.0 | 4930 | 0.5112 | 0.7436 |
| 0.4419 | 11.0 | 5423 | 0.5060 | 0.7525 |
| 0.4743 | 12.0 | 5916 | 0.5031 | 0.7547 |
| 0.4597 | 13.0 | 6409 | 0.5043 | 0.7517 |
| 0.4861 | 14.0 | 6902 | 0.5055 | 0.7487 |
| 0.499 | 15.0 | 7395 | 0.5091 | 0.7419 |
| 0.501 | 16.0 | 7888 | 0.5037 | 0.7521 |
| 0.4659 | 17.0 | 8381 | 0.5087 | 0.7424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.7.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ishan/bert-base-uncased-mnli | d88bc3e2a64c17016f3638b1c5379d8709a1d3d7 | 2021-05-19T20:32:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:MNLI",
"arxiv:1810.04805",
"transformers"
] | text-classification | false | ishan | null | ishan/bert-base-uncased-mnli | 411 | 1 | transformers | 2,521 | ---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# bert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [bert-base-uncased](https://huggingface.co/bert-base-uncased) and fine-tuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8456 |
| Mismatched | 0.8484 |
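A minimal inference sketch (not part of the original card); note that the mapping from logit index to entailment/neutral/contradiction is read from the checkpoint's config and is assumed to be populated:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ishan/bert-base-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained("ishan/bert-base-uncased-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits

# Label names are taken from the model config rather than hard-coded.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```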
|
sentence-transformers/paraphrase-albert-base-v2 | ba7fa0a9a209d687c911afb21ced12769ac05c7e | 2022-06-15T22:21:37.000Z | [
"pytorch",
"tf",
"albert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/paraphrase-albert-base-v2 | 411 | null | sentence-transformers | 2,522 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-albert-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-albert-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-base-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
unc-nlp/lxmert-vqa-uncased | 945e6172c6c93befbe8d64683f22d34cb567e8ab | 2020-09-10T17:57:42.000Z | [
"pytorch",
"lxmert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | unc-nlp | null | unc-nlp/lxmert-vqa-uncased | 411 | null | transformers | 2,523 | Entry not found |
QuoQA-NLP/KE-T5-En2Ko-Base | 3a4e5fdb3663e678336c7dce8df1e46cae889c68 | 2022-07-08T22:35:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | QuoQA-NLP | null | QuoQA-NLP/KE-T5-En2Ko-Base | 411 | null | transformers | 2,524 | Entry not found |
HooshvareLab/albert-fa-zwnj-base-v2 | 2315c0d4ddd6bd68f7703c5651978706bf8aff37 | 2021-03-16T16:36:38.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"fa",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | HooshvareLab | null | HooshvareLab/albert-fa-zwnj-base-v2 | 410 | null | transformers | 2,525 | ---
language: fa
license: apache-2.0
---
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
> Call it little_berty
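A minimal fill-mask sketch (not part of the original card); the Persian example sentence ("Tehran is the capital of ___.") is illustrative, and the mask token is taken from the tokenizer rather than hard-coded:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="HooshvareLab/albert-fa-zwnj-base-v2")
mask = unmasker.tokenizer.mask_token

# "Tehran is the capital of [MASK]." -- illustrative input.
print(unmasker(f"تهران پایتخت {mask} است."))
```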
### BibTeX entry and citation info
Please cite in your publication as the following:
```bibtex
@misc{ALBERTPersian,
author = {Hooshvare Team},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. |
mrm8488/t5-base-finetuned-imdb-sentiment | d9d412418ff1a359b7783eeebd5b318791f00765 | 2020-12-11T21:55:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:imdb",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-imdb-sentiment | 408 | null | transformers | 2,526 | ---
language: en
datasets:
- imdb
---
# T5-base fine-tuned for Sentiment Analysis 🎞️👍👎
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on [IMDB](https://huggingface.co/datasets/imdb) dataset for **Sentiment Analysis** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Sentiment analysis) - Dataset 📚
[IMDB](https://huggingface.co/datasets/imdb)
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of **25,000** highly polar movie reviews for training, and **25,000** for testing.
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
## Test set metrics 🧾
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative | 0.95 | 0.95 | 0.95 | 12500 |
| positive | 0.95 | 0.95 | 0.95 | 12500 |
| accuracy | | | 0.95 | 25000 |
| macro avg | 0.95 | 0.95 | 0.95 | 25000 |
| weighted avg | 0.95 | 0.95 | 0.95 | 25000 |
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-imdb-sentiment")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-imdb-sentiment")
def get_sentiment(text):
input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
output = model.generate(input_ids=input_ids,
max_length=2)
dec = [tokenizer.decode(ids) for ids in output]
label = dec[0]
return label
get_sentiment("I dislike a lot that film")
# Output: 'negative'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
salesken/xlm-roberta-base-finetuned-mnli-cross-lingual-transfer | 713c697b0cc891f9d05642592426678786293828 | 2021-07-21T09:25:02.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"dataset:mnli",
"dataset:xnli",
"transformers",
"sentence-similarity",
"zero-shot-classification",
"salesken",
"hindi",
"cross-lingual"
] | text-classification | false | salesken | null | salesken/xlm-roberta-base-finetuned-mnli-cross-lingual-transfer | 408 | 1 | transformers | 2,527 | ---
datasets:
- mnli
- xnli
tags:
- sentence-similarity
- transformers
- text-classification
- zero-shot-classification
- salesken
- hindi
- cross-lingual
inference: false
---
# XLM-R Base
A multilingual model is pre-trained on text coming from a mix of languages. We will look at a multilingual model called XLM-R from Facebook, also known as Cross-Lingual Model - RoBERTa. Unlike BERT, which was pre-trained on English Wikipedia and BookCorpus, XLM-R was pre-trained on Wikipedia and Common Crawl data from 100 different languages: a single model trained on 100 different languages.
* So, there is a single shared vocabulary with 250K tokens to cover all 100 languages.
* There is no special marker which suggests which language it is.
* It wasn't trained with parallel data (i.e. same sentence in multiple languages)
# Cross Lingual Transfer (Zero-shot transfer)
Say you are building a classifier which will classify Hindi comments as toxic or non-toxic. But say you have a training dataset in English of 225K labeled comments called "Wikipedia Toxic Comments". The cross-lingual transfer ability of XLM-R will let you fine-tune XLM-R on the Wikipedia Toxic Comments dataset in English, and then apply it to Hindi comments. So, XLM-R is able to take its task-specific knowledge that it learned in English and apply it to Hindi, even though we never showed it any Hindi examples. It is the concept of transfer learning applied from one language to another, which is called Cross-Lingual Transfer. The same could be done for any other low-resource language.
We will see that training XLM-R purely on ~400K English samples actually yields better results than fine-tuning a monolingual Hindi model on a much smaller Hindi dataset. This is also referred to as Zero-Shot Learning or Cross-Lingual Transfer.
# Our Approach -
We will use MNLI, which provides us with a large number of English training examples to fine-tune XLM-R on the general task of NLI. XNLI will provide us with a small number of NLI test examples in different languages; for us it will be Hindi. MNLI (Multi-Genre NLI) has 392,000 training examples, 20,000 development examples and 20,000 test examples. On the other hand, XNLI is a small subset of examples from the MNLI dataset which have been human-translated to 14 different languages, and yes, Hindi is a part of it. For each language we have 5,000 test set sentence pairs and 2,500 development set sentence pairs.
# Fine-tuning Hyperparameters -
- learning rate: 5e-6 with linear learning rate scheduling over the total steps
- epochs: 2
- batch_size: 32
- GPU: Tesla T4
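A minimal sketch of how these hyperparameters could map onto a standard 🤗 `Trainer` setup; this is an illustration with an assumed tokenized MNLI dataset, not the exact script used for this checkpoint:
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Base model and tokenizer; MNLI has 3 labels (entailment / neutral / contradiction).
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

training_args = TrainingArguments(
    output_dir="xlmr-mnli",
    learning_rate=5e-6,               # as listed above
    num_train_epochs=2,
    per_device_train_batch_size=32,
    lr_scheduler_type="linear",       # linear schedule over the total number of steps
)

# `train_dataset` is assumed to be MNLI tokenized into premise/hypothesis pairs.
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```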
# Using this model
This model can be used with Hugging Face Transformers. I have created a pipeline for sentence pair classification. Hope it will be useful.
```python
!pip3 install transformers
import transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("salesken/xlm-roberta-base-finetuned-mnli-cross-lingual-transfer")
model = AutoModelForSequenceClassification.from_pretrained("salesken/xlm-roberta-base-finetuned-mnli-cross-lingual-transfer")
device = torch.device('cuda')
print('GPU: ', torch.cuda.get_device_name(0))
model.to(device)
# Here I am giving the sentence pairs and also the ground truth values (1 : Similar, 0 : NotSimilar)
# we have purposely used Hindi-Hindi, Hindi-English, transliterated Hindi-English, Kannada and Spanish pairs, and also code-switching of languages
# You can try your own pairs and languages
u_premise = ['क्या आपका बेटा गणित में रूचि रखता है',
'क्या आपका बेटा गणित में रूचि रखता है',
'क्या आपका बेटा गणित में रूचि रखता है',
'मैं त्योहार के दौरान घर जा रहा हूँ',
'क्या आपका बेटा गणित में रूचि रखता है',
'hum ek cross lingual model banane ka soch rahe hai',
'we are planning to build a cross-lingual language model',
'Hi. A very good morning to all of you. Lets get started. ',
'शुभ प्रभात',
'how are you doing today sir ?',
'how are you doing today sir ?',
'Argentina and Brazil will play Copa America finals this weekend',
'I want to visit Spain and Portugal next year'
]
u_hypothesis = ['क्या आपकी बेटी को विज्ञान में दिलचस्पी है',
'is your daughther interested in science',
'is your son interested in stem subjects',
'इस साल मैं त्योहार के दौरान घर नहीं जाऊंगा',
'इस साल मैं त्योहार के दौरान घर नहीं जाऊंगा',
'हम एक क्रॉस-लिंगुअल लैंग्वेज मॉडल बनाने की योजना बना रहे हैं',
'hum ek क्रॉस-लिंगुअल model bana rahe hai',
'शुभ प्रभात',
'subh prabhat to all of you',
'ಇಂದು ನೀವು ಹೇಗಿದ್ದೀರಿ ಸರ್?',
'ನನ್ನ ಸ್ನಾತಕೋತ್ತರರಿಗಾಗಿ ನಾನು ಯಂತ್ರ ಕಲಿಕೆ ಮತ್ತು ಡೇಟಾ ವಿಜ್ಞಾನವನ್ನು ಮಾಡುತ್ತಿದ್ದೇನೆ',
'Argentina y Brasil jugarán la final de la Copa América este fin de semana',
'Puedes ver la aurora boreal desde Noruega']
ground_truth = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0]
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
import textwrap
import pandas as pd
labels_ar = []
input_ids_ar = []
attn_masks_ar = []
max_len = 128
batch_size = 32
label_names = ['entailment', 'neutral', 'contradiction']
for u_prem, u_hyp in zip(u_premise, u_hypothesis):
encoded_dict = tokenizer.encode_plus(u_prem, u_hyp,
max_length=max_len,
padding='max_length',
truncation=True,
return_tensors='pt')
# Add this example to our lists.
input_ids_ar.append(encoded_dict['input_ids'])
attn_masks_ar.append(encoded_dict['attention_mask'])
# Convert each Python list of Tensors into a 2D Tensor matrix.
input_ids_ar = torch.cat(input_ids_ar, dim=0)
attn_masks_ar = torch.cat(attn_masks_ar, dim=0)
# Construct a TensorDataset from the encoded examples.
prediction_dataset = TensorDataset(input_ids_ar, attn_masks_ar)
# And a dataloader for handling batching.
prediction_dataloader = DataLoader(prediction_dataset, batch_size=batch_size)
# Put model in evaluation mode
model.eval()
# Tracking variables
predictions , true_labels = [], []
count = 0
# Predict
for batch in prediction_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask = batch
with torch.no_grad():
# Forward pass, calculate logit predictions
outputs = model(b_input_ids,
attention_mask=b_input_mask)
logits = outputs[0]
# Move logits to CPU
logits = logits.detach().cpu().numpy()
# Store predictions and true labels
predictions.append(logits)
# Combine the results across all batches.
flat_predictions = np.concatenate(predictions, axis=0)
# For each sample, pick the label (0, 1, or 2) with the highest score.
predicted_labels = np.argmax(flat_predictions, axis=1).flatten()
wrapper = textwrap.TextWrapper(width = 80, initial_indent=' ',subsequent_indent=' ')
def softmax(logits):
return np.exp(logits) / np.sum(np.exp(logits))
finalOut = []
for sentA, sentB, pred_logits, grnd_truth in zip(u_premise, u_hypothesis, flat_predictions, ground_truth):
probScore = softmax(pred_logits)
maxScore = {'idx': 0, 'score': 0}
for idx, score in enumerate(probScore):
if score > maxScore['score']:
maxScore['score'] = score
maxScore['idx'] = idx
    # treat a confident entailment prediction as "Similar"; everything else as "NotSimilar"
    if maxScore['score'] > 0.4 and maxScore['idx'] == 0:
        label = 'Similar'
    else:
        label = 'NotSimilar'
    finalOut.append((sentA, sentB, label, grnd_truth, maxScore['score']))
pd.set_option("display.max_colwidth", -1)
pd.DataFrame(finalOut, columns = ['sentence-1', 'sentence-2', 'semantic_matching', 'ground_truth', 'probability_score'])
```
|
sshleifer/distilbart-xsum-6-6 | 9a31391363021090d1fceb38a9c83ecd0d0dc020 | 2021-06-14T08:25:26.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"dataset:xsum",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | sshleifer | null | sshleifer/distilbart-xsum-6-6 | 408 | null | transformers | 2,528 | ---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
- xsum
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---
### Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
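A minimal sketch of that usage (the input article is a placeholder):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-xsum-6-6")
tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-xsum-6-6")

article = "PG&E stated it scheduled the blackouts in response to forecasts for high winds."
inputs = tokenizer(article, return_tensors="pt", truncation=True)

# Generate a short, XSum-style abstractive summary.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```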
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
|
squirro/albert-base-v2-squad_v2 | f64291edf2204d5c9044c2d45867cf31bab5e054 | 2022-06-29T08:54:25.000Z | [
"pytorch",
"tf",
"onnx",
"albert",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | squirro | null | squirro/albert-base-v2-squad_v2 | 408 | 1 | transformers | 2,529 | ---
license: apache-2.0
language: en
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-squad_v2
results:
- task:
name: Question Answering
type: question-answering
dataset:
type: squad_v2 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: The Stanford Question Answering Dataset
args: en
metrics:
- type: eval_exact
value: 78.8175
- type: eval_f1
value: 81.9984
- type: eval_HasAns_exact
value: 75.3374
- type: eval_HasAns_f1
value: 81.7083
- type: eval_NoAns_exact
value: 82.2876
- type: eval_NoAns_f1
value: 82.2876
---
# albert-base-v2-squad_v2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
## Model description
This model is fine-tuned on the extractive question answering task -- The Stanford Question Answering Dataset -- [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/).
For convenience this model is prepared to be used with the frameworks `PyTorch`, `Tensorflow` and `ONNX`.
## Intended uses & limitations
This model can handle mismatched question-context pairs. Make sure to specify `handle_impossible_answer=True` when using `QuestionAnsweringPipeline`.
__Example usage:__
```python
>>> from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> tokenizer = AutoTokenizer.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> qa_model = QuestionAnsweringPipeline(model, tokenizer)
>>> qa_model(
>>> question="What's your name?",
>>> context="My name is Clara and I live in Berkeley.",
>>> handle_impossible_answer=True # important!
>>> )
{'score': 0.9027367830276489, 'start': 11, 'end': 16, 'answer': 'Clara'}
```
## Training and evaluation data
Training and evaluation was done on [SQuAD2.0](https://huggingface.co/datasets/squad_v2).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| key | value |
|:-------------------------|--------------:|
| epoch | 3 |
| eval_HasAns_exact | 75.3374 |
| eval_HasAns_f1 | 81.7083 |
| eval_HasAns_total | 5928 |
| eval_NoAns_exact | 82.2876 |
| eval_NoAns_f1 | 82.2876 |
| eval_NoAns_total | 5945 |
| eval_best_exact | 78.8175 |
| eval_best_exact_thresh | 0 |
| eval_best_f1 | 81.9984 |
| eval_best_f1_thresh | 0 |
| eval_exact | 78.8175 |
| eval_f1 | 81.9984 |
| eval_samples | 12171 |
| eval_total | 11873 |
| train_loss | 0.775293 |
| train_runtime | 1402 |
| train_samples | 131958 |
| train_samples_per_second | 282.363 |
| train_steps_per_second | 1.104 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
---
# About Us
<img src="https://squirro.com/wp-content/themes/squirro/img/squirro_logo.svg" alt="Squirro Logo" width="250"/>
Squirro marries data from any source with your intent, and your context to intelligently augment decision-making - right when you need it!
An Insight Engine at its core, Squirro works with global organizations, primarily in financial services, public sector, professional services, and manufacturing, among others. Customers include Bank of England, European Central Bank (ECB), Deutsche Bundesbank, Standard Chartered, Henkel, Armacell, Candriam, and many other world-leading firms.
Founded in 2012, Squirro is currently present in Zürich, London, New York, and Singapore. Further information about AI-driven business insights can be found at http://squirro.com.
## Social media profiles:
- Redefining AI Podcast (Spotify): https://open.spotify.com/show/6NPLcv9EyaD2DcNT8v89Kb
- Redefining AI Podcast (Apple Podcasts): https://podcasts.apple.com/us/podcast/redefining-ai/id1613934397
- Squirro LinkedIn: https://www.linkedin.com/company/squirroag
- Squirro Academy LinkedIn: https://www.linkedin.com/showcase/the-squirro-academy
- Twitter: https://twitter.com/Squirro
- Facebook: https://www.facebook.com/squirro
- Instagram: https://www.instagram.com/squirro/ |
Sheel/DialoGPT-small-harrypotter | 446603b6bad1a87aea7e0c7124b79773337ade93 | 2021-10-24T22:13:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Sheel | null | Sheel/DialoGPT-small-harrypotter | 407 | null | transformers | 2,530 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP | 1d9bf246aabd98e00bf6af2c70470801f86d4249 | 2022-07-26T00:01:00.000Z | [
"pytorch",
"longt5",
"text2text-generation",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | pszemraj | null | pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP | 407 | 0 | transformers | 2,531 | ---
tags:
- summarization
- summary
- booksum
- long-document
- long-form
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: False
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 34.5471
verified: true
- name: ROUGE-2
type: rouge
value: 5.8126
verified: true
- name: ROUGE-L
type: rouge
value: 15.5684
verified: true
- name: ROUGE-LSUM
type: rouge
value: 31.878
verified: true
- name: loss
type: loss
value: .nan
verified: true
- name: gen_len
type: gen_len
value: 294.7317
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 25.4769
verified: true
- name: ROUGE-2
type: rouge
value: 6.0336
verified: true
- name: ROUGE-L
type: rouge
value: 17.9249
verified: true
- name: ROUGE-LSUM
type: rouge
value: 22.0692
verified: true
- name: loss
type: loss
value: .nan
verified: true
- name: gen_len
type: gen_len
value: 53.8242
verified: true
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
> NOTE: this is still a work-in-progress (WIP) and not completed/converged by any means, but sharing to maybe save some time for others :)
## Updates
_As I make updates to this WIP checkpoint I will post a note here._
- July 8, 2022: add checkpoint with ~4 epochs of training on A100, equating to approx 350 steps of functional batch size 128
- July 4, 2022: add checkpoint with 6 additional epochs of training with the dataset summary outputs filtered to 1024 **tokens**, resolving prior issue of short summaries.
## About
- a checkpoint of [Stancld/longt5-tglobal-large-16384-pubmed-3k_steps](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps) trained on `kmfoda/booksum` for about 26 epochs
- max input lengths during training vary between 8192 and 16384 tokens depending on GPU availability. This checkpoint was **trained with 16384 tokens as the max input length for the final 10+ epochs**
## Comparisons
- compare to [pszemraj/led-large-book-summary](https://huggingface.co/pszemraj/led-large-book-summary).
- **inference API has been disabled because it's too compute-intensive :/**
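Since the hosted widget is off, here is a minimal local inference sketch (generation parameters are illustrative, not tuned):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

long_text = "Replace this with a long document (up to roughly 16384 tokens)."
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=16384)

summary_ids = model.generate(inputs["input_ids"], max_length=512, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```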
|
WangZeJun/simbert-base-chinese | b5c82a8ab1e4bcac799620fc4d870aae087b0c71 | 2022-06-14T09:17:59.000Z | [
"pytorch",
"transformers"
] | null | false | WangZeJun | null | WangZeJun/simbert-base-chinese | 406 | 2 | transformers | 2,532 | https://github.com/zejunwang1/bert4vec |
ethanyt/guwen-ner | 591273ff61bb5eb453f47c300f577058d6d2ee15 | 2021-06-17T09:23:09.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"zh",
"transformers",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | ethanyt | null | ethanyt/guwen-ner | 405 | 2 | transformers | 2,533 | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "及秦始皇,灭先代典籍,焚书坑儒,天下学士逃难解散,我先人用藏其家书于屋壁。汉室龙兴,开设学校,旁求儒雅,以阐大猷。济南伏生,年过九十,失其本经,口以传授,裁二十馀篇,以其上古之书,谓之尚书。百篇之义,世莫得闻。"
---
# Guwen NER
A Classical Chinese Named Entity Recognizer.
Note: There are some problems with decoding using the default sequence classification model. Use the CRF model to achieve the best results. For CRF-related code, please refer to
[Guwen Models](https://github.com/ethan-yt/guwen-models).
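For a quick look at the raw (non-CRF) predictions, a minimal sketch using the token-classification pipeline, keeping the decoding caveat above in mind:
```python
from transformers import pipeline

# Raw token-classification head; for the best decoding use the CRF code from Guwen Models.
ner = pipeline("token-classification", model="ethanyt/guwen-ner", aggregation_strategy="simple")
print(ner("及秦始皇,灭先代典籍,焚书坑儒,天下学士逃难解散"))
```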
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a> |
google/pegasus-wikihow | 3243b584d0d8cdc6e1a28ffd2ec73b623818b2da | 2020-10-22T16:33:36.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"arxiv:1912.08777",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | google | null | google/pegasus-wikihow | 405 | null | transformers | 2,534 | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k steps (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using a 20% uniform noise on importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers of the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newline and loses this information.
- we update the BigPatent dataset to preserve casing; some format cleanings are also changed, please refer to the change in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
google/tapas-small | 856e82f8bad889c575161ea1752c67dd852f56a8 | 2021-11-29T10:12:54.000Z | [
"pytorch",
"tf",
"tapas",
"feature-extraction",
"en",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"TapasModel",
"license:apache-2.0"
] | feature-extraction | false | google | null | google/tapas-small | 405 | null | transformers | 2,535 | ---
language: en
tags:
- tapas
- TapasModel
license: apache-2.0
---
# TAPAS small model
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="no_reset"`, which corresponds to `tapas_inter_masklm_small`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
jointly train these randomly initialized classification heads with the base model on a downstream task.
## Intended uses & limitations
You can use the raw model for getting hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.
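A minimal sketch of extracting hidden states for a table-question pair (the table is a toy example):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasModel

tokenizer = TapasTokenizer.from_pretrained("google/tapas-small")
model = TapasModel.from_pretrained("google/tapas-small")

# TAPAS expects every table cell as a string.
table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
queries = ["How old is Brad Pitt?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```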
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Pre-training
The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.
The optimizer used is Adam with a learning rate of 5e-5, and a warmup
ratio of 0.01.
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ionite/DialoGPT-large-Sh0rtiAI-v2 | 83acb3785e6f4c0d6aa99d885a8970ac522ca556 | 2021-12-02T15:38:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ionite | null | ionite/DialoGPT-large-Sh0rtiAI-v2 | 405 | null | transformers | 2,536 | ---
tags:
- conversational
---
# Sh0rtiAI v2 DialoGPT Model |
EMBEDDIA/sloberta | 49db152a3bea54016773be3cce126a9b5ece88da | 2021-11-24T13:46:22.000Z | [
"pytorch",
"camembert",
"fill-mask",
"sl",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | EMBEDDIA | null | EMBEDDIA/sloberta | 404 | 1 | transformers | 2,537 | ---
language:
- sl
license: cc-by-sa-4.0
---
# Usage
Load in transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/sloberta")
```
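For a quick sanity check, a fill-mask sketch (assuming the CamemBERT-style `<mask>` token; the Slovene sentence is an arbitrary example):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="EMBEDDIA/sloberta")
# "Ljubljana is the capital <mask> of Slovenia."
print(unmasker("Ljubljana je glavno <mask> Slovenije."))
```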
# SloBERTa
SloBERTa model is a monolingual Slovene BERT-like model. It is closely related to French Camembert model https://camembert-model.fr/. The corpora used for training the model have 3.47 billion tokens in total. The subword vocabulary contains 32,000 tokens. The scripts and programs used for data preparation and training the model are available on https://github.com/clarinsi/Slovene-BERT-Tool
SloBERTa was trained for 200,000 iterations or about 98 epochs.
## Corpora
The following corpora were used for training the model:
* Gigafida 2.0
* Kas 1.0
* Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora)
* Slovenian parliamentary corpus siParl 2.0
* slWaC
|
jonatasgrosman/wav2vec2-xls-r-1b-english | 26ddad7c80aa4b7b5799540bd6c4c46c52c36789 | 2022-07-27T23:39:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-xls-r-1b-english | 403 | 5 | transformers | 2,538 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 English by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
config: en
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 21.05
- name: Test CER
type: cer
value: 8.44
- name: Test WER (+LM)
type: wer
value: 17.31
- name: Test CER (+LM)
type: cer
value: 7.77
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: en
metrics:
- name: Dev WER
type: wer
value: 20.53
- name: Dev CER
type: cer
value: 9.31
- name: Dev WER (+LM)
type: wer
value: 17.7
- name: Dev CER (+LM)
type: cer
value: 8.93
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: en
metrics:
- name: Test WER
type: wer
value: 17.88
---
# Fine-tuned XLS-R 1B model for speech recognition in English
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on English using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual LibriSpeech](https://www.openslr.org/94/), [TED-LIUMv3](https://www.openslr.org/51/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-english --dataset mozilla-foundation/common_voice_8_0 --config en --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-english,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {E}nglish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-english}},
year={2022}
}
```
|
veb/twitch-bert-base-cased-pytorch | d611ffd8bdcf38b9268516638f959eb933480c21 | 2022-06-26T20:10:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | veb | null | veb/twitch-bert-base-cased-pytorch | 403 | null | transformers | 2,539 | Entry not found |
GamerMan02/DialoGPT-medium-gamerbot | 9fcfcfb417f37dbc0cac4d096d15cb00783d02d2 | 2021-09-22T00:52:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | GamerMan02 | null | GamerMan02/DialoGPT-medium-gamerbot | 402 | null | transformers | 2,540 | ---
tags:
- conversational
---
# Gamer Bot DialoGPT Model |
google/realm-cc-news-pretrained-encoder | 6dfb09adaa0c6f7f4b5df1ee4a6cff0fea111ad2 | 2022-01-06T06:25:03.000Z | [
"pytorch",
"realm",
"en",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/realm-cc-news-pretrained-encoder | 402 | null | transformers | 2,541 | ---
language: en
license: apache-2.0
---
# realm-cc-news-pretrained-encoder
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmKnowledgeAugEncoder
encoder = RealmKnowledgeAugEncoder.from_pretrained("google/realm-cc-news-pretrained-encoder")
```
|
monologg/koelectra-base-finetuned-naver-ner | f30657e8e2330e1010ac9cf37e970afe4349c36b | 2020-05-13T03:51:43.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | monologg | null | monologg/koelectra-base-finetuned-naver-ner | 402 | null | transformers | 2,542 | Entry not found |
raynardj/classical-chinese-punctuation-guwen-biaodian | 2f8c9ca2293102348c5d4dd2f4c95c6b760a135a | 2021-11-29T14:39:52.000Z | [
"pytorch",
"bert",
"token-classification",
"zh",
"transformers",
"ner",
"punctuation",
"古文",
"文言文",
"ancient",
"classical",
"autotrain_compatible"
] | token-classification | false | raynardj | null | raynardj/classical-chinese-punctuation-guwen-biaodian | 402 | 3 | transformers | 2,543 | ---
language:
- zh
tags:
- ner
- punctuation
- 古文
- 文言文
- ancient
- classical
widget:
- text: "郡邑置夫子庙于学以嵗时释奠盖自唐贞观以来未之或改我宋有天下因其制而损益之姑苏当浙右要区规模尤大更建炎戎马荡然无遗虽修学宫于荆榛瓦砾之余独殿宇未遑议也每春秋展礼于斋庐已则置不问殆为阙典今寳文阁直学士括苍梁公来牧之明年实绍兴十有一禩也二月上丁修祀既毕乃愓然自咎揖诸生而告之曰天子不以汝嘉为不肖俾再守兹土顾治民事神皆守之职惟是夫子之祀教化所基尤宜严且谨而拜跪荐祭之地卑陋乃尔其何以掲防妥灵汝嘉不敢避其责曩常去此弥年若有所负尚安得以罢輭自恕复累后人乎他日或克就绪愿与诸君落之于是谋之僚吏搜故府得遗材千枚取赢资以给其费鸠工庀役各举其任嵗月讫工民不与知像设礼器百用具修至于堂室廊序门牖垣墙皆一新之"
---
# Classical Chinese Punctuation
> 欢迎前往[我的github文言诗词项目页面探讨、加⭐️ ](https://github.com/raynardj/yuan), Please check the github repository for more about the [model, hit 🌟 if you like](https://github.com/raynardj/yuan)
* This model punctuates Classical(ancient) Chinese, you might feel strange about this task, but **many of my ancestors think writing articles without punctuation is brilliant idea** 🧐. What we have here are articles from books, letters or carved on stones where you can see no punctuation, just a long string of characters. As you can guess, NLP tech is usually a good tool to tackle this problem, and the entire pipeline can be borrowed from usual **NER task**.
* Since many articles are already punctuated, labeled data is more than abundant after some regex operations 📚. That's why this problem is pretty much low-hanging fruit.
* so I guess whoever is interested in this problem can read at least modern Chinese.
# Classical Chinese (Guwen) Punctuation Model
> Given a string of unpunctuated Classical Chinese, the model inserts the punctuation; more than twenty punctuation marks are currently supported.
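A minimal usage sketch with the token-classification pipeline; the pipeline returns per-token predictions, and stitching the predicted punctuation back into the text is a small post-processing step left to the caller:
```python
from transformers import pipeline

punctuate = pipeline("token-classification",
                     model="raynardj/classical-chinese-punctuation-guwen-biaodian")

# Unpunctuated Classical Chinese (the beginning of the widget example above).
print(punctuate("郡邑置夫子庙于学以嵗时释奠盖自唐贞观以来未之或改"))
```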
## Other resources for Classical Chinese and poetry
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-lingual search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Model translating modern Chinese into Classical Chinese ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Model translating Classical Chinese into modern Chinese; the input can be unpunctuated sentences 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Punctuation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry generation 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry) |
Kargan/DialoGPT-small-randombot | f096b26cf579dbcaa641933b8dd710629165ac7a | 2021-08-29T12:13:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kargan | null | Kargan/DialoGPT-small-randombot | 401 | null | transformers | 2,544 | ---
tags:
- conversational
---
# randombot DialoGPT Model |
bhadresh-savani/distilbert-base-uncased-go-emotion | b6878efeb9579da3a30627af1502446a99612acd | 2021-11-30T08:19:23.000Z | [
"pytorch",
"distilbert",
"en",
"dataset:go_emotions",
"transformers",
"text-classification",
"go-emotion",
"license:apache-2.0"
] | text-classification | false | bhadresh-savani | null | bhadresh-savani/distilbert-base-uncased-go-emotion | 401 | null | transformers | 2,545 | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- go-emotion
- pytorch
license: apache-2.0
datasets:
- go_emotions
metrics:
- Accuracy
---
# Distilbert-Base-Uncased-Go-Emotion
## Model description:
**Not working fine**
## Training Parameters:
```
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 15831
```
## TrainOutput:
```
'train_loss': 0.105500
```
## Evaluation Output:
```
'eval_accuracy_thresh': 0.962023913860321,
'eval_loss': 0.11090277135372162,
```
## Colab Notebook:
[Notebook](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb) |
matthewburke/korean_sentiment | 87ed5cf5710f0edd497b22057d37fbdb23a1ed12 | 2022-01-16T02:31:37.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | matthewburke | null | matthewburke/korean_sentiment | 401 | 1 | transformers | 2,546 | ```
from transformers import pipeline
classifier = pipeline("text-classification", model="matthewburke/korean_sentiment")
custom_tweet = "영화 재밌다."
preds = classifier(custom_tweet, return_all_scores=True)
is_positive = preds[0][1]['score'] > 0.5
```
|
textattack/bert-base-uncased-WNLI | 78c56d02e40a1896f7d025185d2f9cbe0bd26c08 | 2021-05-20T07:39:22.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/bert-base-uncased-WNLI | 400 | null | transformers | 2,547 | ## TextAttack Model Card
This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.5633802816901409, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
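A minimal sketch of scoring a WNLI-style sentence pair with this checkpoint (the mapping from label ids to entailment/not_entailment should be checked against the model config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-WNLI")
model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-WNLI")

premise = "The trophy doesn't fit into the suitcase because it is too large."
hypothesis = "The trophy is too large."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the two WNLI labels
```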
|
af1tang/personaGPT | 7b13971b89d90dd252d0b372e2c87d4b2d16ccb7 | 2021-09-07T02:44:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:1801.07243",
"transformers",
"conversational",
"license:gpl-3.0"
] | conversational | false | af1tang | null | af1tang/personaGPT | 399 | 7 | transformers | 2,548 | ---
tags:
- conversational
license: gpl-3.0
---
## A conversational agent with many personalities (PersonaGPT)
PersonaGPT is an open-domain conversational agent designed to do 2 tasks:
1. decoding _personalized_ responses based on input personality facts (the "persona" profile of the bot).
2. incorporating _turn-level goals_ into its responses through "action codes" (e.g., "talk about work", "ask about favorite music").
It builds on the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) pretrained model based on the [GPT-2](https://github.com/openai/gpt-2) architecture.
This model is trained on the [Persona-Chat](https://arxiv.org/pdf/1801.07243) dataset, with added special tokens to better distinguish between conversational history and personality traits for dyadic conversations. Furthermore, some active learning was used to train the model to do _controlled_ decoding using turn-level goals.
## Full Repo
Preprocessing, training and implementation details can be found in the [personaGPT repo](https://github.com/af1tang/personaGPT).
### How to Use
1. Load the model and define some helper functions.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("af1tang/personaGPT")
model = AutoModelForCausalLM.from_pretrained("af1tang/personaGPT")
if torch.cuda.is_available():
model = model.cuda()
## utility functions ##
flatten = lambda l: [item for sublist in l for item in sublist]
def to_data(x):
if torch.cuda.is_available():
x = x.cpu()
return x.data.numpy()
def to_var(x):
if not torch.is_tensor(x):
x = torch.Tensor(x)
if torch.cuda.is_available():
x = x.cuda()
return x
def display_dialog_history(dialog_hx):
for j, line in enumerate(dialog_hx):
msg = tokenizer.decode(line)
if j %2 == 0:
print(">> User: "+ msg)
else:
print("Bot: "+msg)
print()
def generate_next(bot_input_ids, do_sample=True, top_k=10, top_p=.92,
max_length=1000, pad_token=tokenizer.eos_token_id):
    full_msg = model.generate(bot_input_ids, do_sample=do_sample,
                              top_k=top_k, top_p=top_p,
                              max_length=max_length, pad_token_id=pad_token)
msg = to_data(full_msg.detach()[0])[bot_input_ids.shape[-1]:]
return msg
```
2. Give your chatbot partner a set of personalities.
```python
# get personality facts for conversation
personas = []
for i in range(3):
response = input(">> Fact %d: "%(i+1))+ tokenizer.eos_token
personas.append(response)
personas = tokenizer.encode(''.join(['<|p2|>'] + personas + ['<|sep|>'] + ['<|start|>']))
```
3. The first use of PersonaGPT is to do _personalized_ dialog generation. Use the following loop to interact with the model.
```python
# converse for 8 turns
dialog_hx = []
for step in range(8):
# encode the user input
user_inp = tokenizer.encode(input(">> User: ") + tokenizer.eos_token)
# append to the chat history
dialog_hx.append(user_inp)
# generated a response while limiting the total chat history to 1000 tokens,
bot_input_ids = to_var([personas + flatten(dialog_hx)]).long()
msg = generate_next(bot_input_ids)
dialog_hx.append(msg)
print("Bot: {}".format(tokenizer.decode(msg, skip_special_tokens=True)))
```
Example of personalized decoding:
| | Persona Facts |
|---|---|
| 1 | i am 64 years old but feel quite young. |
| 2 | i try to eat healthy but limit mcdonalds to once a week. |
| 3 | i regret working as a doctor for the last 20 years. |
| 4 | my secret hobby is making self-help youtube videos. |
| 5 | i've been to spain many times and wish i could go again. |
| Role | Response|
|---|---|
| Human | yo whats good fam? |
| Bot | hello, how are you doing? |
| Human | not bad what you doing today?|
| Bot | eating a lot healthier now that i'm in my 60s |
| Human | i'm more of a fast food guy |
| Bot | me too but i'm very healthy so i try to avoid it |
| Human | what do you do for fun? |
| Bot | i make videos for fun and eat very healthy |
| Human | nice what kind of videos? |
| Bot | i love making videos about self help stuff |
| Human | i think i need some help |
| Bot | what kind of help would you like to get |
| Human | i'm 21 and feel lost in life |
| Bot | i can relate, i'm only 65 |
4. The second use of PersonaGPT is to do _controlled response generation_. Use the following program to interact with the model.
```python
## available actions ##
action_space = [ 'ask about kids.', "ask about pets.", 'talk about work.',
'ask about marital status.', 'talk about travel.', 'ask about age and gender.',
'ask about hobbies.', 'ask about favorite food.', 'talk about movies.',
'talk about music.', 'talk about politics.']
# converse for 8 turns
dialog_hx = []
for step in range(8):
# choose an action
act = None
while act not in action_space:
display_dialog_history(dialog_hx)
print()
print(" actions: ")
for k,v in enumerate(action_space): print(k,v)
try:
act = action_space[int(input(" input [0-10]: " ))]
except:
act = None
print()
# format into prefix code
action_prefix = tokenizer.encode(''.join(['<|act|> '] + [act] + ['<|p1|>'] + [] + ['<|sep|>'] + ['<|start|>']))
bot_input_ids = to_var([action_prefix + flatten(dialog_hx)]).long()
# generate query conditioned on action
    msg = generate_next(bot_input_ids)  # uses the defaults for top_k, top_p and max_length
dialog_hx.append(msg)
# generate bot response
bot_input_ids = to_var([personas+ flatten(dialog_hx)]).long()
    msg = generate_next(bot_input_ids)
dialog_hx.append(msg)
display_dialog_history(dialog_hx)
```
Example of controlled response generation:
| | Bot 2's Persona Facts |
|---|---|
| 1 | i am a quiet engineer. |
| 2 | i'm single and am looking for love. |
| 3 | sadly, i don't have any relatable hobbies. |
| 4 | luckily, however, i am tall and athletic. |
| 5 | on friday nights, i watch re-runs of the simpsons alone. |
| Role | Action | Response |
|---|---|---|
| Bot 1 | talk about movies | hello do you like movies? |
| Bot 2 | - | i love the simpsons. how about you? |
| Bot 1 | talk about music | i like the walking dead. do you play an instrument? |
| Bot 2 | - | yes i play the violin. |
| Bot 1 | ask about marital status | are you married or single? |
| Bot 2 | - | i am single but i am looking for love. |
| Bot 1 | talk about work | my job is working in a factory. what's your occupation? |
| Bot 2 | - | engineer. i'm very quiet so no one hears me. |
| Bot 1 | talk about hobbies | do you have any hobbies? |
| Bot 2 | - | i watch reruns of the simpsons. |
| Bot 1 | ask about favorite food | what's your favorite food? |
| Bot 2 | - | i love pizza. how about yourself? |
| Bot 1 | ask about pets | i also love pizza. do you like animals? |
| Bot 2 | - | i have two dogs. what is your occupation? |
| Bot 1 | talk about work | i'm a factory worker. what's your dream job? |
| Bot 2 | - | i'd love to be a writer one day. | |
pedropei/aspect-level-certainty | 746a61d07fd783be87b474dfdc76594caf6c125d | 2021-09-29T05:44:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | pedropei | null | pedropei/aspect-level-certainty | 399 | null | transformers | 2,549 | Entry not found |
uer/roberta-medium-word-chinese-cluecorpussmall | f001eef673cf1ac1908a23944f8309ffbeb22bd9 | 2022-02-19T15:58:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/roberta-medium-word-chinese-cluecorpussmall | 399 | 1 | transformers | 2,550 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "最近一趟去北京的[MASK]几点发车"
---
# Chinese word-based RoBERTa Miniatures
## Model description
This is the set of 5 Chinese word-based RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
Most Chinese pre-trained weights are based on Chinese character. Compared with character-based models, word-based models are faster (because of shorter sequence length) and have better performance according to our experimental results. To this end, we released the 5 Chinese word-based RoBERTa models of different sizes. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details.
Notice that the output results of the Hosted inference API (right) are not properly displayed. When the predicted word has multiple characters, only the single word instead of the entire sentence is displayed. One can click **JSON Output** for the normal output results.
You can download the 5 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **word-based RoBERTa-Tiny** | [**L=2/H=128 (Tiny)**][2_128] |
| **word-based RoBERTa-Mini** | [**L=4/H=256 (Mini)**][4_256] |
| **word-based RoBERTa-Small** | [**L=4/H=512 (Small)**][4_512] |
| **word-based RoBERTa-Medium** | [**L=8/H=512 (Medium)**][8_512] |
| **word-based RoBERTa-Base** | [**L=12/H=768 (Base)**][12_768] |
Compared with [char-based models](https://huggingface.co/uer/chinese_roberta_L-2_H-128), word-based models achieve better results in most cases. Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny(char) | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| **RoBERTa-Tiny(word)** | **74.3(+2.0)** | **86.4** | **93.2** | **82.0** | **66.4** | **58.2** | **59.6** |
| RoBERTa-Mini(char) | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| **RoBERTa-Mini(word)** | **76.7(+1.0)** | **87.6** | **94.1** | **85.4** | **66.9** | **59.2** | **67.3** |
| RoBERTa-Small(char) | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| **RoBERTa-Small(word)** | **78.1(+1.3)** | **88.5** | **94.7** | **87.4** | **67.6** | **60.9** | **69.8** |
| RoBERTa-Medium(char) | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| **RoBERTa-Medium(word)** | **78.9(+1.1)** | **89.2** | **95.1** | **88.0** | **67.8** | **60.6** | **73.0** |
| RoBERTa-Base(char) | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
| **RoBERTa-Base(word)** | **80.2(+0.7)** | **90.3** | **95.7** | **89.4** | **68.0** | **61.5** | **76.8** |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of word-based RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-medium-word-chinese-cluecorpussmall')
>>> unmasker("[MASK]的首都是北京。")
[
{'sequence': '中国 的首都是北京。',
'score': 0.21525809168815613,
'token': 2873,
'token_str': '中国'},
{'sequence': '北京 的首都是北京。',
'score': 0.15194718539714813,
'token': 9502,
'token_str': '北京'},
{'sequence': '我们 的首都是北京。',
'score': 0.08854265511035919,
'token': 4215,
'token_str': '我们'},
{'sequence': '美国 的首都是北京。',
'score': 0.06808705627918243,
'token': 7810,
'token_str': '美国'},
{'sequence': '日本 的首都是北京。',
'score': 0.06071401759982109,
'token': 7788,
'token_str': '日本'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, BertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFBertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-medium-word-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-medium-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
Since BertTokenizer does not support sentencepiece, AlbertTokenizer is used here.
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on CLUECorpusSmall corpus:
```
>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
model_prefix='cluecorpussmall_spm',
vocab_size=100000,
max_sentence_length=1024,
max_sentencepiece_length=6,
user_defined_symbols=['[MASK]','[unused1]','[unused2]',
'[unused3]','[unused4]','[unused5]','[unused6]',
'[unused7]','[unused8]','[unused9]','[unused10]'],
pad_id=0,
pad_piece='[PAD]',
unk_id=1,
unk_piece='[UNK]',
bos_id=2,
bos_piece='[CLS]',
eos_id=3,
eos_piece='[SEP]',
train_extremely_large_corpus=True
)
```
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of word-based RoBERTa-Medium:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--spm_model_path models/cluecorpussmall_spm.model \
--dataset_path cluecorpussmall_word_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
--spm_model_path models/cluecorpussmall_spm.model \
--pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-word-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-word-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-word-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-word-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall |
GanjinZero/coder_all | 025b05d653a9e637cd708a68673f5edea92113a4 | 2022-04-25T02:20:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"transformers",
"biomedical",
"license:apache-2.0"
] | feature-extraction | false | GanjinZero | null | GanjinZero/coder_all | 398 | 1 | transformers | 2,551 | ---
language:
- en
license: apache-2.0
tags:
- bert
- biomedical
---
CODER: Knowledge-infused cross-lingual medical term embedding for term normalization.
Multilingual version.
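A minimal sketch of embedding medical terms with this checkpoint is shown below; the use of the `[CLS]` vector and of cosine similarity is an assumption made for illustration, so check the official CODER repository for the exact pooling used in the paper:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/coder_all")
model = AutoModel.from_pretrained("GanjinZero/coder_all")

terms = ["myocardial infarction", "heart attack"]
inputs = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assumption: take the [CLS] token as the term embedding
embeddings = outputs.last_hidden_state[:, 0]
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```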
```
@article{YUAN2022103983,
title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
journal = {Journal of Biomedical Informatics},
pages = {103983},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2021.103983},
url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
``` |
deepmind/vision-perceiver-fourier | 6900e2e595f397e6d7955db5234e13f2ad6ed67b | 2021-12-11T13:13:25.000Z | [
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"transformers",
"license:apache-2.0"
] | image-classification | false | deepmind | null | deepmind/vision-perceiver-fourier | 398 | 1 | transformers | 2,552 | ---
license: apache-2.0
tags:
datasets:
- imagenet
---
# Perceiver IO for vision (fixed Fourier position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationFourier
import requests
from PIL import Image
feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-fourier")
model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
inputs = feature_extractor(image, return_tensors="pt").pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# should print: Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-2 | a672eba7b1d471999106c564152c0364c7e25d35 | 2022-06-30T02:25:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-2 | 398 | 1 | transformers | 2,553 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-2
This model is a fine-tuned version of [gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1](https://huggingface.co/gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2691
- Wer: 0.0910
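The card provides no inference snippet; as a hedged sketch, the model can be queried through the ASR pipeline (the audio path is a placeholder, and decoding with the 5-gram language model additionally requires `pyctcdecode` and `kenlm`):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-2",
)
print(asr("path/to/singing_clip.wav"))  # placeholder audio file
```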
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2664 | 1.0 | 8969 | 0.3347 | 0.1645 |
| 0.2032 | 2.0 | 17938 | 0.3170 | 0.1662 |
| 0.1888 | 3.0 | 26907 | 0.3188 | 0.1317 |
| 0.1774 | 4.0 | 35876 | 0.2885 | 0.1195 |
| 0.0696 | 5.0 | 44845 | 0.2703 | 0.1105 |
| 0.254 | 6.0 | 53814 | 0.2817 | 0.0972 |
| 0.0464 | 7.0 | 62783 | 0.2691 | 0.0910 |
| 0.0426 | 8.0 | 71752 | 0.3033 | 0.0875 |
| 0.035 | 9.0 | 80721 | 0.3150 | 0.0841 |
| 0.0274 | 10.0 | 89690 | 0.3073 | 0.0816 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
mrm8488/bert2bert-spanish-question-generation | a56aa478647ae3a126cf20fc0a232ed74a2e7f9c | 2021-04-24T16:18:25.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"es",
"transformers",
"spanish",
"question",
"generation",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert2bert-spanish-question-generation | 397 | 5 | transformers | 2,554 |
---
language: es
tags:
- spanish
- question
- generation
widget:
- text: "Manuel vive en Murcia, España"
---
# Spanish Bert2Bert fine-tuned on SQuAD (es) for question generation |
sentence-transformers/msmarco-roberta-base-ance-firstp | 236d631f5a757abdb6ce28471648d964d9acb8f3 | 2022-06-15T23:03:19.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/msmarco-roberta-base-ance-firstp | 397 | null | sentence-transformers | 2,555 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# sentence-transformers/msmarco-roberta-base-ance-firstp
This is a port of the [ANCE FirstP Model](https://github.com/microsoft/ANCE/) to [sentence-transformers](https://www.SBERT.net): it maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-roberta-base-ance-firstp')
embeddings = model.encode(sentences)
print(embeddings)
```
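For retrieval-style usage, a small scoring sketch follows; scoring with the dot product is an assumption made here for illustration and is not stated in this card:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-roberta-base-ance-firstp')

query = "How many people live in London?"
passages = [
    "Around nine million people live in London.",
    "London is known for its financial district.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

print(util.dot_score(query_emb, passage_emb))  # dot-product similarity (assumed)
```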
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-roberta-base-ance-firstp)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): LayerNorm(
(norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
)
```
## Citing & Authors
Have a look at: [ANCE Model](https://github.com/microsoft/ANCE/) |
typeform/squeezebert-mnli | 040a407be7c792333d387d7c5d7ba2d3d1271256 | 2021-02-13T19:41:31.000Z | [
"pytorch",
"squeezebert",
"en",
"dataset:mulit_nli",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | typeform | null | typeform/squeezebert-mnli | 397 | 1 | transformers | 2,556 | ---
language: en
pipeline_tag: zero-shot-classification
tags:
- squeezebert
datasets:
- mulit_nli
metrics:
- accuracy
---
# SqueezeBERT |
facebook/data2vec-audio-large-960h | 27aba26eed532b86dcd0f17284a0307de4b51f39 | 2022-06-06T10:36:59.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | facebook | null | facebook/data2vec-audio-large-960h | 397 | 5 | transformers | 2,557 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: data2vec-audio-large-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.89
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.07
---
# Data2Vec-Audio-Large-960h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The large model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16Khz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/data2vec-audio-large-960h** on LibriSpeech's "clean" and "other" test data.
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
from jiwer import wer
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-960h").to("cuda")
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.89 | 4.07 | |
Unbabel/gec-t5_small | c958d53bfbce19c87342b69fc6bcaba7303d076f | 2021-09-27T11:27:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:clang-8",
"dataset:conll-14",
"dataset:conll-13",
"arxiv:2106.03830",
"transformers",
"grammatical error correction",
"text2text",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Unbabel | null | Unbabel/gec-t5_small | 396 | 9 | transformers | 2,558 | ---
language:
- en
tags:
- grammatical error correction
- text2text
- t5
license: apache-2.0
datasets:
- clang-8
- conll-14
- conll-13
metrics:
- f0.5
---
This model is an implementation of the paper [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf) from Google where they report the State of the art score in the task of Grammatical Error Correction (GEC).
We implement the T5-small version, for which the paper reports an F_0.5 score of 60.70.
To effectively use the "Hosted inference API", write "gec: [YOUR SENTENCE HERE]".
In order to use the model, look at the following snippet:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small")
tokenizer = T5Tokenizer.from_pretrained('t5-small')
sentence = "I like to swimming"
tokenized_sentence = tokenizer('gec: ' + sentence, max_length=128, truncation=True, padding='max_length', return_tensors='pt')
corrected_sentence = tokenizer.decode(
model.generate(
input_ids = tokenized_sentence.input_ids,
attention_mask = tokenized_sentence.attention_mask,
max_length=128,
num_beams=5,
early_stopping=True,
)[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
print(corrected_sentence) # -> I like swimming.
``` |
aryanbhosale/smartharrypotterbot | bd43c31a27c9610a9ec2e3125a460c14f4f60093 | 2022-03-04T15:41:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | aryanbhosale | null | aryanbhosale/smartharrypotterbot | 396 | null | transformers | 2,559 | |
albert-large-v1 | 17aeb59edfd7c7732fb96248a95f5c271a9fa28f | 2021-01-13T15:29:06.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | null | null | albert-large-v1 | 394 | null | transformers | 2,560 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT Large v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 24 repeating layers
- 128 embedding dimension
- 1024 hidden dimension
- 16 attention heads
- 17M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("Hello I'm a [MASK] model.")
[
   {
      "sequence":"[CLS] hello i'm a modeling model.[SEP]",
      "score":0.05816134437918663,
      "token":12807,
      "token_str":"▁modeling"
   },
   {
      "sequence":"[CLS] hello i'm a modelling model.[SEP]",
      "score":0.03748830780386925,
      "token":23089,
      "token_str":"▁modelling"
   },
   {
      "sequence":"[CLS] hello i'm a model model.[SEP]",
      "score":0.033725276589393616,
      "token":1061,
      "token_str":"▁model"
   },
   {
      "sequence":"[CLS] hello i'm a runway model.[SEP]",
      "score":0.017313428223133087,
      "token":8014,
      "token_str":"▁runway"
   },
   {
      "sequence":"[CLS] hello i'm a lingerie model.[SEP]",
      "score":0.014405295252799988,
      "token":29104,
      "token_str":"▁lingerie"
   }
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = AlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1')
model = TFAlbertModel.from_pretrained("albert-large-v1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-large-v1')
>>> unmasker("The man worked as a [MASK].")
[
   {
      "sequence":"[CLS] the man worked as a chauffeur.[SEP]",
      "score":0.029577180743217468,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the man worked as a janitor.[SEP]",
      "score":0.028865724802017212,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the man worked as a shoemaker.[SEP]",
      "score":0.02581118606030941,
      "token":29024,
      "token_str":"▁shoemaker"
   },
   {
      "sequence":"[CLS] the man worked as a blacksmith.[SEP]",
      "score":0.01849772222340107,
      "token":21238,
      "token_str":"▁blacksmith"
   },
   {
      "sequence":"[CLS] the man worked as a lawyer.[SEP]",
      "score":0.01820771023631096,
      "token":3672,
      "token_str":"▁lawyer"
   }
]
>>> unmasker("The woman worked as a [MASK].")
[
   {
      "sequence":"[CLS] the woman worked as a receptionist.[SEP]",
      "score":0.04604868218302727,
      "token":25331,
      "token_str":"▁receptionist"
   },
   {
      "sequence":"[CLS] the woman worked as a janitor.[SEP]",
      "score":0.028220869600772858,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the woman worked as a paramedic.[SEP]",
      "score":0.0261906236410141,
      "token":23386,
      "token_str":"▁paramedic"
   },
   {
      "sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
      "score":0.024797942489385605,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the woman worked as a waitress.[SEP]",
      "score":0.024124596267938614,
      "token":13678,
      "token_str":"▁waitress"
   }
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
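A minimal, purely illustrative token-level sketch of this 80/10/10 scheme (the real pretraining code works on tensors and uses n-gram spans rather than single tokens):
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Illustrative version of the masking scheme described above."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:       # 15% of tokens are selected
            labels.append(tok)
            r = random.random()
            if r < 0.8:                       # 80%: replace with [MASK]
                masked.append(mask_token)
            elif r < 0.9:                     # 10%: replace with a random token
                masked.append(random.choice(vocab))
            else:                             # 10%: keep the original token
                masked.append(tok)
        else:
            masked.append(tok)
            labels.append(None)               # not predicted, ignored by the loss
    return masked, labels

print(mask_tokens("the quick brown fox jumps".split(), vocab=["cat", "dog", "run"]))
```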
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/prophetnet-large-uncased-cnndm | e827ade610c20cb825e0734d9f9b95fd4558b9e1 | 2021-01-17T13:15:58.000Z | [
"pytorch",
"rust",
"prophetnet",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"arxiv:2001.04063",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | microsoft | null | microsoft/prophetnet-large-uncased-cnndm | 394 | null | transformers | 2,561 | ---
language: en
datasets:
- cnn_dailymail
---
## prophetnet-large-uncased-cnndm
Fine-tuned weights (converted from the [original fairseq version repo](https://github.com/microsoft/ProphetNet)) for [ProphetNet](https://arxiv.org/abs/2001.04063) on the summarization task CNN/DailyMail.
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.
ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the Fairseq version at the [github repo](https://github.com/microsoft/ProphetNet).
### Usage
```
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
ARTICLE_TO_SUMMARIZE = "USTC was founded in Beijing by the Chinese Academy of Sciences (CAS) in September 1958. The Director of CAS, Mr. Guo Moruo was appointed the first president of USTC. USTC's founding mission was to develop a high-level science and technology workforce, as deemed critical for development of China's economy, defense, and science and technology education. The establishment was hailed as \"A Major Event in the History of Chinese Education and Science.\" CAS has supported USTC by combining most of its institutes with the departments of the university. USTC is listed in the top 16 national key universities, becoming the youngest national key university.".lower()
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=100, return_tensors='pt')
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
# should give: 'ustc was founded in beijing by the chinese academy of sciences in 1958. [X_SEP] ustc\'s mission was to develop a high - level science and technology workforce. [X_SEP] the establishment was hailed as " a major event in the history of chinese education and science "'
```
Here, [X_SEP] is used as a special token to separate sentences.
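Since the generated summary uses [X_SEP] as a sentence separator, a small post-processing step (not part of the original card) can split it back into individual sentences:
```python
summary = ("ustc was founded in beijing by the chinese academy of sciences in 1958. [X_SEP] "
           "ustc's mission was to develop a high - level science and technology workforce.")
sentences = [s.strip() for s in summary.split("[X_SEP]") if s.strip()]
print(sentences)
```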
### Citation
```bibtex
@article{yan2020prophetnet,
title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
journal={arXiv preprint arXiv:2001.04063},
year={2020}
}
```
|
mrm8488/t5-base-finetuned-break_data | 07edfcf30acbd71b19c849029bd1cd7990fe6a37 | 2021-10-20T08:31:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:break_data",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-break_data | 394 | 1 | transformers | 2,562 | ---
language: en
datasets:
- break_data
widget:
- text: "paraphrase: The composer of Sands Theme plays what type of guitar?"
---
# T5-base fine-tuned on break_data / QDMR-high-level ❓➡️📋
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **QDMRs**.
## Details of T5 📜 ➡️ 📜
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (QDMRs) - Dataset 📚
Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| break_data | train | 17503 |
| break_data | valid | 3130 |
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is in the preprocessing of the ```inputs``` and ```targets``` we feed to the model. We treat it as a *paraphrasing task*.
## Model in Action 🚀
```python
# Tip: By now, install transformers from source
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data")
def get_decomposition(question):
input_text = "paraphrase: %s </s>" % question
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=32)
return tokenizer.decode(output[0])
question = "The composer of Sands Theme plays what type of guitar?"
get_decomposition(question)
# output: 'return Sands Theme ;return composer of #1 ;return guitar that #2 plays'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
ArthurZ/jukebox-1b-lyrics | a0fe6d15e4a6de89f6447b1ae4e0ca5d16647b63 | 2022-07-19T09:40:53.000Z | [
"pytorch",
"jukebox",
"arxiv:2005.00341",
"transformers",
"MusicGeneration"
] | null | false | ArthurZ | null | ArthurZ/jukebox-1b-lyrics | 394 | 3 | transformers | 2,563 | ---
tags:
- MusicGeneration
- jukebox
---
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Jukebox
## Overview
The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf)
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever.
This model proposes a generative music model which can produce minute-long samples conditioned on
artist, genre and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates
music with singing in the raw audio domain. We
tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes,
and modeling those using autoregressive Transformers. We show that the combined model at
scale can generate high-fidelity and diverse songs
with coherence up to multiple minutes. We can
condition on artist and genre to steer the musical
and vocal style, and on unaligned lyrics to make
the singing more controllable. We are releasing
thousands of non cherry-picked samples, along
with model weights and code.
Tips:
This model is very slow for now, and takes 18h to generate a minute-long audio sample.
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/openai/jukebox).
|
yanekyuk/bert-keyword-extractor | 9aff81f51327c2effea96fb1a928a2e58fd0cfe7 | 2022-06-04T00:51:39.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | yanekyuk | null | yanekyuk/bert-keyword-extractor | 394 | 1 | transformers | 2,564 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- en
widget:
- text: "Broadcom agreed to acquire cloud computing company VMware in a $61 billion (€57bn) cash-and stock deal, massively diversifying the chipmaker’s business and almost tripling its software-related revenue to about 45% of its total sales. By the numbers: VMware shareholders will receive either $142.50 in cash or 0.2520 of a Broadcom share for each VMware stock. Broadcom will also assume $8 billion of VMware's net debt."
- text: "Canadian Natural Resources Minister Jonathan Wilkinson told Bloomberg that the country could start supplying Europe with liquefied natural gas (LNG) in as soon as three years by converting an existing LNG import facility on Canada’s Atlantic coast into an export terminal. Bottom line: Wilkinson said what Canada cares about is that the new LNG facility uses a low-emission process for the gas and is capable of transitioning to exporting hydrogen later on."
- text: "Google is being investigated by the UK’s antitrust watchdog for its dominance in the \"ad tech stack,\" the set of services that facilitate the sale of online advertising space between advertisers and sellers. Google has strong positions at various levels of the ad tech stack and charges fees to both publishers and advertisers. A step back: UK Competition and Markets Authority has also been investigating whether Google and Meta colluded over ads, probing into the advertising agreement between the two companies, codenamed Jedi Blue."
- text: "Shares in Twitter closed 6.35% up after an SEC 13D filing revealed that Elon Musk pledged to put up an additional $6.25 billion of his own wealth to fund the $44 billion takeover deal, lifting the total to $33.5 billion from an initial $27.25 billion. In other news: Former Twitter CEO Jack Dorsey announced he's stepping down, but would stay on Twitter’s board \\“until his term expires at the 2022 meeting of stockholders.\""
model-index:
- name: bert-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-keyword-extractor
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1341
- Precision: 0.8565
- Recall: 0.8874
- Accuracy: 0.9738
- F1: 0.8717
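As a hedged usage sketch (not part of the original card), the model can be queried through the token-classification pipeline; the exact label names depend on the fine-tuning data and are not documented here:
```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="yanekyuk/bert-keyword-extractor",
    aggregation_strategy="simple",  # merge word pieces into whole keywords
)

text = ("Broadcom agreed to acquire cloud computing company VMware "
        "in a $61 billion cash-and-stock deal.")
print(extractor(text))
```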
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1688 | 1.0 | 1875 | 0.1233 | 0.7194 | 0.7738 | 0.9501 | 0.7456 |
| 0.1219 | 2.0 | 3750 | 0.1014 | 0.7724 | 0.8166 | 0.9606 | 0.7939 |
| 0.0834 | 3.0 | 5625 | 0.0977 | 0.8280 | 0.8263 | 0.9672 | 0.8272 |
| 0.0597 | 4.0 | 7500 | 0.0984 | 0.8304 | 0.8680 | 0.9704 | 0.8488 |
| 0.0419 | 5.0 | 9375 | 0.1042 | 0.8417 | 0.8687 | 0.9717 | 0.8550 |
| 0.0315 | 6.0 | 11250 | 0.1161 | 0.8520 | 0.8839 | 0.9729 | 0.8677 |
| 0.0229 | 7.0 | 13125 | 0.1282 | 0.8469 | 0.8939 | 0.9734 | 0.8698 |
| 0.0182 | 8.0 | 15000 | 0.1341 | 0.8565 | 0.8874 | 0.9738 | 0.8717 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Helsinki-NLP/opus-mt-jap-en | 797a0fff3eda86b8c059bd9b04943accf9e5b19a | 2021-09-10T13:53:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"jap",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-jap-en | 393 | null | transformers | 2,565 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-jap-en
* source languages: jap
* target languages: en
* OPUS readme: [jap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/jap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/jap-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/jap-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/jap-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.jap.en | 52.6 | 0.703 |
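This card ships no usage snippet; a minimal translation sketch with the Marian classes would look roughly like the following (the input sentence is a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-jap-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["わたしは学生です。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```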
|
cambridgeltl/mirror-roberta-base-sentence | 3689dff45b000d88e959008d0f64a3e420b142d4 | 2021-09-19T22:48:01.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2104.08027",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/mirror-roberta-base-sentence | 393 | null | transformers | 2,566 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
---
### cambridgeltl/mirror-roberta-base-sentence
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf). The model is trained with unlabelled raw sentences, using [roberta-base](https://huggingface.co/roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
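A minimal sketch of extracting the recommended `[CLS]`-before-pooler representation with plain Transformers (the example sentences are arbitrary):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/mirror-roberta-base-sentence")
model = AutoModel.from_pretrained("cambridgeltl/mirror-roberta-base-sentence")

sentences = ["A cat sits on the mat.", "There is a cat on the mat."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state[:, 0]  # [CLS] before the pooler, as recommended above
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```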
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
Geotrend/bert-base-fr-cased | ab40c6b5f1f03e9fa8d11492b293d3599233f746 | 2021-05-18T19:56:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"fr",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-fr-cased | 392 | null | transformers | 2,567 | ---
language: fr
datasets: wikipedia
license: apache-2.0
widget:
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-fr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
algoprog/mimics-tagging-roberta-base | dba82c959fdddf55ba244a96f242858490b4006b | 2022-02-24T01:14:23.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | algoprog | null | algoprog/mimics-tagging-roberta-base | 392 | null | transformers | 2,568 | Entry not found |
google/pegasus-gigaword | e485c6799ef3d0b03992d364b205e92f6cb67656 | 2021-07-21T21:25:28.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:gigaword",
"arxiv:1912.08777",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | google | null | google/pegasus-gigaword | 392 | null | transformers | 2,569 | ---
language: en
tags:
- summarization
datasets:
- gigaword
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loses this information.
- we update the BigPatent dataset to preserve casing; some format cleanings are also changed, please refer to the change in TFDS.
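The card above contains no inference example; a minimal, hedged sketch for headline-style summarization with this checkpoint follows (the input text is a placeholder):
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-gigaword"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = ("australian shares closed higher on monday as mining companies "
        "rallied on stronger iron ore prices .")
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```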
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Akito1961/DialoGPT-small-C3PO | 9bc75e4cbbdb460ec0d46986128d159d0ad1cddc | 2022-07-03T10:31:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Akito1961 | null | Akito1961/DialoGPT-small-C3PO | 392 | null | transformers | 2,570 | ---
tags:
- conversational
---
# C3PO DialoGPT Small |
Helsinki-NLP/opus-mt-it-de | db0cf30d9edbe6569fdcc25ff08c03bbcdf9b22f | 2021-09-10T13:52:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-de | 391 | null | transformers | 2,571 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-it-de
* source languages: it
* target languages: de
* OPUS readme: [it-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.it.de | 49.4 | 0.678 |
|
cointegrated/roberta-base-formality | 7787b6bde0a58bcb35902a8bbf29909f76b7eea2 | 2021-10-17T17:27:50.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cointegrated | null | cointegrated/roberta-base-formality | 391 | null | transformers | 2,572 | Entry not found |
facebook/convnext-base-224 | eda2970bc74154a2af92300316deecd49f72bea8 | 2022-02-26T12:16:30.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-base-224 | 391 | 2 | transformers | 2,573 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
facebook/wav2vec2-conformer-rope-large-960h-ft | 92f0f25475761ebe67e8e617c99450f04ff68c37 | 2022-06-15T08:12:26.000Z | [
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-conformer-rope-large-960h-ft | 391 | 3 | transformers | 2,574 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-conformer-rel-pos-large-960h-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.96
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.98
---
# Wav2Vec2-Conformer-Large-960h with Rotary Position Embeddings
Wav2Vec2 Conformer with rotary position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-conformer-rope-large-960h-ft** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.96 | 3.98 | |
Kirili4ik/ruDialoGpt3-medium-finetuned-telegram | 6625aee6169f5087a7dccc47ded8906c44478ecf | 2022-01-14T15:33:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"ru-RU",
"transformers",
"conversational"
] | conversational | false | Kirili4ik | null | Kirili4ik/ruDialoGpt3-medium-finetuned-telegram | 390 | 5 | transformers | 2,575 | ---
language:
- ru
- ru-RU
tags:
- conversational
---
### 📝 Description
DialoGPT trained on the Russian language and fine-tuned on my Telegram chat.
This model was created by [sberbank-ai](https://hf.co/sberbank-ai) and trained on Russian forums (see [Grossmend's model](https://hf.co/Grossmend/rudialogpt3_medium_based_on_gpt2)). You can find info about how it was trained on [habr](https://habr.com/ru/company/icl_services/blog/548244/) (in Russian). I created a **simple pipeline** and **fine-tuned** that model on my own **exported Telegram chat** (~30 MB of JSON). It is in fact very easy to export the data from Telegram and fine-tune a model, so I made a **Colab tutorial** for it: https://colab.research.google.com/drive/1fnAVURjyZRK9VQg1Co_-SKUQnRES8l9R?usp=sharing
⚠️ Due to the specifics of the data, the Hosted Inference API may not work properly ⚠️
🤗 To try it, use my [Spaces demo](https://huggingface.co/spaces/Kirili4ik/chat-with-Kirill) 🤗
### ❓ How to use with code
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Download model and tokenizer
checkpoint = "Kirili4ik/ruDialoGpt3-medium-finetuned-telegram"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()
# util function to get expected len after tokenizing
def get_length_param(text: str, tokenizer) -> str:
tokens_count = len(tokenizer.encode(text))
if tokens_count <= 15:
len_param = '1'
elif tokens_count <= 50:
len_param = '2'
elif tokens_count <= 256:
len_param = '3'
else:
len_param = '-'
return len_param
# util function to get next person number (1/0) for Machine or Human in the dialogue
def get_user_param(text: dict, machine_name_in_chat: str) -> str:
if text['from'] == machine_name_in_chat:
return '1' # machine
else:
return '0' # human
chat_history_ids = torch.zeros((1, 0), dtype=torch.int)
while True:
next_who = input("Who's phrase?\t") #input("H / G?") # Human or GPT
# In case Human
if next_who == "H" or next_who == "Human":
input_user = input("===> Human: ")
# encode the new user input, add parameters and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(f"|0|{get_length_param(input_user, tokenizer)}|" \
+ input_user + tokenizer.eos_token, return_tensors="pt")
# append the new user input tokens to the chat history
chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
if next_who == "G" or next_who == "GPT":
next_len = input("Phrase len? 1/2/3/-\t") #input("Exp. len?(-/1/2/3): ")
# encode the new user input, add parameters and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(f"|1|{next_len}|", return_tensors="pt")
# append the new user input tokens to the chat history
chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
# print(tokenizer.decode(chat_history_ids[-1])) # uncomment to see full gpt input
# save previous len
input_len = chat_history_ids.shape[-1]
# generated a response; PS you can read about the parameters at hf.co/blog/how-to-generate
chat_history_ids = model.generate(
chat_history_ids,
num_return_sequences=1, # use for more variants, but have to print [i]
max_length=512,
no_repeat_ngram_size=3,
do_sample=True,
top_k=50,
top_p=0.9,
temperature = 0.6, # 0 for greedy
mask_token_id=tokenizer.mask_token_id,
eos_token_id=tokenizer.eos_token_id,
unk_token_id=tokenizer.unk_token_id,
pad_token_id=tokenizer.pad_token_id,
device='cpu'
)
# pretty-print the last output tokens from the bot
print(f"===> GPT-3: {tokenizer.decode(chat_history_ids[:, input_len:][0], skip_special_tokens=True)}")
``` |
KoboldAI/fairseq-dense-6.7B | 634af26aec93133afc24b6759206dee8aada682f | 2022-02-01T22:51:24.000Z | [
"pytorch",
"xglm",
"text-generation",
"transformers"
] | text-generation | false | KoboldAI | null | KoboldAI/fairseq-dense-6.7B | 390 | 1 | transformers | 2,576 | Entry not found |
microsoft/xprophetnet-large-wiki100-cased-xglue-ntg | c55500c832564c818a332d38597dc542b468d9e1 | 2021-09-06T12:32:22.000Z | [
"pytorch",
"xlm-prophetnet",
"text2text-generation",
"arxiv:2001.04063",
"arxiv:2004.01401",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | microsoft | null | microsoft/xprophetnet-large-wiki100-cased-xglue-ntg | 390 | null | transformers | 2,577 | ## xprophetnet-large-wiki100-cased-xglue-ntg
Cross-lingual version of [ProphetNet](https://arxiv.org/abs/2001.04063), pretrained on the [wiki100 xGLUE dataset](https://arxiv.org/abs/2004.01401) and fine-tuned on the xGLUE cross-lingual News Titles Generation task.
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.
ProphetNet is able to predict more future tokens with an n-stream decoder. The original Fairseq implementation is available in the [github repo](https://github.com/microsoft/ProphetNet).
xProphetNet also serves as the baseline model for the xGLUE cross-lingual natural language generation tasks.
For xGLUE cross-lingual NLG tasks, xProphetNet is fine-tuned with English data, but performs inference on both English and other zero-shot language data.
### Usage
A quick usage is like:
```
from transformers import XLMProphetNetTokenizer, XLMProphetNetForConditionalGeneration, ProphetNetConfig
model = XLMProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg')
tokenizer = XLMProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg')
EN_SENTENCE = "Microsoft Corporation intends to officially end free support for the Windows 7 operating system after January 14, 2020, according to the official portal of the organization. From that day, users of this system will not be able to receive security updates, which could make their computers vulnerable to cyber attacks."
RU_SENTENCE = "орпорация Microsoft намерена официально прекратить бесплатную поддержку операционной системы Windows 7 после 14 января 2020 года, сообщается на официальном портале организации . С указанного дня пользователи этой системы не смогут получать обновления безопасности, из-за чего их компьютеры могут стать уязвимыми к кибератакам."
ZH_SENTENCE = "根据该组织的官方门户网站,微软公司打算在2020年1月14日之后正式终止对Windows 7操作系统的免费支持。从那时起,该系统的用户将无法接收安全更新,这可能会使他们的计算机容易受到网络攻击。"
inputs = tokenizer([EN_SENTENCE, RU_SENTENCE, ZH_SENTENCE], padding=True, max_length=256, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
# should give:
# 'Microsoft to end Windows 7 free support after January 14, 2020'
# 'Microsoft намерена прекратить бесплатную поддержку Windows 7 после 14 января 2020 года'
# '微软终止对Windows 7操作系统的免费支持'
```
### Citation
```bibtex
@article{yan2020prophetnet,
title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
journal={arXiv preprint arXiv:2001.04063},
year={2020}
}
```
|
ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli | e9c498acc3d1c26ccabfc9ff96004e0790d16753 | 2020-10-17T02:00:14.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | ynie | null | ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli | 390 | 1 | transformers | 2,578 | Entry not found |
ridhoalattqas/xlrs-best-lm | c950a4d0cf1703936698d32107b00088c38c284b | 2022-06-06T09:13:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ridhoalattqas | null | ridhoalattqas/xlrs-best-lm | 390 | 1 | transformers | 2,579 | ---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by Ridho
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 21.07
---
Indonesian XLSR Wav2Vec2 model for automatic speech recognition. |
flexudy/t5-small-wav2vec2-grammar-fixer | 5b4eca6292599a69adaa3a7b74435757f1d918a6 | 2021-02-16T01:56:40.000Z | [
"pytorch",
"tf"
] | null | false | flexudy | null | flexudy/t5-small-wav2vec2-grammar-fixer | 389 | 10 | null | 2,580 | # flexudy-pipe-question-generation-v2
After transcribing your audio with Wav2Vec2, you might be interested in a post-processor that restores casing and punctuation.
All training paragraphs had at most 128 tokens (separated by white spaces).
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
sent = """GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"""
input_text = "fix: { " + sent + " } </s>"
input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=256, truncation=True, add_special_tokens=True)
outputs = model.generate(
input_ids=input_ids,
max_length=256,
num_beams=4,
repetition_penalty=1.0,
length_penalty=1.0,
early_stopping=True
)
sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"{sentence}")
```
INPUT 1:
```
WHEN ARE YOU COMING TOMORROW I AM ASKING BECAUSE OF THE MONEY YOU OWE ME PLEASE GIVE IT TO ME I AM WAITING YOU HAVE BEEN AVOIDING ME SINCE TWO THOUSAND AND THREE
```
OUTPUT 1:
```
When are you coming tomorrow? I am asking because of the money you owe me, please give it to me. I am waiting. You have been avoiding me since 2003.
```
INPUT 2:
```
GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS
```
OUTPUT 2:
```
Going along Slushy Country Roads and speaking to Damp audiences in Draughty School rooms day after day for a fortnight, he'll have to put in an appearance at some place of worship on Sunday morning and he can come to us immediately afterwards.
```
I strongly recommend improving the performance via further fine-tuning or by training on more examples.
- Possible quick rule-based improvement: align the transcribed version and the generated version. If the similarity of two words (case-insensitive) varies by more than some threshold based on a similarity metric (e.g. Levenshtein), keep the transcribed word. |
brianveebee/DialoGPT-medium-bender | 78fcb5c5976dda427a9a4e1639f7e754bee63fb7 | 2022-07-10T00:01:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | brianveebee | null | brianveebee/DialoGPT-medium-bender | 389 | null | transformers | 2,581 | ---
tags:
- conversational
---
# Bender DialoGPT Model |
howey/roberta-large-mnli | fa26dd2471eee9cdb8ba8b179f21d26527104bee | 2021-06-03T14:21:19.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | howey | null | howey/roberta-large-mnli | 388 | null | transformers | 2,582 | Entry not found |
tosin/dialogpt_mwoz | e98e1e0c3b7661816187ddc5a9fd3b97f30db2d3 | 2021-12-02T09:32:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:multi_woz_v22",
"arxiv:2110.06273",
"transformers",
"conversational",
"license:cc-by-4.0"
] | conversational | false | tosin | null | tosin/dialogpt_mwoz | 388 | 2 | transformers | 2,583 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- multi_woz_v22
metrics:
- perplexity
widget:
- text: "I would like to have breakfast."
---
## DialoGPT_MWOZ
This is a fine-tuned model of DialoGPT (medium) on the MultiWOZ v2.2 dataset. It is intended to be used as a conversational system.
The dataset it is trained on is limited in scope, as it covers only certain domains such as restaurant, hotel, taxi, train, hospital and police.
The perplexities achieved on the test and validation sets are 10.25 and 10.71, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/microsoft/DialoGPT-medium).
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | I would like to have breakfast. |
|Bot | There are several options for Chinese in the city centre. Are you looking for a particular price range? |
|User | I'm looking for free food |
|Bot | There are 4 Chinese restaurants that are in the cheap price range. 3 in centre and one in north. |
|User | Not good enough |
|Bot | How about Charlie Chan? It is a Chinese restaurant in the centre of town. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work was accepted at the Northern Lights Deep Learning (NLDL) conference 2022. Arxiv paper: [https://arxiv.org/pdf/2110.06273.pdf](https://arxiv.org/pdf/2110.06273.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_mwoz")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_mwoz")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty-print the last output tokens from the bot
print("DialoGPT_MWOZ_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16 | d9bd2536b238ea27b08fb502dc8586933c160453 | 2021-09-24T07:46:55.000Z | [
"pytorch",
"jax",
"en",
"dataset:PubMed",
"transformers",
"bert",
"bluebert",
"license:cc0-1.0"
] | null | false | bionlp | null | bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16 | 387 | null | transformers | 2,584 | ---
language:
- en
tags:
- bert
- bluebert
license: cc0-1.0
datasets:
- PubMed
---
# BlueBert-Base, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts.
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
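For a quick start without the official scripts, a minimal sketch (not part of the original card) using the generic Transformers API to extract contextual embeddings might look like this; the model id is assumed from this card.
```python
import torch
from transformers import AutoTokenizer, AutoModel
# Model id assumed from this card
model_name = "bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
# Encode a PubMed-style sentence and inspect its contextual embeddings
inputs = tokenizer("aspirin inhibits platelet aggregation.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```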
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing special characters outside the `\x00`-`\x7F` (ASCII) range
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize import TreebankWordTokenizer
# `value` is a raw text string from the corpus
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo to make the data and codes publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
nateraw/vit-age-classifier | f5629497debbd1a543fa49e0a86cadd145d4947d | 2021-05-24T03:09:01.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:fairface",
"transformers"
] | image-classification | false | nateraw | null | nateraw/vit-age-classifier | 387 | 1 | transformers | 2,585 | ---
tags:
- image-classification
- pytorch
datasets:
- fairface
---
# ViT For Age Classification
A vision transformer finetuned to classify the age of a given person's face.
## Usage in Transformers
```python
import requests
from PIL import Image
from io import BytesIO
from transformers import ViTFeatureExtractor, ViTForImageClassification
# Get example image from official fairface repo + read it in as an image
r = requests.get('https://github.com/dchen236/FairFace/blob/master/detected_faces/race_Asian_face0.jpg?raw=true')
im = Image.open(BytesIO(r.content))
# Init model, transforms
model = ViTForImageClassification.from_pretrained('nateraw/vit-age-classifier')
transforms = ViTFeatureExtractor.from_pretrained('nateraw/vit-age-classifier')
# Transform our image and pass it through the model
inputs = transforms(im, return_tensors='pt')
output = model(**inputs)
# Predicted Class probabilities
proba = output.logits.softmax(1)
# Predicted Classes
preds = proba.argmax(1)
``` |
persiannlp/mt5-large-parsinlu-opus-translation_fa_en | 6ddd645d763bf11573b8764db5a8c66ace5e3bd5 | 2021-09-23T16:20:17.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"machine-translation",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-large-parsinlu-opus-translation_fa_en | 387 | null | transformers | 2,586 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
snunlp/KR-FinBert | 9ccc6ae2a35c65ed7156da9dc1fc5311f69abc3e | 2022-04-28T05:06:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"ko",
"transformers",
"autotrain_compatible"
] | fill-mask | false | snunlp | null | snunlp/KR-FinBert | 387 | null | transformers | 2,587 | ---
language:
- ko
---
# KR-FinBert & KR-FinBert-SC
Much progress has been made in the NLP (Natural Language Processing) field, with numerous studies showing that domain adaptation using small-scale corpus and fine-tuning with labeled data is effective for overall performance improvement.
We propose KR-FinBert for the financial domain by further pre-training it on a financial corpus and fine-tuning it for sentiment analysis. As many studies have shown, the performance improvement through domain adaptation was also clear when conducting the downstream task in this experiment.

## Data
The training data for this model expands on that of **[KR-BERT-MEDIUM](https://huggingface.co/snunlp/KR-Medium)**: texts from Korean Wikipedia, general news articles, legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). For the transfer learning, **corporate-related economic news articles from 72 media sources** such as the Financial Times, The Korean Economy Daily, etc. and **analyst reports from 16 securities companies** such as Kiwoom Securities, Samsung Securities, etc. are added. Included in the dataset are 440,067 news titles with their content and 11,237 analyst reports. **The total data size is about 13.22 GB.** For MLM training, we split the data line by line, and **the total number of lines is 6,379,315.**
KR-FinBert is trained for 5.5M steps with a maximum sequence length of 512, a training batch size of 32, and a learning rate of 5e-5, taking 67.48 hours to train on an NVIDIA TITAN XP.
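A minimal usage sketch (not part of the original card), assuming the checkpoint loads with the standard Transformers fill-mask pipeline:
```python
from transformers import pipeline
# Model id taken from this card; KR-FinBert is a BERT-style masked language model
fill_mask = pipeline("fill-mask", model="snunlp/KR-FinBert")
# Example Korean financial sentence with a masked token
for prediction in fill_mask("삼성전자의 주가가 [MASK]했다."):
    print(prediction["token_str"], prediction["score"])
```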
## Citation
```
@misc{kr-FinBert,
author = {Kim, Eunhee and Hyopil Shin},
title = {KR-FinBert: KR-BERT-Medium Adapted With Financial Domain Data},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://huggingface.co/snunlp/KR-FinBert}}
}
``` |
facebook/convnext-base-224-22k | d72d44edaee962fb299e85c445eab8c628f71973 | 2022-02-26T12:19:16.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-base-224-22k | 386 | null | transformers | 2,588 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image into one of the ImageNet-22k classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 22k ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
wannaphong/thaigpt-next-125m | c4a0ee2b6006cdcdc397ac1b77fc368dfd2d4277 | 2021-06-30T17:34:39.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | wannaphong | null | wannaphong/thaigpt-next-125m | 386 | 2 | transformers | 2,589 | # Thai GPT Next
It is a GPT-Neo model fine-tuned for the Thai language.
GitHub: https://github.com/wannaphong/thaigpt-next
**Datasets used to fine-tune this model**
- prachathai67k
- thaisum
- thai_toxicity_tweet
- wongnai reviews
- wisesight_sentiment
- TLC
- scb_mt_enth_2020 (Thai only)
- Thai wikipedia (date: 2021/06/20)
**Max Length:** 280
**Number of train lists**: 1,697,254 lists
**Number of training epochs**: 2
**training loss**: 0.285500
## Model
- thaigpt-next-125m is fine-tuned from the GPT-Neo-125M model.
## How to use
You can use it from Hugging Face or PyThaiNLP (in the future) for few-shot learning or text generation (not recommended).
thaigpt-next-125m on the Hugging Face Hub: https://huggingface.co/wannaphong/thaigpt-next-125m
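A minimal text-generation sketch (not part of the original card), assuming the standard Transformers pipeline API:
```python
from transformers import pipeline
# Model id taken from this card
generator = pipeline("text-generation", model="wannaphong/thaigpt-next-125m")
# Generate a short Thai continuation (sampling settings are illustrative only)
print(generator("ประเทศไทยเป็นประเทศที่", max_length=50, do_sample=True, top_p=0.95))
```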
## License
> Copyright 2021 Wannaphong Phatthiyaphaibun
>
> Licensed under the Apache License, Version 2.0 (the "License");
> you may not use this file except in compliance with the License.
> You may obtain a copy of the License at
>
> http://www.apache.org/licenses/LICENSE-2.0
>
> Unless required by applicable law or agreed to in writing, software
> distributed under the License is distributed on an "AS IS" BASIS,
> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> See the License for the specific language governing permissions and
> limitations under the License.
## Author
Wannaphong Phatthiyaphaibun |
Jeevesh8/goog_bert_ft_cola-4 | 9124463129de27e85076cf78d260d574dab42e9c | 2022-06-29T17:31:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-4 | 386 | null | transformers | 2,590 | Entry not found |
dsksd/bert-ko-small-minimal | aaf41aa807d3f753634696ee7bf82d126223ee7f | 2021-05-19T16:09:26.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | dsksd | null | dsksd/bert-ko-small-minimal | 385 | null | transformers | 2,591 | Entry not found |
peterchou/ernie-gram | c0f76855dbb2a2055d8d6a076b10398ff4ee170d | 2021-05-21T15:26:54.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | peterchou | null | peterchou/ernie-gram | 385 | null | transformers | 2,592 | Entry not found |
lgrobol/electra-minuscule-generator | 202bce1ff6f7e2ab920354319d735a2186606036 | 2022-01-02T18:50:39.000Z | [
"pytorch",
"electra",
"fill-mask",
"multilingual",
"transformers",
"testing",
"minuscule",
"license:cc0-1.0",
"autotrain_compatible"
] | fill-mask | false | lgrobol | null | lgrobol/electra-minuscule-generator | 384 | null | transformers | 2,593 | ---
language: multilingual
tags:
- electra
- testing
- minuscule
license: "cc0-1.0"
---
ELECTRA-minuscule-generator
===============================
A ridiculously small ELECTRA generator model for testing purposes.
**THIS MODEL HAS NOT BEEN TRAINED, DO NOT EXPECT ANYTHING OF IT.**
|
FinanceInc/finbert_fls | 354b28047f9c46cc5d6503ecbbf9c40c272874f4 | 2022-07-22T17:41:31.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"financial-text-analysis",
"forward-looking-statement"
] | text-classification | false | FinanceInc | null | FinanceInc/finbert_fls | 384 | null | transformers | 2,594 | ---
language: "en"
tags:
- financial-text-analysis
- forward-looking-statement
widget:
- text: "We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs. "
---
Forward-looking statements (FLS) inform investors of managers’ beliefs and opinions about a firm's future events or results. Identifying forward-looking statements in corporate reports can assist investors in financial analysis. FinBERT-FLS is a FinBERT model fine-tuned on 3,500 manually annotated sentences from the Management Discussion and Analysis section of annual reports of Russell 3000 firms.
**Input**: A financial text.
**Output**: Specific-FLS , Non-specific FLS, or Not-FLS.
# How to use
You can use this model with Transformers pipeline for forward-looking statement classification.
```python
# tested in transformers==4.18.0
from transformers import BertTokenizer, BertForSequenceClassification, pipeline
finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-fls',num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-fls')
nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer)
results = nlp('We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.')
print(results) # [{'label': 'Specific FLS', 'score': 0.77278733253479}]
```
Visit [FinBERT.AI](https://finbert.ai/) for more details on the recent development of FinBERT. |
Soumyajit1008/DialoGPT-small-harryPotterssen | b580c9bb2aee929622eeddbd2bec7821163e2408 | 2022-02-01T16:20:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Soumyajit1008 | null | Soumyajit1008/DialoGPT-small-harryPotterssen | 383 | null | transformers | 2,595 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
lgrobol/electra-minuscule-discriminator | f3cec9a74ebc921b3cc5a83b814805043b617d13 | 2021-12-30T23:12:11.000Z | [
"pytorch",
"electra",
"token-classification",
"multilingual",
"transformers",
"testing",
"minuscule",
"license:cc0-1.0",
"autotrain_compatible"
] | token-classification | false | lgrobol | null | lgrobol/electra-minuscule-discriminator | 383 | null | transformers | 2,596 | ---
language: multilingual
tags:
- electra
- testing
- minuscule
license: "cc0-1.0"
---
ELECTRA-minuscule-discriminator
===============================
A ridiculously small ELECTRA discriminator model for testing purposes.
**THIS MODEL HAS NOT BEEN TRAINED, DO NOT EXPECT ANYTHING OF IT.**
|
patrickvonplaten/bert2bert_cnn_daily_mail | 60cef1a6c33d35ea07afb476b561ecc483df1d6f | 2022-06-25T17:06:49.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | patrickvonplaten | null | patrickvonplaten/bert2bert_cnn_daily_mail | 383 | 1 | transformers | 2,597 | ---
language: en
license: apache-2.0
datasets:
- cnn_dailymail
tags:
- summarization
model-index:
- name: patrickvonplaten/bert2bert_cnn_daily_mail
results:
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 41.2808
verified: true
- name: ROUGE-2
type: rouge
value: 18.6853
verified: true
- name: ROUGE-L
type: rouge
value: 28.191
verified: true
- name: ROUGE-LSUM
type: rouge
value: 38.0871
verified: true
- name: loss
type: loss
value: 2.3451855182647705
verified: true
- name: gen_len
type: gen_len
value: 73.8332
verified: true
---
# Bert2Bert Summarization with 🤗 EncoderDecoder Framework
This model is a warm-started *BERT2BERT* model fine-tuned on the *CNN/Dailymail* summarization dataset.
The model achieves a ROUGE-2 score of **18.22** on *CNN/Dailymail*'s test dataset.
For more details on how the model was fine-tuned, please refer to
[this](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook.
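A minimal summarization sketch (not part of the original card), assuming the checkpoint loads through the generic EncoderDecoderModel class:
```python
from transformers import AutoTokenizer, EncoderDecoderModel
# Model id taken from this card
model_name = "patrickvonplaten/bert2bert_cnn_daily_mail"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)
article = "Replace this string with any news article you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
# Beam search settings here are illustrative, not the author's exact configuration
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```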
|
ChrisVCB/DialoGPT-medium-ej | a98ccda2c32fa1c573c5cad2fba50c199767b0d7 | 2021-12-09T03:22:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ChrisVCB | null | ChrisVCB/DialoGPT-medium-ej | 381 | null | transformers | 2,598 | ---
tags:
- conversational
---
# Eddie Jones DialoGPT Model |
Helsinki-NLP/opus-mt-en-uk | f2466c290a224ec84b2fcccee9c2891c1ab887ca | 2021-09-09T21:40:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-uk | 380 | 1 | transformers | 2,599 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-uk
* source languages: en
* target languages: uk
* OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.uk | 50.2 | 0.674 |
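A minimal usage sketch (not part of the original OPUS-MT card), assuming the Transformers translation pipeline supports this checkpoint:
```python
from transformers import pipeline
# English -> Ukrainian checkpoint; model id taken from this card
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-uk")
print(translator("How are you today?")[0]["translation_text"])
```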
|