modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
unicamp-dl/ptt5-large-portuguese-vocab | 6de512d9b277921a6f6d8f009752e1fb0059db56 | 2021-03-24T22:17:55.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"pt",
"dataset:brWaC",
"transformers",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/ptt5-large-portuguese-vocab | 107 | 1 | transformers | 4,500 | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, which improves T5's performance on Portuguese sentence-similarity and entailment tasks. It is available in three sizes (small, base and large) and with two vocabularies (Google's original T5 vocabulary and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, and bare model with a language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# TensorFlow (bare model, and bare model with a language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
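As a quick sanity check (not part of the original card), one might run a forward pass with the PyTorch model loaded above; the Portuguese sentences below are only illustrative placeholders:
```python
# Minimal sketch: tokenize a placeholder input/target pair and compute the
# seq2seq loss with the PyTorch model loaded above.
inputs = tokenizer("Texto de exemplo em português", return_tensors="pt")
labels = tokenizer("Texto de exemplo", return_tensors="pt").input_ids
outputs = model_pt(input_ids=inputs.input_ids, labels=labels)
print(float(outputs.loss))
```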
# Citation
If you use PTT5, please cite:
@article{ptt5_2020,
title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:2008.09144},
year={2020}
}
|
Shitao/msmarco_query_encoder | 0179561052faa00dcbd0e944350ea5f7552930f4 | 2022-04-24T17:01:57.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | Shitao | null | Shitao/msmarco_query_encoder | 107 | null | transformers | 4,501 | ---
license: apache-2.0
---
|
doc2query/msmarco-german-mt5-base-v1 | f0e1c137c34a80d3327b8349eb43b5df30bd9028 | 2022-04-29T09:03:18.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"de",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-german-mt5-base-v1 | 107 | 1 | transformers | 4,502 | ---
language: de
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
license: apache-2.0
---
# doc2query/msmarco-german-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-german-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
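Building on the snippet above, a rough sketch of the document-expansion use case could look like the following; the helper, the query count, and the simple concatenation strategy are assumptions for illustration, not part of the original model code:
```python
# Sketch: expand a passage with generated queries before indexing it in a
# BM25 system (Elasticsearch, OpenSearch, Lucene). Reuses `model`, `tokenizer`
# and `text` from the snippet above.
def expand_passage(para, n_queries=20):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=n_queries
        )
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # The concatenated text (passage + generated queries) is what gets indexed.
    return para + " " + " ".join(queries)

expanded_doc = expand_passage(text)
print(expanded_doc[:300])
```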
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
Ahmed9275/Vit-Cifar100 | 33677f4f54c5cf3b0057dd374e6de24dafdd67df | 2022-05-19T01:26:45.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:cifar100",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | Ahmed9275 | null | Ahmed9275/Vit-Cifar100 | 107 | 1 | transformers | 4,503 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- cifar100
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Cifar100
type: cifar100
args: cifar100
metrics:
- name: Accuracy
type: accuracy
value: 0.8985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Cifar100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4420
- Accuracy: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
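In the meantime, a minimal inference sketch (not part of the auto-generated card; the image path is a placeholder) might look like this:
```python
from transformers import pipeline

# Hedged sketch: classify an image with the fine-tuned checkpoint.
classifier = pipeline("image-classification", model="Ahmed9275/Vit-Cifar100")
print(classifier("path/to/your_image.png", top_k=5))  # placeholder path
```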
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.08 | 1.0 | 3125 | 0.6196 | 0.8262 |
| 0.3816 | 2.0 | 6250 | 0.5322 | 0.8555 |
| 0.1619 | 3.0 | 9375 | 0.4817 | 0.8765 |
| 0.0443 | 4.0 | 12500 | 0.4420 | 0.8985 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
inkoziev/rugpt_interpreter | cce0af88fa2bf3292a8cc057a088a824a3e04ce2 | 2022-06-19T12:02:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers",
"Text generation",
"license:unlicense"
] | text-generation | false | inkoziev | null | inkoziev/rugpt_interpreter | 107 | 3 | transformers | 4,504 | ---
tags: Text generation
license: unlicense
language: ru
widget:
- text: "- Как тебя зовут? - Джульетта Мао #"
- text: "- А живешь где? - В поясе астероидов #"
---
## The Incomplete Utterance Restoration task
A generative model based on [sberbank-ai/rugpt3large_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2) that restores the full text of dialogue utterances from their context.
Suppose the last two lines of a dialogue look like this:
```
- Как тебя зовут?
- Джульетта Мао
```
The model produces the full text of the last utterance, with anaphora, ellipses and so on resolved:
```
Меня зовут Джульетта Мао
```
The expanded utterance can then be processed with many classic NLP tools,
including regular expressions, intent classifiers and so on.
For more details on which situations the model handles and how, see the [end of this page](#handled-situations) and [this document](https://huggingface.co/inkoziev/rugpt_interpreter/blob/main/%D0%92%D0%BE%D1%81%D1%81%D1%82%D0%B0%D0%BD%D0%BE%D0%B2%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5%20%D0%BF%D0%BE%D0%BB%D0%BD%D1%8B%D1%85%20%D1%80%D0%B5%D0%BF%D0%BB%D0%B8%D0%BA%20%D0%B2%20%D0%B4%D0%B8%D0%B0%D0%BB%D0%BE%D0%B3%D0%B5.pdf).
## Usage example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_interpreter"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
# Feed the model the last 2-3 utterances of the dialogue. Each utterance is on its own line and starts with the "-" character.
# Add the "#" character at the end.
input_text = """<s>- Как тебя зовут?
- Джульетта Мао #"""
#input_text = """<s>- Что Предтечи забрали у Предшественников?
#- Они узурпировали у них Мантию — защиту всего живого в галактике #"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(text)
```
## Input format
The model takes as input the tokenization of a text composed of the last 2 or 3 utterances of the dialogue.
The first token must be ```<s>```.
Each utterance must start with the prefix "- ".
Utterances are separated by a newline character.
The substring " #" is appended to the last utterance, i.e. the one to be expanded.
```
<s>- Как тебя зовут?
- Джульетта Мао #
```
## Handled situations
The model is being developed for use in a [chatbot](https://github.com/Koziev/chatbot). It supports a number of
situations that are typical of chit-chat, listed below.
In the examples, the text after the ⇒ symbol is the reference expanded utterance that the model should generate.
[Ellipsis](https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%BB%D0%B8%D0%BF%D1%81%D0%B8%D1%81):
```
- Как же тебя зовут, а?
- Меня – Стас, а тебя? ⇒ Меня зовут Стас. Как тебя зовут?
```
In rare cases even the head word of a phrase may be omitted; the model will try to restore it:
```
- Мама, купи мне собаку.
- А ты будешь за ней ухаживать?
- А ты мне здоровую купи. ⇒ купи мне здоровую собаку
```
[Anaphora](https://ru.wikipedia.org/wiki/%D0%90%D0%BD%D0%B0%D1%84%D0%BE%D1%80%D0%B0_(%D0%BB%D0%B8%D0%BD%D0%B3%D0%B2%D0%B8%D1%81%D1%82%D0%B8%D0%BA%D0%B0)):
```
- Ты собак любишь?
- Не люблю я их ⇒ я не люблю собак
```
Sometimes expanding the utterance fully requires common-sense reasoning; for this the model relies
on the statistics of its pretraining data:
```
- Мне на голову упала коробка.
- А что в ней было? ⇒ что было в коробке|голове?
```
[Gapping](https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%BB%D0%B8%D0%BF%D1%81%D0%B8%D1%81#%D0%93%D1%8D%D0%BF%D0%BF%D0%B8%D0%BD%D0%B3_(en:Gapping)):
```
- Ты кошек любишь?
- Их – нет ⇒ я не люблю кошек
```
Complex gapping:
```
- В 25 лет вы получаете пенсию?
- Не я - отец. ⇒ Я не получаю пенсию. Отец получает пенсию
```
Restoring an optional pronominal subject (see [pro drop](https://en.wikipedia.org/wiki/Pro-drop_language)):
```
- Согласна?
- Да ⇒ я согласна
```
The model tries to "read between the lines" and restore implied parts of the utterance:
```
- Ты разве ещё не ел?
- Тебя ждал ⇒ я еще не ел. я ждал тебя.
```
Negation in dialogue:
```
- Я не прав?
- Нет. (Да.) ⇒ ты не прав
```
Interpretation is not limited to copying words from the context; sometimes the model has to add words associated with the situation:
```
- Как прошли выходные?
- В Простоквашино ездила... ⇒ я на выходных ездила в Простоквашино
```
All of the above can occur in various combinations at the same time:
```
- Где твой кот?
- Жена к ветеринару повезла. ⇒ жена повезла моего кота к ветеринару
- Заболел? ⇒ твой кот заболел?
```
Complex sentences:
```
- Я сварила суп, иди ешь.
- Из чего? ⇒ из чего ты сварила суп?
```
The subject is replaced when this improves the comprehensibility of the utterance:
```
- Как себя чувствует твой попугай?
- Бедняга умер... ⇒ мой попугай умер
```
Sometimes only an adverb is left of the utterance, and the model has to restore everything else:
```
- Девушка, а Вы животных любите?
- Очень! ⇒ я очень люблю животных
```
The form of the predicate may sometimes change for the sake of agreement:
```
- Рабинович, как думаете, что будет делать правительство, если завтра население разом бросит курить?
- Таки, поднимут акцизы на алкоголь... ⇒ правительно поднимет акцизы на алкоголь, если завтра население разом бросит курить
```
In all cases the model provides no information about where it took the substitution
used for replacement or filling in the output text. The output is simply the text of the utterance
as a person might have said it, without any additional references or markers:
```
- У тебя брат есть?
- Да, есть
- Где он работает? ⇒ Где работает твой брат?
```
In this example the model will not tell us where it took the substitution “твой брат” (“your brother”) for the pronoun “он” (“he”).
This greatly simplifies the manual annotation of the training corpus and does not really get in the way of a dialogue system.
In many cases the model normalizes the word order to a more or less canonical one. More precisely, it tries
to produce text with the word order that native speakers usually use in the given dialogue context.
If Russian speakers prefer OVS over the formal SVO, the model will output OVS:
```
- У тебя штрафы были?
- Нет, их никогда не было ⇒ у меня никогда не было штрафов
```
The model usually inserts personal pronouns, even when the verb form would allow omitting them:
```
- Жару любишь?
- Ненавижу ее ⇒ я ненавижу жару
```
Compound answers are split into separate clauses so that a downstream pipeline can process them one by one:
```
- Тебя как зовут?
- Кортана, а тебя как? ⇒ Меня зовут Кортана. Как тебя зовут?
```
The context can consist of the last 2 or 3 utterances. Longer-range dependencies are too rare to justify complicating the dataset for them.
Moreover, in many cases it is enough to apply the model recursively: instead of the original dialogue utterances,
feed in the result of their expansion by the model:
```
- Где живешь?
- В Шанхае ⇒ я живу в Шанхае
- Давно? ⇒ ты давно живешь в Шанхае?
- Два года уже ⇒ я уже два года живу в Шанхае
- Как там погода? ⇒ как там погода в Шанхае?
```
One last note: the model is trained **only** on dialogue data with short utterances (chit-chat).
It is practically unable to resolve anaphora in literary texts, although this is not a limitation of the model itself
but a property of the training dataset.
### Citation:
```
@MISC{rugpt_interpreter,
author = {Ilya Koziev},
title = {Incomplete Utterance Restoration in Russian Chit-Chat conversations},
url = {https://huggingface.co/inkoziev/rugpt_interpreter},
year = 2022
}
```
|
keithhon/paraphrase-multilingual-MiniLM-L12-v2 | 95e9642a135281d46946a5fb542732eadfe218ac | 2022-07-25T06:50:16.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"multilingual",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | keithhon | null | keithhon/paraphrase-multilingual-MiniLM-L12-v2 | 107 | null | sentence-transformers | 4,505 | ---
pipeline_tag: sentence-similarity
language: multilingual
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
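A common follow-up (not shown in the original card, and assuming a reasonably recent sentence-transformers release) is to compare the embeddings with cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util

# Sketch: score the similarity of two sentences using the embeddings above.
model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
emb = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))
```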
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
GroNLP/gpt2-small-italian-embeddings | 471966c990fe69c6eb1d776791bf5aa89ac31f77 | 2021-05-21T09:57:57.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"it",
"arxiv:2012.05628",
"transformers",
"adaption",
"recycled",
"gpt2-small"
] | text-generation | false | GroNLP | null | GroNLP/gpt2-small-italian-embeddings | 106 | null | transformers | 4,506 | ---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small, adapted lexical embeddings)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian-embeddings")
```
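Calling the pipeline then works as usual; the Italian prompt below is only an illustrative placeholder, not an official example:
```python
# Illustrative only: generate a short Italian continuation with the pipeline above.
print(pipe("La pizza è", max_length=30, do_sample=True)[0]["generated_text"])
```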
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c | 378b6a5483d6a3eaedd08a41b11cc61e7ec11896 | 2021-12-22T18:36:19.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2104.07179",
"arxiv:2106.09449",
"transformers",
"zero-shot-classification"
] | text-classification | false | MoritzLaurer | null | MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c | 106 | null | transformers | 4,507 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."
---
# MiniLM-L6-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment" to create more training data.
The base model is MiniLM-L6 from Microsoft, which is very fast but a bit less accurate than larger models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
device = "cuda:0" if torch.cuda.is_available() else "cpu"  # define the device before using it
model = model.to(device)
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
### Training procedure
MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=3, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
| mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c |
|:---:|:---:|:---:|:---:|:---:|
| (to upload) | | | | |
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m.laurer{at}vu.nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) |
dbmdz/convbert-base-german-europeana-cased | f01a50fead9205fa24189c41bfbc7c4a2d299881 | 2021-02-06T20:38:13.000Z | [
"pytorch",
"tf",
"convbert",
"feature-extraction",
"de",
"transformers",
"historic german",
"license:mit"
] | feature-extraction | false | dbmdz | null | dbmdz/convbert-base-german-europeana-cased | 106 | 1 | transformers | 4,508 | ---
language: de
license: mit
tags:
- "historic german"
---
# 🤗 + 📚 dbmdz ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a German Europeana ConvBERT model 🎉
# German Europeana ConvBERT
We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/)
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 4.3 our German Europeana ConvBERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/convbert-base-german-europeana-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
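From there, contextual embeddings can be extracted as with any encoder model; the historic-German sentence below is a made-up placeholder, not from the training data:
```python
import torch

# Sketch: extract contextual token embeddings with the model loaded above.
inputs = tokenizer("Berlin ist die Hauptstadt von Preußen.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```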
# Huggingface model hub
All other German Europeana models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion
[here](https://github.com/stefan-it/europeana-bert/discussions) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗 |
meedan/indian-sbert | df5f9a82da83c8ff832ba17dc6b7979206d6feed | 2021-02-22T22:37:11.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | meedan | null | meedan/indian-sbert | 106 | null | transformers | 4,509 | Entry not found |
phiyodr/roberta-large-finetuned-squad2 | 958e58c18ed6c9ed583bcab0b9bc72d67a08430c | 2021-05-20T19:27:52.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"en",
"dataset:squad2",
"arxiv:1907.11692",
"arxiv:1806.03822",
"transformers",
"autotrain_compatible"
] | question-answering | false | phiyodr | null | phiyodr/roberta-large-finetuned-squad2 | 106 | null | transformers | 4,510 | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winkelmann create?"
context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---
# roberta-large-finetuned-squad2
## Model description
This model is based on [roberta-large](https://huggingface.co/roberta-large) and was fine-tuned on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/). You can find the corresponding papers [here (model)](https://arxiv.org/abs/1907.11692) and [here (data)](https://arxiv.org/abs/1806.03822).
## How to use
```python
from transformers.pipelines import pipeline
model_name = "phiyodr/roberta-large-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'What discipline did Winkelmann create?',
'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. '
}
nlp(inputs)
```
## Training procedure
```
{
"base_model": "roberta-large",
"do_lower_case": True,
"learning_rate": 3e-5,
"num_train_epochs": 4,
"max_seq_length": 384,
"doc_stride": 128,
"max_query_length": 64,
"batch_size": 96
}
```
## Eval results
- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))
```
{
"exact": 84.38473848227069,
"f1": 87.89711571225455,
"total": 11873,
"HasAns_exact": 80.9885290148448,
"HasAns_f1": 88.02335608157898,
"HasAns_total": 5928,
"NoAns_exact": 87.77123633305298,
"NoAns_f1": 87.77123633305298,
"NoAns_total": 5945
}
```
|
projecte-aina/bart-base-ca-casum | 09bc6142129b80f3ea2b7e5e38a9976e2b41eaff | 2022-07-25T06:48:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ca",
"dataset:projecte-aina/casum",
"arxiv:2202.06871",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | projecte-aina | null | projecte-aina/bart-base-ca-casum | 106 | null | transformers | 4,511 | ---
language: "ca"
license: mit
tags:
- summarization
widget:
- text: "El projecte AINA generarà els recursos digitals i lingüístics necessaris per facilitar el desenvolupament d’aplicacions basades en la intel·ligència artificial i les tecnologies de la llengua, com ara els assistents de veu, els traductors automàtics o els agents conversacionals en català. L’objectiu últim és que la ciutadania pugui participar en català en el món digital al mateix nivell que els parlants d’una llengua global, com ara l’anglès, i evitar així l’extinció digital de la llengua. El primer recurs generat és el corpus del català per entrenar els algoritmes d’intel·ligència artificial (IA), el més gran creat fins al moment, amb 1.770 milions de metadades associades a paraules. El proper pas serà generar els models de la llengua, models de la parla i models de traducció utilitzant xarxes neuronals multicapa, perquè les empreses que creen aplicacions basades en intel·ligència artificial (IA), com ara assistents de veu, traductors automàtics, agents conversacionals, etc., puguin fer-ho fàcilment en català."
- text: "El Govern vol que el català també sigui una llengua útil per a la tecnologia i per comunciar-se amb les màquines. Per això, el projecte AINA, impulsat pel Departament de la Vicepresidència, Polítiques Digitals i Territori en col·laboració amb el Barcelona Supercomputing Center (BSC), llançarà el 17 de febrer una campanya de captació de veus per generar el primer corpus o \"diccionari\" de veu del català amb l'objectiu de fer que la tecnologia parli i entengui el català i la ciutadania s'hi pugui relacionar amb aquesta llengua. Per a l'executiu, aquest projecte és d'una \"importància cabdal\", com ha detallat el vicepresident, Jordi Puigneró, també per reforçar la llengua catalana a Internet. El pressupost que s'hi destinarà aquest any és de tres milions d'euros. Per això, amb el lema \"La nostra llengua és la teva veu\", convida la ciutadania de totes les variants dialectals del català ha compartir la seva veu mitjançant la lectura d'uns textos. La fita que s'ha marcat AINA per aquest any és la creació de la primera versió d'aquest diccionari de veus en català, amb \"com més hores de veu i com més diverses millor\". El Govern confia en una bona resposta a la campanya, que arrencarà a partir de demà, i que es desplegarà per tot el territori de parla catalana, per comptar amb diverses variants dialectals. No hi ha limitació d'edat per a qui vulgui participar, i és important que la gent que participi es registri per obtenir més informació sobre genere, edat i distribució geogràfica. Ara com ara hi ha 1.000 hores de veu i el repte és aconseguir arribar a les 2.000 (amb transcripció) aquest any. El vicepresident i conseller de Polítiques Digitals, Jordi Puigneró, ha recordat que fa un any es va donar el tret de sortida al projecte AINA, una aposta per a l'ús del català en l'àmbit tecnològic. El projecte implica un impuls del català en les eines digitals i per \"conquerir nous territoris\", que passen per noves plataformes i nous dispositius. També és un projecte per \"garantir drets\". \"Els catalanes tenim dret a poder relacionar-nos en català amb les maquines i evitar haver de canviar de llengua a l'hora de parlar amb les maquines\", ha remarcat Puigneró. Un altre objectiu d'aquest projecte passa per \"generar talent digital\" i un ecosistema en l'àmbit de la intel·ligència artificial. \"Ens toca ser un país digital\", ha insistit Puigneró. I per què AINA? \"La filla de la Norma, que porta el nom de la seva àvia, Aina Moll, la primera directora de política lingüística de la Generalitat\", ha explicat el vicepresident. Per tot plegat, aquest dimecres arrenca la campanya de captació de veus. \"Volem socialitzar AINA cap a la ciutadania i que molta gent vulgui ser la seva parella lingüística i pugui aprendre el català\", ha dit Puigneró, que ha demanat que aquesta sigui una tasca de tots. El projecte, a dia d'avui, ja coneix la sintaxis del català. En aquesta nova fase, a partir de demà, també ha de conèixer el lèxic i la semàntica, i tota la part oral de la llengua catalana. \"Si ja tenim la columna vertebral i l'esquelet, ara hem de construir la seva musculatura\", ha apuntat el vicepresident. La campanya es farà a través d'una web que permetrà que qualsevol persona pugui ensenyar a AINA a aprendre català. I com es pot fer? És senzill. A partir que arrenqui la campanya, qui estigui interessat en col·laborar haurà d'entrar a www.projecteaina.cat i anar a l'espai corresponent. 
Un cop allà, haurà de destinar una estona a llegir frases que li proposarà la plataforma i podrà validar també frases d'altres persones."
datasets:
- projecte-aina/casum
---
## BART-Ca fine-tuned on the CaSum dataset for summarization
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The [BART-ca](https://huggingface.co/projecte-aina/bart-base-ca) model has been fine-tuned on summarization with the [CaSum](https://huggingface.co/datasets/projecte-aina/casum) dataset that has been created along with the model. We also evaluate on an out-of-distribution dataset, [VilaSum](https://huggingface.co/datasets/projecte-aina/vilasum).
The model has been fine-tuned on news articles and is expected to work best with that type of text.
## Intended Uses and Limitations
You can use this model for text summarization.
## How to use
Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="projecte-aina/bart-base-ca-casum")
ARTICLE = """"El projecte AINA generarà els recursos digitals i lingüístics necessaris per facilitar el desenvolupament d’aplicacions basades en la intel·ligència artificial i les tecnologies de la llengua, com ara els assistents de veu, els traductors automàtics o els agents conversacionals en català. L’objectiu últim és que la ciutadania pugui participar en català en el món digital al mateix nivell que els parlants d’una llengua global, com ara l’anglès, i evitar així l’extinció digital de la llengua. El primer recurs generat és el corpus del català per entrenar els algoritmes d’intel·ligència artificial (IA), el més gran creat fins al moment, amb 1.770 milions de metadades associades a paraules. El proper pas serà generar els models de la llengua, models de la parla i models de traducció utilitzant xarxes neuronals multicapa, perquè les empreses que creen aplicacions basades en intel·ligència artificial (IA), com ara assistents de veu, traductors automàtics, agents conversacionals, etc., puguin fer-ho fàcilment en català."""
print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
>>> [{'summary_text': 'El projecte AINA generarà els recursos digitals i lingüístics necessaris per al desenvolupament d’aplicacions basades en la intel·ligència artificial en català’'}]
```
## Training
### Training Data
As training data, we used the [CaSum](https://huggingface.co/datasets/projecte-aina/casum) dataset extracted from a newswire corpus crawled from the [Catalan News Agency](https://www.acn.cat/).
### Training Procedure
#### Tokenization
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) with a vocabulary size of 51,200 tokens.
#### Hyperparameters
The fine-tuning hyperparameters were taken from the fairseq's [Fine-tuning BART on CNN-Dailymail summarization task](https://github.com/facebookresearch/fairseq/blob/main/examples/bart/README.summarization.md) example.
## Evaluation
### Variable and Metrics
We use Rouge-1 and Rouge-L for evaluation on two different test sets: the [CaSum](https://huggingface.co/datasets/projecte-aina/casum) test set and an out of distribution test set, [VilaSum](https://huggingface.co/datasets/projecte-aina/vilasum).
### Evaluation Results
Below are the evaluation results on the summarization task compared with the multilingual mBART and the Catalan [NASCA](https://huggingface.co/ELiRF/NASCA) on two different test sets: [CaSum](https://huggingface.co/datasets/projecte-aina/casum) and [VilaSum](https://huggingface.co/datasets/projecte-aina/vilasum).
|Test set | Model | Rouge-1 | Rouge-L |
| ------------|:-------------:| -----:|:------|
|CaSum | BART-Ca | 41.39 | 36.14 |
| | NASCA | 24.42 | 19.89 |
| | mBART | **43.95** | **38.11** |
|VilaSum | BART-Ca | **35.04** | **29.70** |
| | NASCA | 23.18 | 19.09 |
| | mBART | 33.17 | 27.52 |
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Funding
This work was funded by MT4All CEF project and the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] |
superb/hubert-base-superb-sid | fd0c9962f8a01e274b9a7996e007775293d1d77e | 2021-11-04T16:03:27.000Z | [
"pytorch",
"hubert",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/hubert-base-superb-sid | 106 | null | transformers | 4,512 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
- audio-classification
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
license: apache-2.0
---
# Hubert-Base for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Hubert for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
classification, where speakers are in the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.8142` | `0.8071` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
zhiheng-huang/bert-base-uncased-embedding-relative-key | 62d01e1af97f972b2954e691466d84442f3d3659 | 2021-05-20T09:46:58.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhiheng-huang | null | zhiheng-huang/bert-base-uncased-embedding-relative-key | 106 | null | transformers | 4,513 | Entry not found |
doc2query/msmarco-japanese-mt5-base-v1 | 5effea381731300f68a617eec753d82e34f2c096 | 2022-04-29T12:05:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-japanese-mt5-base-v1 | 106 | null | transformers | 4,514 | ---
language: ja
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python(パイソン)はインタープリタ型の高水準汎用プログラミング言語である。グイド・ヴァン・ロッサムにより創り出され、1991年に最初にリリースされたPythonの設計哲学は、有意なホワイトスペース(オフサイドルール)の顕著な使用によってコードの可読性を重視している。その言語構成とオブジェクト指向のアプローチは、プログラマが小規模なプロジェクトから大規模なプロジェクトまで、明確で論理的なコードを書くのを支援することを目的としている。"
license: apache-2.0
---
# doc2query/msmarco-japanese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-japanese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python(パイソン)はインタープリタ型の高水準汎用プログラミング言語である。グイド・ヴァン・ロッサムにより創り出され、1991年に最初にリリースされたPythonの設計哲学は、有意なホワイトスペース(オフサイドルール)の顕著な使用によってコードの可読性を重視している。その言語構成とオブジェクト指向のアプローチは、プログラマが小規模なプロジェクトから大規模なプロジェクトまで、明確で論理的なコードを書くのを支援することを目的としている。"
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
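Building on the snippet above, a rough sketch of the training-data-generation use case could look like the following; the helper and the simple pairing scheme are assumptions for illustration, not the official GPL tooling:
```python
# Sketch: turn sampled queries into (query, passage) pairs, e.g. as raw material
# for GPL-style training. Reuses `model`, `tokenizer` and `text` from above.
def make_training_pairs(para, n_queries=5):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=n_queries
        )
    return [(tokenizer.decode(o, skip_special_tokens=True), para) for o in outputs]

pairs = make_training_pairs(text)
print(pairs[0])
```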
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated with up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
autoevaluate/binary-classification | 5d6b168b009889a2eedbc858ecd212de4a7412c7 | 2022-06-21T13:42:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | autoevaluate | null | autoevaluate/binary-classification | 106 | 1 | transformers | 4,515 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: autoevaluate-binary-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
verified: true
- name: Precision
type: precision
value: 0.8898678414096917
verified: true
- name: Recall
type: recall
value: 0.9099099099099099
verified: true
- name: AUC
type: auc
value: 0.967247621453229
verified: true
- name: F1
type: f1
value: 0.8997772828507795
verified: true
- name: loss
type: loss
value: 0.30091655254364014
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.793630584795814
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
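In the meantime, a minimal inference sketch for this SST-2 style checkpoint (not part of the auto-generated card; the sentence is a placeholder) might be:
```python
from transformers import pipeline

# Hedged sketch: run sentiment inference with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="autoevaluate/binary-classification")
print(classifier("This movie was absolutely wonderful!"))
```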
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.175 | 1.0 | 4210 | 0.3009 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt | 3031f6d592d10a2c0d9ce8cad5cffa00202762d7 | 2022-06-30T19:24:19.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | domenicrosati | null | domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt | 106 | null | transformers | 4,516 | ---
license: mit
tags:
- fill-mask
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large-dapt-scientific-papers-pubmed-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-dapt-scientific-papers-pubmed-tapt
This model is a fine-tuned version of [domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed](https://huggingface.co/domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4429
- Accuracy: 0.5915
## Model description
More information needed
## Intended uses & limitations
More information needed
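As a rough illustration, the sketch below runs masked-token prediction with this domain-adapted checkpoint; the repository id and the `[MASK]` token convention of the DeBERTa-v3 tokenizer are assumptions, not documented usage.
```python
from transformers import pipeline
# Hypothetical sketch: fill-mask inference on biomedical text.
fill_mask = pipeline(
    "fill-mask",
    model="domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt",
)
print(fill_mask("Aspirin is commonly used to reduce [MASK] and inflammation."))
```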
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3855 | 1.0 | 4134 | 3.2334 | 0.4953 |
| 2.9224 | 2.0 | 8268 | 2.8317 | 0.5430 |
| 2.703 | 3.0 | 12402 | 2.6141 | 0.5665 |
| 2.4963 | 4.0 | 16536 | 2.4918 | 0.5855 |
| 2.399 | 5.0 | 20670 | 2.4429 | 0.5915 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
freedomking/mc-bert | d057742eb21845be851d9b434653e76edbefcb62 | 2022-07-15T10:14:00.000Z | [
"pytorch",
"transformers"
] | null | false | freedomking | null | freedomking/mc-bert | 106 | null | transformers | 4,517 | MC-BERT is a novel conceptualized representation learning approach for the medical domain. First, we use a different mask generation procedure to mask spans of tokens, rather than only random ones. We also introduce two kinds of masking strategies, namely whole entity masking and whole span masking. Finally, MC-BERT split the input document into segments based on the actual "sentences" provided by the user as positive samples and sample random sentences from other documents as negative samples for the next sentence prediction.

More details:
https://github.com/alibaba-research/ChineseBLUE |
naver-clova-ix/donut-base-finetuned-cord-v1 | 49cf2da80d46da3bc8fa41eff7631848d6d59705 | 2022-07-20T06:01:09.000Z | [
"pytorch",
"donut",
"transformers",
"license:mit"
] | null | false | naver-clova-ix | null | naver-clova-ix/donut-base-finetuned-cord-v1 | 106 | null | transformers | 4,518 | ---
license: mit
---
|
bloom-testing/test-bloomd-350m-generation-inference | c062d86b2739726a4249df07e58f200aa292b613 | 2022-07-27T06:03:47.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-generation-inference | 106 | null | transformers | 4,519 | Entry not found |
Cedille/fr-boris | cb981d4d03b87647b25b7627868bde76420719f9 | 2022-03-15T08:36:54.000Z | [
"pytorch",
"gptj",
"text-generation",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"transformers",
"causal-lm",
"license:mit"
] | text-generation | false | Cedille | null | Cedille/fr-boris | 105 | 21 | transformers | 4,520 | ---
language: fr
license: mit
tags:
- pytorch
- causal-lm
datasets:
- c4
---
# Cedille AI
Cedille is a project to bring large language models to non-English languages.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. We started training from GPT-J, which has been trained on [The Pile](https://pile.eleuther.ai/). As a consequence, the model still performs well in English. Boris makes use of the unmodified GPT-2 tokenizer.
Boris is named after the great French writer [Boris Vian](https://en.wikipedia.org/wiki/Boris_Vian).
# How do I test Cedille?
For the time being, the easiest way to test the model is to use our [publicly accessible playground](https://en.cedille.ai/).
Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at [email protected].
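If you prefer to run the model locally, a minimal generation sketch with 🤗 Transformers is given below; it assumes the checkpoint loads through `AutoModelForCausalLM` and that enough memory is available (the model has ~6B parameters, so fp16 on a GPU is advisable).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")
model = AutoModelForCausalLM.from_pretrained(
    "Cedille/fr-boris", torch_dtype=torch.float16
).to("cuda")

prompt = "Cedille est un modèle de langue français qui"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```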
## 📊 Cedille paper
Our paper is out now! https://arxiv.org/abs/2202.03371
Thanks for citing our work if you make use of Cedille
```bibtex
@misc{muller2022cedille,
title={Cedille: A large autoregressive French language model},
author={Martin M{\"{u}}ller and Florian Laurent},
year={2022},
eprint={2202.03371},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
For any custom development please contact us at [email protected].
## Links
* [Official website](https://en.cedille.ai/)
* [Blog](https://en.cedille.ai/blog)
* [GitHub](https://github.com/coteries/cedille-ai)
* [Twitter](https://twitter.com/CedilleAI)
|
Helsinki-NLP/opus-mt-id-fr | 308322e870f05e563cb9897ffa664937ecd8de24 | 2021-09-09T22:11:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"id",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-id-fr | 105 | null | transformers | 4,521 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-id-fr
* source languages: id
* target languages: fr
* OPUS readme: [id-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.id.fr | 43.8 | 0.616 |
|
crabz/FERNET-CC_sk-ner | 713e1e0d030d4c903b070e5ec7116afb2c72b511 | 2021-12-10T18:46:02.000Z | [
"pytorch",
"bert",
"token-classification",
"sk",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | crabz | null | crabz/FERNET-CC_sk-ner | 105 | null | transformers | 4,522 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
language:
- sk
inference: false
model-index:
- name: fernet-sk-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann sk
type: wikiann
args: sk
metrics:
- name: Precision
type: precision
value: 0.9359821760118826
- name: Recall
type: recall
value: 0.9472378804960541
- name: F1
type: f1
value: 0.9415763914830033
- name: Accuracy
type: accuracy
value: 0.9789063466534702
---
# Named Entity Recognition based on FERNET-CC_sk
This model is a fine-tuned version of [fav-kky/FERNET-CC_sk](https://huggingface.co/fav-kky/FERNET-CC_sk) on the Slovak wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1763
- Precision: 0.9360
- Recall: 0.9472
- F1: 0.9416
- Accuracy: 0.9789
## Intended uses & limitations
Supported classes: LOCATION, PERSON, ORGANIZATION
```python
from transformers import pipeline
ner_pipeline = pipeline(task='ner', model='crabz/FERNET-CC_sk-ner')
input_sentence = "Minister financií a líder mandátovo najsilnejšieho hnutia OĽaNO Igor Matovič upozorňuje, že následky tretej vlny budú na Slovensku veľmi veľké."
classifications = ner_pipeline(input_sentence)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1259 | 1.0 | 834 | 0.1095 | 0.8963 | 0.9182 | 0.9071 | 0.9697 |
| 0.071 | 2.0 | 1668 | 0.0974 | 0.9270 | 0.9357 | 0.9313 | 0.9762 |
| 0.0323 | 3.0 | 2502 | 0.1259 | 0.9257 | 0.9330 | 0.9293 | 0.9745 |
| 0.0175 | 4.0 | 3336 | 0.1347 | 0.9241 | 0.9360 | 0.9300 | 0.9756 |
| 0.0156 | 5.0 | 4170 | 0.1407 | 0.9337 | 0.9404 | 0.9370 | 0.9780 |
| 0.0062 | 6.0 | 5004 | 0.1522 | 0.9267 | 0.9410 | 0.9338 | 0.9774 |
| 0.0055 | 7.0 | 5838 | 0.1559 | 0.9322 | 0.9429 | 0.9375 | 0.9780 |
| 0.0024 | 8.0 | 6672 | 0.1733 | 0.9321 | 0.9438 | 0.9379 | 0.9779 |
| 0.0009 | 9.0 | 7506 | 0.1765 | 0.9347 | 0.9468 | 0.9407 | 0.9784 |
| 0.0002 | 10.0 | 8340 | 0.1763 | 0.9360 | 0.9472 | 0.9416 | 0.9789 |
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dobbytk/letr-sol-profanity-filter | b020c49bd3c85d0e8d361fe7bd1e78513bf59fed | 2021-10-20T14:11:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | dobbytk | null | dobbytk/letr-sol-profanity-filter | 105 | null | transformers | 4,523 | Entry not found |
microsoft/unispeech-sat-base-plus-sd | 5aba16d1c7a91748fd0f08d26d57587a426aa765 | 2021-12-17T18:40:56.000Z | [
"pytorch",
"unispeech-sat",
"audio-frame-classification",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.05752",
"transformers",
"speech"
] | null | false | microsoft | null | microsoft/unispeech-sat-base-plus-sd | 105 | null | transformers | 4,524 | ---
language:
- en
tags:
- speech
---
# UniSpeech-SAT-Base for Speaker Diarization
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-plus-sd')
model = UniSpeechSatForAudioFrameClassification.from_pretrained('microsoft/unispeech-sat-base-plus-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
Lvxue/finetuned-mt5-base-10epoch | abe81083a36f148167359d5f351339e217fafc87 | 2022-07-14T12:21:17.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"ro",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Lvxue | null | Lvxue/finetuned-mt5-base-10epoch | 105 | null | transformers | 4,525 | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: finetuned-mt5-base-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mt5-base-10epoch
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2607
## Model description
More information needed
## Intended uses & limitations
More information needed
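As a rough usage sketch (the translation direction — English to Romanian here — and the absence of a task prefix are assumptions based on the wmt16 ro-en fine-tuning data, so check the training setup before relying on them):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Lvxue/finetuned-mt5-base-10epoch")
model = AutoModelForSeq2SeqLM.from_pretrained("Lvxue/finetuned-mt5-base-10epoch")

# Assumption: the model was fine-tuned for English -> Romanian translation.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```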
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
google/ncsnpp-church-256 | 819e60b10ad98bfeac12931806cca04645aa5699 | 2022-07-21T14:39:07.000Z | [
"diffusers",
"arxiv:2011.13456",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ncsnpp-church-256 | 105 | null | diffusers | 4,526 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Score-Based Generative Modeling through Stochastic Differential Equations (SDE)
**Paper**: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456)
**Authors**: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
**Abstract**:
*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
## Inference
*SDE* models can use **continous** noise schedulers such as:
- [scheduling_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py)
for inference.
See the following code:
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
model_id = "google/ncsnpp-church-256"
# load model and scheduler
sde_ve = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = sde_ve()["sample"]
# save image
image[0].save("sde_ve_generated_image.png")
```
Please take a look at [pipeline_score_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py)
for more details on how to write your own denoising loop.
For more information generally on how to use `diffusers` for inference, please have a look at the [official inference example](https://github.com/patrickvonplaten/notebooks/blob/master/Diffusers.ipynb)
## Samples
1. 
2. 
3. 
4.  |
HScomcom/gpt2-lovecraft | 1b2a25b2ffe28cf5dfb4e3b166ebee53c0a189ab | 2021-05-21T10:38:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | HScomcom | null | HScomcom/gpt2-lovecraft | 104 | 2 | transformers | 4,527 | ### Model information
Fine tuning data: https://www.kaggle.com/bennijesus/lovecraft-fiction
License: CC0: Public Domain
Base model: gpt-2 large
Epoch: 30
Train runtime: 10307.3488 secs
Loss: 0.0292
API page: [Ainize](https://ainize.ai/fpem123/GPT2-LoveCraft?branch=master)
Demo page: [End-point](https://master-gpt2-love-craft-fpem123.endpoint.ainize.ai/)
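For local experiments, a minimal text-generation sketch is shown below; it assumes the checkpoint id of this repository and standard GPT-2 sampling settings.
```python
from transformers import pipeline
# Hypothetical sketch: Lovecraft-style continuation of a prompt.
generator = pipeline("text-generation", model="HScomcom/gpt2-lovecraft")
print(generator("The ancient city beneath the waves", max_length=60, do_sample=True)[0]["generated_text"])
```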
### ===Teachable NLP===
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
And my other lovecraft model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-lovecraft/71) |
Helsinki-NLP/opus-mt-iir-en | aa50367843cc845ac9310c7f771e1b302eaa5fcd | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-iir-en | 104 | null | transformers | 4,528 | ---
language:
- bn
- or
- gu
- mr
- ur
- hi
- ps
- os
- as
- si
- iir
- en
tags:
- translation
license: apache-2.0
---
### iir-eng
* source group: Indo-Iranian languages
* target group: English
* OPUS readme: [iir-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md)
* model: transformer
* source language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.1 | 0.324 |
| newsdev2019-engu-gujeng.guj.eng | 8.1 | 0.309 |
| newstest2014-hien-hineng.hin.eng | 12.1 | 0.380 |
| newstest2019-guen-gujeng.guj.eng | 6.0 | 0.280 |
| Tatoeba-test.asm-eng.asm.eng | 13.9 | 0.327 |
| Tatoeba-test.awa-eng.awa.eng | 7.0 | 0.219 |
| Tatoeba-test.ben-eng.ben.eng | 42.5 | 0.576 |
| Tatoeba-test.bho-eng.bho.eng | 27.3 | 0.452 |
| Tatoeba-test.fas-eng.fas.eng | 5.6 | 0.262 |
| Tatoeba-test.guj-eng.guj.eng | 15.9 | 0.350 |
| Tatoeba-test.hif-eng.hif.eng | 10.1 | 0.247 |
| Tatoeba-test.hin-eng.hin.eng | 36.5 | 0.544 |
| Tatoeba-test.jdt-eng.jdt.eng | 11.4 | 0.094 |
| Tatoeba-test.kok-eng.kok.eng | 6.6 | 0.256 |
| Tatoeba-test.kur-eng.kur.eng | 3.4 | 0.149 |
| Tatoeba-test.lah-eng.lah.eng | 17.4 | 0.301 |
| Tatoeba-test.mai-eng.mai.eng | 65.4 | 0.703 |
| Tatoeba-test.mar-eng.mar.eng | 22.5 | 0.468 |
| Tatoeba-test.multi.eng | 21.3 | 0.424 |
| Tatoeba-test.nep-eng.nep.eng | 3.4 | 0.185 |
| Tatoeba-test.ori-eng.ori.eng | 4.8 | 0.244 |
| Tatoeba-test.oss-eng.oss.eng | 1.6 | 0.173 |
| Tatoeba-test.pan-eng.pan.eng | 14.8 | 0.348 |
| Tatoeba-test.pus-eng.pus.eng | 1.1 | 0.182 |
| Tatoeba-test.rom-eng.rom.eng | 2.8 | 0.185 |
| Tatoeba-test.san-eng.san.eng | 2.8 | 0.185 |
| Tatoeba-test.sin-eng.sin.eng | 22.8 | 0.474 |
| Tatoeba-test.snd-eng.snd.eng | 8.2 | 0.287 |
| Tatoeba-test.tgk-eng.tgk.eng | 11.9 | 0.321 |
| Tatoeba-test.tly-eng.tly.eng | 0.9 | 0.076 |
| Tatoeba-test.urd-eng.urd.eng | 23.9 | 0.438 |
| Tatoeba-test.zza-eng.zza.eng | 0.6 | 0.098 |
### System Info:
- hf_name: iir-eng
- source_languages: iir
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt
- src_alpha3: iir
- tgt_alpha3: eng
- short_pair: iir-en
- chrF2_score: 0.424
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 67026.0
- src_name: Indo-Iranian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: iir
- tgt_alpha2: en
- prefer_old: False
- long_pair: iir-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ja-es | 2631f1ca099e100f0cccf9dad5dc60d67a146775 | 2021-09-10T13:53:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-es | 104 | null | transformers | 4,529 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ja-es
* source languages: ja
* target languages: es
* OPUS readme: [ja-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.es | 34.6 | 0.553 |
|
Helsinki-NLP/opus-mt-lt-fr | 0f3bdc8413bb89716e09c90ecb599b0908aea188 | 2021-09-10T13:55:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lt",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lt-fr | 104 | null | transformers | 4,530 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lt-fr
* source languages: lt
* target languages: fr
* OPUS readme: [lt-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lt-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lt.fr | 22.0 | 0.428 |
|
NovelAI/genji-python-6B | d7eb36c822b24cd9d8fa47087f4e2751841c1a75 | 2021-08-06T19:15:41.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"dataset:the Pile",
"arxiv:2104.09864",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | NovelAI | null | NovelAI/genji-python-6B | 104 | 24 | transformers | 4,531 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the Pile
---
# Genji-python 6B
For example usage or to easily use the model you can check our colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Model Description
Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on python only code approaching 4GB in size.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on the python code that was taken from the Pile.
## Training procedure
Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06
## Intended Use
This model is trained for assistance with writing Python code and for having fun trying weird stuff with it.
### How to use
This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)
to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```
This model takes more than 16 gigs of RAM to load. If you want more efficient and faster loading, please check our split model.
We recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.
How to use:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GPTNeoForCausalLM,
)
model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-python-6B", use_auth_token=True).half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
text = '''def print_customer_name'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```
When ran, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
"""Print the name of a customer."""
if not self.is_valid():
return
print("Customer: {}".format(customer))
```
For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Eval results
TBD
## Acknowledgements
This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/)
and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B.
Thanks to everyone who contributed to this project!
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz) |
huawei-noah/TinyBERT_4L_zh | 152bf15d86715b41dd89c1e03cf5664963d9b005 | 2020-10-14T09:03:53.000Z | [
"pytorch",
"transformers"
] | null | false | huawei-noah | null | huawei-noah/TinyBERT_4L_zh | 104 | 3 | transformers | 4,532 | Entry not found |
facebook/flava-image-codebook | 4285ec53336ae34be9337b41d79cee9fc92b7a71 | 2022-05-08T22:06:47.000Z | [
"pytorch",
"flava_image_codebook",
"transformers",
"license:bsd-3-clause"
] | null | false | facebook | null | facebook/flava-image-codebook | 104 | null | transformers | 4,533 | ---
license: bsd-3-clause
---
|
kabelomalapane/en_nso_ukuxhumana_model | e90db4e63605d6ad0e901acf715a0a45e9018065 | 2022-05-21T01:17:17.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/en_nso_ukuxhumana_model | 104 | null | transformers | 4,534 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: en_nso_ukuxhumana_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_nso_ukuxhumana_model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-nso](https://huggingface.co/Helsinki-NLP/opus-mt-en-nso) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8482
- Bleu (before training): 12.2324
- Bleu: 18.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
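As a rough usage sketch (assuming the checkpoint follows the standard MarianMT interface of its base model, Helsinki-NLP/opus-mt-en-nso):
```python
from transformers import pipeline
# Sketch: English -> Northern Sotho (nso) translation.
translator = pipeline("translation", model="kabelomalapane/en_nso_ukuxhumana_model")
print(translator("Good morning, how are you?")[0]["translation_text"])
```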
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
armandnlp/gpt2-TOD_finetuned_SGD | 7201380584b08858dfc7ebd80618f4873894a30c | 2022-07-15T13:57:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | armandnlp | null | armandnlp/gpt2-TOD_finetuned_SGD | 104 | 0 | transformers | 4,535 | ---
pipeline_tag: text-generation
widget:
- text: "<|context|> <|user|> I want to go to the restaurant tomorrow at 2 pm.<|endofcontext|>"
- text: "<|context|> <|user|> I want to go to the restaurant.<|system|> What food would you like to eat ? <|user|> Italian sounds good. <|endofcontext|>"
--- |
rajpurkarlab/biobert-finetuned-prior-rmv | 75526b5c532870f1069c6aafd668704e8f838d30 | 2022-07-19T21:08:13.000Z | [
"pytorch",
"bert",
"token-classification",
"py",
"transformers",
"autotrain_compatible"
] | token-classification | false | rajpurkarlab | null | rajpurkarlab/biobert-finetuned-prior-rmv | 104 | 1 | transformers | 4,536 | ---
language:
- py
metrics:
- f1
---
To use our fine-tuned BioBERT model to remove references to priors from radiology reports, run the following:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
modelname = "rajpurkarlab/biobert-finetuned-prior-rmv"
tokenizer = AutoTokenizer.from_pretrained(modelname)
model = AutoModelForTokenClassification.from_pretrained(modelname)
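# Illustrative addition (not from the original card): tag tokens in a report so that
# spans referring to prior studies can be filtered out. The exact label names depend
# on the fine-tuning setup and are an assumption here.
from transformers import pipeline
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(tagger("Compared to the prior radiograph, the effusion has decreased."))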
``` |
Lurunchik/nf-cats | 32b89caba4842c66eec6e9ea0a5b16426781f9ee | 2022-07-18T14:16:02.000Z | [
"pytorch",
"roberta",
"en",
"transformers",
"text-classification",
"license:mit"
] | text-classification | false | Lurunchik | null | Lurunchik/nf-cats | 104 | null | transformers | 4,537 | ---
language:
- en
license: mit
tags:
- text-classification
inference: false
widget:
- text: "Why do we need an NFQA taxonomy?"
---
# Non Factoid Question Category classification in English
## NFQA model
Repository: [https://github.com/Lurunchik/NF-CATS](https://github.com/Lurunchik/NF-CATS)
Model trained with NFQA dataset. Base model is [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2), a RoBERTa-based model for the task of Question Answering, fine-tuned using the SQuAD2.0 dataset.
Uses `NOT-A-QUESTION`, `FACTOID`, `DEBATE`, `EVIDENCE-BASED`, `INSTRUCTION`, `REASON`, `EXPERIENCE`, `COMPARISON` labels.
## How to use NFQA cat with HuggingFace
##### Load NFQA cat and its tokenizer:
```python
from transformers import AutoTokenizer
from nfqa_model import RobertaNFQAClassification
nfqa_model = RobertaNFQAClassification.from_pretrained("Lurunchik/nf-cats")
nfqa_tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
```
##### Make prediction using helper function:
```python
def get_nfqa_category_prediction(text):
output = nfqa_model(**nfqa_tokenizer(text, return_tensors="pt"))
index = output.logits.argmax()
return nfqa_model.config.id2label[int(index)]
get_nfqa_category_prediction('how to assign category?')
# result
#'INSTRUCTION'
```
## Demo
You can test the model via [hugginface space](https://huggingface.co/spaces/Lurunchik/nf-cats).
[](https://huggingface.co/spaces/Lurunchik/nf-cats)
## Citation
If you use `NFQA-cats` in your work, please cite [this paper](https://dl.acm.org/doi/10.1145/3477495.3531926)
```
@misc{bolotova2022nfcats,
author = {Bolotova, Valeriia and Blinov, Vladislav and Scholer, Falk and Croft, W. Bruce and Sanderson, Mark},
title = {A Non-Factoid Question-Answering Taxonomy},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531926},
doi = {10.1145/3477495.3531926},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {1196–1207},
numpages = {12},
keywords = {question taxonomy, non-factoid question-answering, editorial study, dataset analysis},
location = {Madrid, Spain},
series = {SIGIR '22}
}
```
Enjoy! 🤗 |
microsoft/codereviewer | bd0e81b54df3cbc7c7a2364a231f700d84de1f34 | 2022-07-25T06:37:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | microsoft | null | microsoft/codereviewer | 104 | 1 | transformers | 4,538 | ---
license: apache-2.0
---
|
Geotrend/distilbert-base-en-fr-cased | c4df3153dd3046d8de953c2e0bd09f784c2b3e01 | 2021-08-16T13:46:47.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-cased | 103 | null | transformers | 4,539 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-ja-vi | 84f687d22ffc2a3a79894ff0e8404d71ccf02e18 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-vi | 103 | null | transformers | 4,540 | ---
language:
- ja
- vi
tags:
- translation
license: apache-2.0
---
### jpn-vie
* source group: Japanese
* target group: Vietnamese
* OPUS readme: [jpn-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-vie/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.vie | 20.3 | 0.380 |
### System Info:
- hf_name: jpn-vie
- source_languages: jpn
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'vi']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: vie
- short_pair: ja-vi
- chrF2_score: 0.38
- bleu: 20.3
- brevity_penalty: 0.909
- ref_len: 10779.0
- src_name: Japanese
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: vi
- prefer_old: False
- long_pair: jpn-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-tn-en | a2a7709c904b9a76939d9512117e3081d7d2bd5a | 2021-09-11T10:48:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tn",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tn-en | 103 | null | transformers | 4,541 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tn-en
* source languages: tn
* target languages: en
* OPUS readme: [tn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.en | 43.4 | 0.589 |
|
Helsinki-NLP/opus-mt-trk-en | b9c8cdb8d74d103713f3195d830dec06dc6798bf | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tt",
"cv",
"tk",
"tr",
"ba",
"trk",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-trk-en | 103 | 1 | transformers | 4,542 | ---
language:
- tt
- cv
- tk
- tr
- ba
- trk
- en
tags:
- translation
license: apache-2.0
---
### trk-eng
* source group: Turkic languages
* target group: English
* OPUS readme: [trk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md)
* model: transformer
* source language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-tureng.tur.eng | 5.0 | 0.242 |
| newstest2016-entr-tureng.tur.eng | 3.7 | 0.231 |
| newstest2017-entr-tureng.tur.eng | 3.7 | 0.229 |
| newstest2018-entr-tureng.tur.eng | 4.1 | 0.230 |
| Tatoeba-test.aze-eng.aze.eng | 15.1 | 0.330 |
| Tatoeba-test.bak-eng.bak.eng | 3.3 | 0.185 |
| Tatoeba-test.chv-eng.chv.eng | 1.3 | 0.161 |
| Tatoeba-test.crh-eng.crh.eng | 10.8 | 0.325 |
| Tatoeba-test.kaz-eng.kaz.eng | 9.6 | 0.264 |
| Tatoeba-test.kir-eng.kir.eng | 15.3 | 0.328 |
| Tatoeba-test.kjh-eng.kjh.eng | 1.8 | 0.121 |
| Tatoeba-test.kum-eng.kum.eng | 16.1 | 0.277 |
| Tatoeba-test.multi.eng | 12.0 | 0.304 |
| Tatoeba-test.ota-eng.ota.eng | 2.0 | 0.149 |
| Tatoeba-test.sah-eng.sah.eng | 0.7 | 0.140 |
| Tatoeba-test.tat-eng.tat.eng | 4.0 | 0.215 |
| Tatoeba-test.tuk-eng.tuk.eng | 5.5 | 0.243 |
| Tatoeba-test.tur-eng.tur.eng | 26.8 | 0.443 |
| Tatoeba-test.tyv-eng.tyv.eng | 1.3 | 0.111 |
| Tatoeba-test.uig-eng.uig.eng | 0.2 | 0.111 |
| Tatoeba-test.uzb-eng.uzb.eng | 4.6 | 0.195 |
### System Info:
- hf_name: trk-eng
- source_languages: trk
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en']
- src_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt
- src_alpha3: trk
- tgt_alpha3: eng
- short_pair: trk-en
- chrF2_score: 0.304
- bleu: 12.0
- brevity_penalty: 1.0
- ref_len: 18733.0
- src_name: Turkic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: trk
- tgt_alpha2: en
- prefer_old: False
- long_pair: trk-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
ahmetbagci/bert2bert-turkish-paraphrase-generation | 27c023c0e5bdcf1067c38093c88411c488e4e382 | 2021-10-18T10:17:40.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"tr",
"transformers",
"paraphrasing",
"seq2seq",
"bert",
"autotrain_compatible"
] | text2text-generation | false | ahmetbagci | null | ahmetbagci/bert2bert-turkish-paraphrase-generation | 103 | 4 | transformers | 4,543 | ---
language:
- tr
tags:
- paraphrasing
- encoder-decoder
- seq2seq
- bert
---
# Bert2Bert Turkish Paraphrase Generation
# INISTA 2021
# Comparison of Turkish Paraphrase Generation Models
# Dataset
The dataset used for model training was created by combining a translation of the QQP dataset with a manually generated dataset.
Dataset [Link](https://drive.google.com/file/d/1-2l9EwIzXZ7fUkNW1vdeF3lzQp2pygp_/view?usp=sharing)
# How To Use
```python
from transformers import BertTokenizerFast,EncoderDecoderModel
tokenizer=BertTokenizerFast.from_pretrained("dbmdz/bert-base-turkish-cased")
model = EncoderDecoderModel.from_pretrained("ahmetbagci/bert2bert-turkish-paraphrase-generation")
text="son model arabalar çevreye daha mı az zarar veriyor?"
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
#sample output
#son model arabalar çevre için daha az zararlı mı?
```
# Cite
```bibtex
@INPROCEEDINGS{9548335,
author={Bağcı, Ahmet and Amasyali, Mehmet Fatih},
booktitle={2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA)},
title={Comparison of Turkish Paraphrase Generation Models},
year={2021},
volume={},
number={},
pages={1-6},
doi={10.1109/INISTA52262.2021.9548335}
}
``` |
avichr/hebEMO_surprise | e9f771cd3ef5d3231157b175597d3a34f5aeccb1 | 2022-04-15T09:36:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_surprise | 103 | null | transformers | 4,544 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
bhadresh-savani/albert-base-v2-emotion | 4812613b3c07c549e13f09bb266dbf0e59f48de7 | 2021-09-15T18:03:36.000Z | [
"pytorch",
"tf",
"jax",
"albert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:1909.11942",
"transformers",
"emotion",
"license:apache-2.0"
] | text-classification | false | bhadresh-savani | null | bhadresh-savani/albert-base-v2-emotion | 103 | null | transformers | 4,545 | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
---
# Albert-base-v2-emotion
## Model description:
[Albert](https://arxiv.org/pdf/1909.11942v6.pdf) is A Lite BERT architecture that has significantly fewer parameters than a traditional BERT architecture.
[Albert-base-v2](https://huggingface.co/albert-base-v2) finetuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Sample per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/albert-base-v2-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.010403595864772797},
{'label': 'joy', 'score': 0.8902180790901184},
{'label': 'love', 'score': 0.042532723397016525},
{'label': 'anger', 'score': 0.041297927498817444},
{'label': 'fear', 'score': 0.011772023513913155},
{'label': 'surprise', 'score': 0.0037756056990474463}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'test_accuracy': 0.936,
'test_f1': 0.9365658988006296,
'test_loss': 0.15278364717960358,
'test_runtime': 10.9413,
'test_samples_per_second': 182.794,
'test_steps_per_second': 2.925
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 | 4298ddc44143969a783c2d8d72c7de19ae57597d | 2022-03-23T18:27:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"hsb",
"pl",
"sk",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 | 103 | null | transformers | 4,546 | ---
language:
- cs
- hsb
- pl
- sk
- sl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-west-slavic-cv8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cs
metrics:
- name: Test WER
type: wer
value: 53.5
- name: Test CER
type: cer
value: 14.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hsb
metrics:
- name: Test WER
type: wer
value: 81.7
- name: Test CER
type: cer
value: 21.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pl
metrics:
- name: Test WER
type: wer
value: 60.2
- name: Test CER
type: cer
value: 15.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sk
metrics:
- name: Test WER
type: wer
value: 69.6
- name: Test CER
type: cer
value: 20.7
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sl
metrics:
- name: Test WER
type: wer
value: 73.2
- name: Test CER
type: cer
value: 23.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 84.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 75.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pl
metrics:
- name: Test WER
type: wer
value: 65.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pl
metrics:
- name: Test WER
type: wer
value: 72.0
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sk
metrics:
- name: Test WER
type: wer
value: 88.37
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sk
metrics:
- name: Test WER
type: wer
value: 89.08
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Test WER
type: wer
value: 87.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 87.89
---
# wav2vec2-xls-r-300m-west-slavic-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled.
The evaluation set used during training was concatenated from the respective test sets and shuffled, limiting each language to at most 2000 samples. During training, a WER of approximately 70 was achieved on this set.
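A minimal sketch of how such an evaluation set can be assembled with the `datasets` library; the seed, the authentication handling and the exact preprocessing are assumptions for illustration, not the script actually used:
```python
from datasets import load_dataset, concatenate_datasets
languages = ["cs", "hsb", "pl", "sk", "sl"]
eval_sets = []
for lang in languages:
    # Load the Common Voice 8 test split for each language (the dataset is gated,
    # hence the auth token) and cap it at 2000 samples
    split = load_dataset("mozilla-foundation/common_voice_8_0", lang,
                         split="test", use_auth_token=True)
    eval_sets.append(split.select(range(min(2000, len(split)))))
# Concatenate the five languages and shuffle the combined evaluation set
eval_dataset = concatenate_datasets(eval_sets).shuffle(seed=42)
print(eval_dataset)
```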
### Evaluation script
```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config {lang}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
facebook/wav2vec2-xls-r-2b | 12a34a57dc2d6fa6050b45d848b457dec663de2e | 2021-11-18T16:32:44.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"multilingual",
"dataset:common_voice",
"dataset:multilingual_librispeech",
"arxiv:2111.09296",
"transformers",
"speech",
"xls_r",
"xls_r_pretrained",
"license:apache-2.0"
] | null | false | facebook | null | facebook/wav2vec2-xls-r-2b | 103 | 11 | transformers | 4,547 | ---
language: multilingual
datasets:
- common_voice
- multilingual_librispeech
tags:
- speech
- xls_r
- xls_r_pretrained
license: apache-2.0
---
# Wav2Vec2-XLS-R-2B
[Facebook's Wav2Vec2 XLS-R](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) counting **2 billion** parameters.

XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages. When using the model make sure that your speech input is sampled at 16kHz.
**Note**: This model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation, or Classification. Check out [**this blog**](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about ASR.
[XLS-R Paper](https://arxiv.org/abs/2111.09296)
Authors: Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli
**Abstract**
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on 436K hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 20%-33% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this google colab](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb) for more information on how to fine-tune the model.
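As a quick illustration (not a substitute for fine-tuning), the pretrained checkpoint can be loaded to extract hidden states from 16 kHz audio; the zero waveform below is only a placeholder for a real recording:
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model
model_id = "facebook/wav2vec2-xls-r-2b"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)
# One second of placeholder audio at 16 kHz -- replace with a real waveform
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```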
You can find other pretrained XLS-R models with different numbers of parameters:
* [300M parameters version](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
* [1B parameters version](https://huggingface.co/facebook/wav2vec2-xls-r-1b)
* [2B parameters version](https://huggingface.co/facebook/wav2vec2-xls-r-2b)
|
google/bert_uncased_L-10_H-128_A-2 | 0c5790f28634a0a84d66543cd3f6967264248f54 | 2021-05-19T17:23:15.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-10_H-128_A-2 | 103 | null | transformers | 4,548 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs (a sketch of this sweep is shown after the lists):
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
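A schematic sketch of such a sweep with the `transformers` Trainer, using SST-2 as a stand-in GLUE task and selecting by evaluation loss; the task choice and selection criterion are illustrative assumptions, not the exact procedure used for the table above:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
model_name = "google/bert_uncased_L-10_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# SST-2 used here as an example GLUE task
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)
best_loss, best_config = None, None
for batch_size in (8, 16, 32, 64, 128):
    for lr in (3e-4, 1e-4, 5e-5, 3e-5):
        model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
        args = TrainingArguments(
            output_dir=f"sweep_bs{batch_size}_lr{lr}",
            num_train_epochs=4,
            per_device_train_batch_size=batch_size,
            learning_rate=lr,
        )
        trainer = Trainer(model=model, args=args,
                          train_dataset=encoded["train"],
                          eval_dataset=encoded["validation"])
        trainer.train()
        eval_loss = trainer.evaluate()["eval_loss"]
        if best_loss is None or eval_loss < best_loss:
            best_loss, best_config = eval_loss, (batch_size, lr)
print("Best (batch size, learning rate):", best_config)
```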
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
google/t5-efficient-xxl | 8f1f4fbd645dacf613cadf2336afc7a6b142a8ec | 2022-02-15T10:57:22.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-xxl | 103 | 1 | transformers | 4,549 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-XXL (Deep-Narrow version)
T5-Efficient-XXL is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-xxl** - is of model type **Xxl** with no variations.
It has **11307.38** million parameters and thus requires *ca.* **45229.52 MB** of memory in full precision (*fp32*)
or **22614.76 MB** of memory in half precision (*fp16* or *bf16*).
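The memory figures above follow directly from the parameter count (4 bytes per parameter in full precision, 2 bytes in half precision, with MB taken as 10^6 bytes):
```python
params_millions = 11307.38      # parameters of t5-efficient-xxl, in millions
fp32_mb = params_millions * 4   # 4 bytes per parameter -> MB
fp16_mb = params_millions * 2   # 2 bytes per parameter -> MB
print(f"fp32: {fp32_mb:.2f} MB, fp16/bf16: {fp16_mb:.2f} MB")
# fp32: 45229.52 MB, fp16/bf16: 22614.76 MB
```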
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
razent/SciFive-large-Pubmed | 12d03536796368152417dd4702d4eb32265d14a1 | 2022-03-20T17:46:20.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:pubmed",
"arxiv:2106.03598",
"transformers",
"token-classification",
"text-classification",
"question-answering",
"text-generation",
"autotrain_compatible"
] | text-classification | false | razent | null | razent/SciFive-large-Pubmed | 103 | null | transformers | 4,550 | ---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pubmed
---
# SciFive Pubmed Large
## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed")
sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=256,
early_stopping=True
)
for output in outputs:
line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(line)
``` |
seyonec/ChemBERTa-zinc250k-v1 | 9caca1b46a6a3155b4b0ed3bd0772065b989ed3a | 2021-05-20T20:56:13.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/ChemBERTa-zinc250k-v1 | 103 | null | transformers | 4,551 | Entry not found |
voidful/albert_chinese_large | 00c124d3f4e027c43743139fa2dafa3783148eaa | 2021-08-03T05:06:31.000Z | [
"pytorch",
"albert",
"fill-mask",
"zh",
"transformers",
"autotrain_compatible"
] | fill-mask | false | voidful | null | voidful/albert_chinese_large | 103 | 2 | transformers | 4,552 | ---
language: zh
pipeline_tag: fill-mask
widget:
- text: "今天[MASK]情很好"
---
# albert_chinese_large
This is an albert_chinese_large model from [Google's github](https://github.com/google-research/ALBERT)
converted by huggingface's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py)
## Notice
*Support AutoTokenizer*
Since sentencepiece is not used in this albert_chinese model, the vocabulary cannot be loaded with AlbertTokenizer; you have to call BertTokenizer instead. We can run a MaskedLM prediction to verify that this approach works.
## Justify (verification)
```python
from transformers import AutoTokenizer, AlbertForMaskedLM
import torch
from torch.nn.functional import softmax
pretrained = 'voidful/albert_chinese_large'
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)
inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)
input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos],dim=-1).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `心 0.9422469735145569`
|
malmarjeh/bert2bert | d1c3f06198b726a04c74f7eb4a27077d387844a4 | 2022-06-29T14:14:02.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"ar",
"transformers",
"AraBERT",
"BERT",
"BERT2BERT",
"MSA",
"Arabic Text Summarization",
"Arabic News Title Generation",
"Arabic Paraphrasing",
"autotrain_compatible"
] | text2text-generation | false | malmarjeh | null | malmarjeh/bert2bert | 103 | null | transformers | 4,553 | ---
language:
- ar
tags:
- AraBERT
- BERT
- BERT2BERT
- MSA
- Arabic Text Summarization
- Arabic News Title Generation
- Arabic Paraphrasing
widget:
- text: "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
---
# An Arabic abstractive text summarization model
A BERT2BERT-based model whose parameters are initialized with AraBERT weights and which has been fine-tuned on a dataset of 84,764 paragraph-summary pairs.
More details on the fine-tuning of this model will be released later.
The model can be used as follows:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from arabert.preprocess import ArabertPreprocessor
model_name="malmarjeh/bert2bert"
preprocessor = ArabertPreprocessor(model_name="")
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
pipeline = pipeline("text2text-generation",model=model,tokenizer=tokenizer)
text = "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
text = preprocessor.preprocess(text)
result = pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=3,
repetition_penalty=3.0,
max_length=200,
length_penalty=1.0,
no_repeat_ngram_size = 3)[0]['generated_text']
result
>>> 'مواجهات في طرابلس لليوم الثالث على التوالي'
```
## Contact:
**Mohammad Bani Almarjeh**: [Linkedin](https://www.linkedin.com/in/mohammad-bani-almarjeh/) | <[email protected]>
|
agdsga/chinese-roberta-wwm-ext-large-finetuned-ner | d7ab9f4e8c800cf9b526116a35f346c5c3f7c0e9 | 2022-03-24T15:05:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | agdsga | null | agdsga/chinese-roberta-wwm-ext-large-finetuned-ner | 103 | null | transformers | 4,554 | Entry not found |
IIC/dpr-spanish-passage_encoder-allqa-base | b3ef76d4bc5190d96cbad2aae4b18cc290cd76b1 | 2022-04-02T15:05:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"es",
"dataset:squad_es",
"dataset:PlanTL-GOB-ES/SQAC",
"dataset:IIC/bioasq22_es",
"arxiv:2004.04906",
"transformers",
"sentence similarity",
"passage retrieval",
"model-index",
"autotrain_compatible"
] | fill-mask | false | IIC | null | IIC/dpr-spanish-passage_encoder-allqa-base | 103 | 1 | transformers | 4,555 | ---
language:
- es
tags:
- sentence similarity # Example: audio
- passage retrieval # Example: automatic-speech-recognition
datasets:
- squad_es
- PlanTL-GOB-ES/SQAC
- IIC/bioasq22_es
metrics:
- eval_loss: 0.010779764448327261
- eval_accuracy: 0.9982682224158297
- eval_f1: 0.9446059155411182
- average_rank: 0.11728500598392888
model-index:
- name: dpr-spanish-passage_encoder-allqa-base
results:
- task:
type: text similarity # Required. Example: automatic-speech-recognition
name: text similarity # Optional. Example: Speech Recognition
dataset:
type: squad_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: squad_es # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: loss
value: 0.010779764448327261
name: eval_loss
- type: accuracy
value: 0.9982682224158297
name: accuracy
- type: f1
value: 0.9446059155411182
name: f1
- type: avgrank
value: 0.11728500598392888
name: avgrank
---
[Dense Passage Retrieval](https://arxiv.org/abs/2004.04906)-DPR is a set of tools for performing State of the Art open-domain question answering. It was initially developed by Facebook and there is an [official repository](https://github.com/facebookresearch/DPR). DPR is intended to retrieve the relevant documents to answer a given question, and is composed of 2 models, one for encoding passages and the other for encoding questions. This concrete model is the one used for encoding passages.
With this and the [question encoder model](https://huggingface.co/avacaondata/dpr-spanish-question_encoder-allqa-base) we introduce the best passage retrievers in Spanish up to date (to the best of our knowledge), improving over the [previous model we developed](https://huggingface.co/IIC/dpr-spanish-question_encoder-squades-base), by training it for longer and with more data.
Regarding its use, this model should be used to vectorize the passages (documents) of a Question Answering system's database. Those passage encodings are then compared with the encoding of an incoming question (produced with [the question encoder](https://huggingface.co/avacaondata/dpr-spanish-question_encoder-allqa-base)) to find the most similar documents, which should then be used for either extracting the answer or generating it.
For training the model, we used a collection of Question Answering datasets in Spanish:
- the Spanish version of SQUAD, [SQUAD-ES](https://huggingface.co/datasets/squad_es)
- [SQAC- Spanish Question Answering Corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC)
- [BioAsq22-ES](https://huggingface.co/datasets/IIC/bioasq22_es) - we translated this last one by using automatic translation with Transformers.
With this complete dataset we created positive and negative examples for the model (For more information look at [the paper](https://arxiv.org/abs/2004.04906) to understand the training process for DPR). We trained for 25 epochs with the same configuration as the paper. The [previous DPR model](https://huggingface.co/IIC/dpr-spanish-passage_encoder-squades-base) was trained for only 3 epochs with about 60% of the data.
Example of use:
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
model_str = "IIC/dpr-spanish-passage_encoder-allqa-base"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(model_str)
model = DPRContextEncoder.from_pretrained(model_str)
input_ids = tokenizer("Usain Bolt ganó varias medallas de oro en las Olimpiadas del año 2012", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
```
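For completeness, a sketch of how these passage embeddings can be matched against a query encoded with the companion question encoder linked above; the example texts are illustrative only:
```python
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)
ctx_name = "IIC/dpr-spanish-passage_encoder-allqa-base"
q_name = "avacaondata/dpr-spanish-question_encoder-allqa-base"
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(ctx_name)
ctx_encoder = DPRContextEncoder.from_pretrained(ctx_name)
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(q_name)
q_encoder = DPRQuestionEncoder.from_pretrained(q_name)
passages = [
    "Usain Bolt ganó varias medallas de oro en las Olimpiadas del año 2012",
    "Madrid es la capital de España",
]
question = "¿Quién ganó medallas de oro en las Olimpiadas de 2012?"
with torch.no_grad():
    passage_emb = ctx_encoder(**ctx_tokenizer(passages, padding=True, return_tensors="pt")).pooler_output
    question_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
# DPR is trained with dot-product similarity; the highest score marks the most relevant passage
scores = question_emb @ passage_emb.T
print(scores)
```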
The full metrics of this model on the evaluation split of SQUADES are:
```
eval_loss: 0.010779764448327261
eval_acc: 0.9982682224158297
eval_f1: 0.9446059155411182
eval_acc_and_f1: 0.9714370689784739
eval_average_rank: 0.11728500598392888
```
And the classification report:
```
precision recall f1-score support
hard_negative 0.9991 0.9991 0.9991 1104999
positive 0.9446 0.9446 0.9446 17547
accuracy 0.9983 1122546
macro avg 0.9719 0.9719 0.9719 1122546
weighted avg 0.9983 0.9983 0.9983 1122546
```
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
Davlan/afro-xlmr-large | 9c59ab30d8d349e9ce36df8b98c2161287e29dc8 | 2022-05-29T12:37:24.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2204.06487",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/afro-xlmr-large | 103 | 1 | transformers | 4,556 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large
results: []
---
# afro-xlmr-large
AfroXLMR-large was created by MLM adaptation of XLM-R-large model on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu) covering the major African language families and 3 high resource languages (Arabic, French, and English).
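A minimal usage sketch with the fill-mask pipeline; the example sentence is illustrative and any of the covered languages can be used:
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large")
# XLM-R based models use <mask> as the mask token
print(unmasker("The capital of Nigeria is <mask>."))
```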
## Eval results on MasakhaNER (F-score)
language| XLM-R-miniLM| XLM-R-base |XLM-R-large | afro-xlmr-large | afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini
-|-|-|-|-|-|-|-
amh |69.5|70.6|76.2|79.7|76.1|70.1|69.7
hau |74.5|89.5|90.5|91.4|91.2|91.4|87.7
ibo |81.9|84.8|84.1|87.7|87.4|86.6|83.5
kin |68.6|73.3|73.8|79.1|78.0|77.5|74.1
lug |64.7|79.7|81.6|86.7|82.9|83.2|77.4
luo |11.7|74.9|73.6|78.1|75.1|75.4|17.5
pcm |83.2|87.3|89.0|91.0|89.6|89.0|85.5
swa |86.3|87.4|89.4|90.4|88.6|88.7|86.0
wol |51.7|63.9|67.9|69.6|67.4|65.9|59.0
yor |72.0|78.3|78.9|85.2|82.1|81.3|75.1
avg |66.4|79.0|80.5|83.9|81.8|80.9|71.6
### BibTeX entry and citation info
```
@misc{afro_maft,
doi = {10.48550/ARXIV.2204.06487},
url = {https://arxiv.org/abs/2204.06487},
author = {Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Narsil/bart-large-mnli-opti | 7a6d021721124565837dc508b48ff299adfcbebb | 2022-05-27T16:08:13.000Z | [
"pytorch",
"bart",
"text-classification",
"dataset:multi_nli",
"arxiv:1910.13461",
"arxiv:1909.00161",
"transformers",
"license:mit",
"zero-shot-classification"
] | zero-shot-classification | false | Narsil | null | Narsil/bart-large-mnli-opti | 103 | null | transformers | 4,557 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
---
# bart-large-mnli
This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset.
Additional information about this model:
- The [bart-large](https://huggingface.co/facebook/bart-large) model page
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
](https://arxiv.org/abs/1910.13461)
- [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart)
## NLI-based Zero Shot Text Classification
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as a ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and to construct a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities.
This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
# 'sequence': 'one day I will see the world'}
```
If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_class=True)
#{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
# 'scores': [0.9945111274719238,
# 0.9383890628814697,
# 0.0057061901316046715,
# 0.0018193122232332826],
# 'sequence': 'one day I will see the world'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
from transformers import AutoModelForSequenceClassification, AutoTokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli')
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')
premise = sequence
hypothesis = f'This example is {label}.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
|
juliensimon/distilbert-amazon-shoe-reviews | e11edc9f0634152e952241aa61be10fd8b5ffd79 | 2022-06-29T14:48:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | juliensimon | null | juliensimon/distilbert-amazon-shoe-reviews | 103 | null | transformers | 4,558 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-amazon-shoe-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9532
- Accuracy: 0.5779
- F1: [0.62616119 0.46456105 0.50993865 0.55755123 0.734375 ]
- Precision: [0.62757927 0.46676662 0.49148534 0.58430541 0.72415507]
- Recall: [0.6247495 0.46237624 0.52983172 0.53313982 0.74488753]
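A minimal inference sketch; the five returned labels correspond to the five classes scored above, and their exact names depend on the label mapping stored with the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",
                      model="juliensimon/distilbert-amazon-shoe-reviews",
                      return_all_scores=True)
print(classifier("These sneakers are comfortable and look great, but they wore out quickly."))
```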
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:|
| 0.9713 | 1.0 | 2813 | 0.9532 | 0.5779 | [0.62616119 0.46456105 0.50993865 0.55755123 0.734375 ] | [0.62757927 0.46676662 0.49148534 0.58430541 0.72415507] | [0.6247495 0.46237624 0.52983172 0.53313982 0.74488753] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
erikycd/chatbot_hadita | 97c4ca4463f8e0c4119984354478c5a54dd1bad1 | 2022-07-01T00:08:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:wikipedia",
"transformers",
"conversational",
"license:gpl-3.0"
] | conversational | false | erikycd | null | erikycd/chatbot_hadita | 103 | null | transformers | 4,559 | ---
license: gpl-3.0
tags:
- conversational
- gpt2
language:
- en
datasets:
- wikipedia
widget:
- text: "Where are you from?"
example_title: "Basic question 1"
---
# DialoGPT small base model
A conversational response generation model based on DialoGPT-small, which follows the GPT-2 architecture and is trained with a causal language modeling objective on English conversational data.
## Model description
DialoGPT is a GPT-2 based model for multi-turn dialogue response generation. It is trained autoregressively: the turns of a conversation are concatenated, separated by the end-of-sequence token, and the model learns to predict the response one token at a time given the dialogue history. At inference time, a response is generated by sampling tokens conditioned on the encoded conversation history, as shown in the example below.
### How to use
You can use this model directly to generate conversational responses:
```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("erikycd/chatbot_hadita")
model = AutoModelWithLMHead.from_pretrained("erikycd/chatbot_hadita")
exit_commands = ('bye', 'quit')
text = ''
while text not in exit_commands:
text = input('User: ')
input_ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors = "pt")
bot_input_ids = torch.cat([input_ids])
chat_history_ids = model.generate(
bot_input_ids,
max_length = 30,
do_sample = True,
top_p = 0.95,
top_k = 0,
temperature = 0.75,
pad_token_id = tokenizer.eos_token_id
)
output = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens = True)
print('Chatbot: ', output)
```
|
Geotrend/distilbert-base-ru-cased | 3ca787456bffca9af5a564ac0e866a50af931e6e | 2021-08-16T13:27:34.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"ru",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-ru-cased | 102 | 1 | transformers | 4,560 | ---
language: ru
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-ru-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ru-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ru-cased")
```
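As a rough sanity check of the claim that the reduced model produces the same representations as the original multilingual model, the outputs for a Russian sentence can be compared directly; this is an illustrative sketch and the tolerance is an assumption:
```python
import torch
from transformers import AutoTokenizer, AutoModel
text = "Москва является столицей России."
def embed(name):
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state
small = embed("Geotrend/distilbert-base-ru-cased")
full = embed("distilbert-base-multilingual-cased")
# If the claim holds, both models yield (numerically) the same token representations
print(torch.allclose(small, full, atol=1e-5))
```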
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Sora4762/DialoGPT-small-naruto1.1 | a863e99e28a313417dc69164b8267039e7758a95 | 2022-01-21T18:04:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Sora4762 | null | Sora4762/DialoGPT-small-naruto1.1 | 102 | null | transformers | 4,561 | ---
tags:
- conversational
---
# Naruto DialoGPT Model1.1 |
google/realm-orqa-nq-reader | ce9401d15939699b680b46c916b0c1955e777dbe | 2022-01-05T18:28:40.000Z | [
"pytorch",
"realm",
"en",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/realm-orqa-nq-reader | 102 | 1 | transformers | 4,562 | ---
language: en
license: apache-2.0
---
# realm-orqa-nq-reader
## Model description
The REALM checkpoint finetuned on the Natural Questions (NQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmReader
reader = RealmReader.from_pretrained("google/realm-orqa-nq-reader")
``` |
mrm8488/electricidad-base-generator | 7df1cf48badff8791aa66567e214bc4ff127096a | 2020-12-11T21:54:10.000Z | [
"pytorch",
"electra",
"fill-mask",
"es",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mrm8488 | null | mrm8488/electricidad-base-generator | 102 | 2 | transformers | 4,563 | ---
language: es
thumbnail: https://i.imgur.com/uxAvBfh.png
widget:
- text: "Madrid es una ciudad muy [MASK] en España."
---
## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh)
**Electricidad-base-generator** (uncased) is a ```base``` ELECTRA-like model (the generator in this case) trained on more than 20 GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus.
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Fast example of usage 🚀
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="mrm8488/electricidad-base-generator",
tokenizer="mrm8488/electricidad-base-generator"
)
print(
fill_mask(f"HuggingFace está creando {fill_mask.tokenizer.mask_token} que la comunidad usa para resolver tareas de NLP.")
)
# Output: [{'sequence': '[CLS] huggingface esta creando herramientas que la comunidad usa para resolver tareas de nlp. [SEP]', 'score': 0.0896105170249939, 'token': 8760, 'token_str': 'herramientas'}, ...]
```
## Acknowledgments
I thank the [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (especially [Julien Chaumond](https://twitter.com/julien_c)).
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
sentence-transformers/multi-qa-distilbert-dot-v1 | 99c2d7a977fac1242833986785da4be605a58c88 | 2021-08-23T18:15:50.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/multi-qa-distilbert-dot-v1 | 102 | null | sentence-transformers | 4,564 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa-distilbert-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-distilbert-dot-v1')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-distilbert-dot-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-distilbert-dot-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | No |
| Pooling-Method | CLS pooling |
| Suitable score functions | dot-product (e.g. `util.dot_score`) |
----
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries / questions and text paragraphs in a dense vector space and finds the documents relevant to a given query.
Note that there is a limit of 512 word pieces: Text longer than that will be truncated. Further note that the model was just trained on input text up to 250 word pieces. It might not work well for longer text.
## Training procedure
The full training script is accessible in this current repository: `train_script.py`.
### Pre-training
We use the pretrained [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We use the concatenation from multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using CLS-pooling, dot-product as similarity function, and a scale of 1.
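A schematic sketch of this training setup with sentence-transformers; the toy pairs stand in for the real (question, answer) tuples listed below, and only the CLS pooling, dot-product similarity and scale 1 follow the description above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses, util, InputExample
# DistilBERT encoder with CLS pooling, as described above
word_embedding = models.Transformer("distilbert-base-uncased", max_seq_length=250)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word_embedding, pooling])
# Toy (question, answer) pairs standing in for the real training tuples
train_examples = [
    InputExample(texts=["How many people live in London?",
                        "Around 9 Million people live in London"]),
    InputExample(texts=["What is the capital of France?",
                        "Paris is the capital of France"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
# In-batch negatives with dot-product similarity and scale 1
train_loss = losses.MultipleNegativesRankingLoss(model, scale=1, similarity_fct=util.dot_score)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=0)
```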
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** | |
uer/chinese_roberta_L-4_H-128 | e962acf29bd006ef6d13106d63808ce82ad2688e | 2022-07-15T08:11:50.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-4_H-128 | 102 | null | transformers | 4,565 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are the scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
alefiury/wav2vec2-xls-r-300m-pt-br-spontaneous-speech-emotion-recognition | 909a1e8a60ce6143a121d36587c0bc10cf79d35c | 2022-04-03T12:38:09.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"pt",
"dataset:coraa_ser",
"dataset:emovo",
"dataset:ravdess",
"dataset:baved",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"italian-speech-corpus",
"english-speech-corpus",
"arabic-speech-corpus",
"spontaneous",
"PyTorch",
"license:apache-2.0"
] | audio-classification | false | alefiury | null | alefiury/wav2vec2-xls-r-300m-pt-br-spontaneous-speech-emotion-recognition | 102 | null | transformers | 4,566 | ---
language: pt
datasets:
- coraa_ser
- emovo
- ravdess
- baved
metrics:
- f1
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- italian-speech-corpus
- english-speech-corpus
- arabic-speech-corpus
- spontaneous
- speech
- PyTorch
license: apache-2.0
model_index:
name: wav2vec2-xls-r-300m-pt-br-spontaneous-speech-emotion-recognition
results:
metrics:
- name: Test Macro F1-Score
type: f1
value: 81.87%
---
# Wav2vec 2.0 XLS-R For Spontaneous Speech Emotion Recognition
This is the model that got first place in the SER track of the Automatic Speech Recognition for spontaneous and prepared speech & Speech Emotion Recognition in Portuguese (SE&R 2022) Workshop.
The following datasets were used in the training:
- [CORAA SER v1.0](https://github.com/rmarcacini/ser-coraa-pt-br/): a dataset composed of spontaneous portuguese speech and approximately 40 minutes of audio segments labeled in three classes: neutral, non-neutral female, and non-neutral male.
- [EMOVO Corpus](https://aclanthology.org/L14-1478/): a database of emotional speech for the Italian language, built from the voices of up to 6 actors who played 14 sentences simulating 6 emotional states (disgust, fear, anger, joy, surprise, sadness) plus the neutral state.
- [RAVDESS](https://zenodo.org/record/1188976#.YO6yI-gzaUk): a dataset that provides 1440 samples of recordings from actors performing on 8 different emotions in English, which are: angry, calm, disgust, fearful, happy, neutral, sad and surprised.
- [BAVED](https://github.com/40uf411/Basic-Arabic-Vocal-Emotions-Dataset): a collection of audio recordings of Arabic words spoken with varying degrees of emotion. The dataset contains seven words: like, unlike, this, file, good, neutral, and bad, which are spoken at three emotional levels: low emotion (tired or feeling down), neutral emotion (the way the speaker speaks daily), and high emotion (positive or negative emotions such as happiness, joy, sadness, anger).
The test set used is a part of the CORAA SER v1.0 that has been set aside for this purpose.
It achieves the following results on the test set:
- Accuracy: 0.9090
- Macro Precision: 0.8171
- Macro Recall: 0.8397
- Macro F1-Score: 0.8187
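
## Usage

A minimal sketch of how the checkpoint can be queried with the Transformers `audio-classification` pipeline (the audio file name is a placeholder; the input should be 16 kHz mono speech, and the label set comes from the checkpoint configuration):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="alefiury/wav2vec2-xls-r-300m-pt-br-spontaneous-speech-emotion-recognition",
)

# "example.wav" is a placeholder for any 16 kHz mono speech recording
predictions = classifier("example.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, one per emotion class
```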
## Datasets Details
The following image shows the overall distribution of the datasets:

The following image shows the number of instances by label:

## Repository
The code used to train and evaluate the model is available [here](https://github.com/alefiury/SE-R-2022-SER-Track). |
microsoft/cvt-21 | 21534c74c3a738a096d7891ea674e2907b299ce6 | 2022-05-18T16:01:27.000Z | [
"pytorch",
"cvt",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.15808",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/cvt-21 | 102 | null | transformers | 4,567 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Convolutional Vision Transformer (CvT)
CvT-21 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT).
Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-21')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-21')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
|
allenai/tk-instruct-large-def-pos | c7bf9e3da3c3a5f426d984af8473ac2b0960869a | 2022-05-27T06:31:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/tk-instruct-large-def-pos | 102 | null | transformers | 4,568 | ---
language: en
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, task definition or demonstration examples or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to shool.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting result, you are welcome to share with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encodings, we found they can usually work with other type of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |
autoevaluate/extractive-question-answering | a1b9025ceb8c8e7c2b2a8a756c39d2a0f3d13d74 | 2022-07-20T13:18:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | autoevaluate | null | autoevaluate/extractive-question-answering | 102 | null | transformers | 4,569 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: extractive-question-answering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# extractive-question-answering
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
```
{'exact_match': 72.95175023651845,
'f1': 81.85552166092225,
'latency_in_seconds': 0.008616470915042614,
'samples_per_second': 116.05679516125359,
'total_time_in_seconds': 91.07609757200044}
```
## Model description
More information needed
## Intended uses & limitations
More information needed
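
As a minimal illustration of intended use (a sketch; the question and context strings are made up for this example), the checkpoint can be queried with the standard question-answering pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="autoevaluate/extractive-question-answering")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```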
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.263 | 1.0 | 5533 | 1.2169 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
facebook/levit-128 | 6b044a5ff3a3f662b44f1154934406cdc21029c2 | 2022-06-01T13:21:29.000Z | [
"pytorch",
"levit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/levit-128 | 102 | null | transformers | 4,570 |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# LeViT
LeViT-128 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-128')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-128')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
yanekyuk/bert-uncased-keyword-discriminator | 608355cf37247134d6f8e89368fae042cd939897 | 2022-06-06T09:27:17.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | yanekyuk | null | yanekyuk/bert-uncased-keyword-discriminator | 102 | null | transformers | 4,571 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- en
widget:
- text: "Broadcom agreed to acquire cloud computing company VMware in a $61 billion (€57bn) cash-and stock deal, massively diversifying the chipmaker’s business and almost tripling its software-related revenue to about 45% of its total sales. By the numbers: VMware shareholders will receive either $142.50 in cash or 0.2520 of a Broadcom share for each VMware stock. Broadcom will also assume $8 billion of VMware's net debt."
- text: "Canadian Natural Resources Minister Jonathan Wilkinson told Bloomberg that the country could start supplying Europe with liquefied natural gas (LNG) in as soon as three years by converting an existing LNG import facility on Canada’s Atlantic coast into an export terminal. Bottom line: Wilkinson said what Canada cares about is that the new LNG facility uses a low-emission process for the gas and is capable of transitioning to exporting hydrogen later on."
- text: "Google is being investigated by the UK’s antitrust watchdog for its dominance in the \"ad tech stack,\" the set of services that facilitate the sale of online advertising space between advertisers and sellers. Google has strong positions at various levels of the ad tech stack and charges fees to both publishers and advertisers. A step back: UK Competition and Markets Authority has also been investigating whether Google and Meta colluded over ads, probing into the advertising agreement between the two companies, codenamed Jedi Blue."
- text: "Shares in Twitter closed 6.35% up after an SEC 13D filing revealed that Elon Musk pledged to put up an additional $6.25 billion of his own wealth to fund the $44 billion takeover deal, lifting the total to $33.5 billion from an initial $27.25 billion. In other news: Former Twitter CEO Jack Dorsey announced he's stepping down, but would stay on Twitter’s board \\“until his term expires at the 2022 meeting of stockholders.\""
model-index:
- name: bert-uncased-keyword-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-keyword-discriminator
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1296
- Precision: 0.8439
- Recall: 0.8722
- Accuracy: 0.9727
- F1: 0.8578
- Ent/precision: 0.8723
- Ent/accuracy: 0.9077
- Ent/f1: 0.8896
- Con/precision: 0.8010
- Con/accuracy: 0.8196
- Con/f1: 0.8102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:|
| 0.1849 | 1.0 | 1875 | 0.1323 | 0.7039 | 0.7428 | 0.9488 | 0.7228 | 0.7522 | 0.8166 | 0.7831 | 0.6268 | 0.6332 | 0.6300 |
| 0.1357 | 2.0 | 3750 | 0.1132 | 0.7581 | 0.8024 | 0.9592 | 0.7796 | 0.7948 | 0.8785 | 0.8346 | 0.6971 | 0.6895 | 0.6933 |
| 0.0965 | 3.0 | 5625 | 0.1033 | 0.8086 | 0.7980 | 0.9646 | 0.8032 | 0.8410 | 0.8592 | 0.8500 | 0.7560 | 0.7071 | 0.7307 |
| 0.0713 | 4.0 | 7500 | 0.1040 | 0.8181 | 0.8435 | 0.9683 | 0.8306 | 0.8526 | 0.8906 | 0.8712 | 0.7652 | 0.7736 | 0.7694 |
| 0.0525 | 5.0 | 9375 | 0.1126 | 0.8150 | 0.8633 | 0.9689 | 0.8385 | 0.8495 | 0.9064 | 0.8770 | 0.7629 | 0.7993 | 0.7807 |
| 0.0386 | 6.0 | 11250 | 0.1183 | 0.8374 | 0.8678 | 0.9719 | 0.8523 | 0.8709 | 0.9020 | 0.8862 | 0.7877 | 0.8170 | 0.8021 |
| 0.03 | 7.0 | 13125 | 0.1237 | 0.8369 | 0.8707 | 0.9723 | 0.8535 | 0.8657 | 0.9079 | 0.8863 | 0.7934 | 0.8155 | 0.8043 |
| 0.0244 | 8.0 | 15000 | 0.1296 | 0.8439 | 0.8722 | 0.9727 | 0.8578 | 0.8723 | 0.9077 | 0.8896 | 0.8010 | 0.8196 | 0.8102 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
facebook/roberta-hate-speech-dynabench-r4-target | 5de477c500cac9cb5865580f6355d5b048bcea1e | 2022-06-10T22:35:56.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"transformers"
] | text-classification | false | facebook | null | facebook/roberta-hate-speech-dynabench-r4-target | 102 | null | transformers | 4,572 | ---
language: en
---
# LFTW R4 Target
The R4 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
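
## How to use

A minimal sketch with the text-classification pipeline (the example sentence is illustrative; the label names are whatever the checkpoint's configuration defines):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
)

print(classifier("You are a wonderful person."))
```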
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
DTAI-KULeuven/robbertje-merged-dutch-sentiment | 57a25d121dc660786105af327d9c1070743ac7ce | 2022-06-29T13:12:48.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"dataset:dbrd",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"license:mit",
"model-index"
] | text-classification | false | DTAI-KULeuven | null | DTAI-KULeuven/robbertje-merged-dutch-sentiment | 102 | null | transformers | 4,573 | ---
language: nl
license: mit
datasets:
- dbrd
model-index:
- name: robbertje-merged-dutch-sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: dbrd
type: sentiment-analysis
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9294064748201439
widget:
- text: "Ik erken dat dit een boek is, daarmee is alles gezegd."
- text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"
thumbnail: "https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
---
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch models" width="75%">
</p>
# RobBERTje finetuned for sentiment analysis on DBRD
This is a finetuned model based on [RobBERTje (merged)](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](hebban.nl), hence our example sentences about books. We ran some limited experiments to test whether the model also works for other domains, but the results there were noticeably weaker.
We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff:
| Model | Identifier | Layers | #Params. | Accuracy |
|----------------|------------------------------------------------------------------------|--------|-----------|-----------|
| RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* |
| RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 |
*The results of RobBERT are of a different run than the one reported in the paper.
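
# How to use

A minimal sketch with the text-classification pipeline, reusing one of the widget examples above (the label names come from the checkpoint configuration):

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="DTAI-KULeuven/robbertje-merged-dutch-sentiment",
)

print(sentiment("Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"))
```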
# Training data and setup
We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019).
Originally, these reviews got a five-star rating, but this has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️).
We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy.
The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps.
The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file.
# Limitations and biases
- The domain of the reviews is limited to book reviews.
- Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292).
## Credits and citation
This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/).
If you would like to cite our paper or models, you can use the following BibTeX:
```
@article{Delobelle_Winters_Berendt_2021,
title = {RobBERTje: A Distilled Dutch BERT Model},
author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina},
year = 2021,
month = {Dec.},
journal = {Computational Linguistics in the Netherlands Journal},
volume = 11,
pages = {125–140},
url = {https://www.clinjournal.org/clinj/article/view/131}
}
@inproceedings{delobelle2020robbert,
title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel",
author = "Delobelle, Pieter and
Winters, Thomas and
Berendt, Bettina",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292",
doi = "10.18653/v1/2020.findings-emnlp.292",
pages = "3255--3265"
}
``` |
Helsinki-NLP/opus-mt-de-nl | da037ec1ad70f9d79735c287d418c00158b55b68 | 2021-09-09T21:32:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-nl | 101 | null | transformers | 4,574 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-nl
* source languages: de
* target languages: nl
* OPUS readme: [de-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-nl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nl/opus-2020-01-20.eval.txt)
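
## Example usage

A minimal sketch with the Marian classes in Hugging Face Transformers (the German example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-nl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Das ist ein kurzer Test."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```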
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.nl | 52.8 | 0.699 |
|
Helsinki-NLP/opus-mt-es-cs | afa3840361de865521e870a095d9a3441043e11a | 2021-09-09T21:41:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"cs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-cs | 101 | null | transformers | 4,575 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-cs
* source languages: es
* target languages: cs
* OPUS readme: [es-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.eval.txt)
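
## Example usage

A minimal sketch via the translation pipeline (the Spanish example sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-cs")
print(translator("Esta es una prueba corta.", max_length=64))
```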
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.cs | 46.4 | 0.655 |
|
VietAI/gpt-j-6B-vietnamese-news | 944f42f2483efdf17438fc905e08c96dcaa7ce94 | 2021-10-10T16:44:53.000Z | [
"pytorch",
"gptj",
"text-generation",
"vi",
"transformers",
"causal-lm"
] | text-generation | false | VietAI | null | VietAI/gpt-j-6B-vietnamese-news | 101 | 1 | transformers | 4,576 | ---
language:
- vi
tags:
- pytorch
- causal-lm
- text-generation
---
# GPT-J 6B for Vietnamese News
Details will be available soon.
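
In the meantime, a minimal generation sketch (the Vietnamese prompt and sampling settings are illustrative; note that a 6B-parameter checkpoint requires a large amount of memory to load):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "VietAI/gpt-j-6B-vietnamese-news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Tiềm năng của trí tuệ nhân tạo tại Việt Nam"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```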
For more information, please contact [email protected] / [email protected] / [email protected]. |
avichr/hebEMO_sadness | b8a411d091c6cd285a746631c20cf12dc8f0d61f | 2022-04-15T09:35:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_sadness | 101 | null | transformers | 4,577 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
cankeles/ConvTasNet_WHAMR_enhsingle_16k | f47048acf872880e504bd92f4252f996d73c3024 | 2022-02-17T19:32:29.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | cankeles | null | cankeles/ConvTasNet_WHAMR_enhsingle_16k | 101 | 1 | asteroid | 4,578 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `cankeles/ConvTasNet_WHAMR_enhsingle_16k`
Description:
This model was fine-tuned on a modified version of WHAMR! in which the speakers were taken from audiobook recordings and reverb was added with Spotify's Pedalboard library.
The initial model was taken from here: https://huggingface.co/JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k
This model was trained by M. Can Keles using the WHAM recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the WHAM dataset.
Training config:
```yml
data:
mode: min
nondefault_nsrc: null
sample_rate: 16000
task: enh_single
train_dir: wav16k/min/tr/
valid_dir: wav16k/min/cv/
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/tmp
help: null
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 2
early_stop: true
epochs: 10
half_lr: true
num_workers: 4
```
Results:
```
'sar': 13.612368475881558,
'sar_imp': 9.709316571584433,
'sdr': 13.612368475881558,
'sdr_imp': 9.709316571584433,
'si_sdr': 12.978640274976373,
'si_sdr_imp': 9.161273840297232,
'sir': inf,
'sir_imp': nan,
'stoi': 0.9214516928197306,
'stoi_imp': 0.11657488247668318
```
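Example usage (a minimal sketch; the file names are placeholders and the input should be a 16 kHz mono recording):
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

# load the pretrained enhancement model from the Hub
model = BaseModel.from_pretrained("cankeles/ConvTasNet_WHAMR_enhsingle_16k")

mixture, sr = sf.read("noisy.wav", dtype="float32")

with torch.no_grad():
    # the model takes a (batch, time) waveform and returns (batch, n_src, time)
    enhanced = model(torch.from_numpy(mixture).unsqueeze(0))

sf.write("enhanced.wav", enhanced.squeeze().cpu().numpy(), sr)
```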
|
facebook/convnext-large-384-22k-1k | 2a166cf1cbb8b63652775179726b4da8747b312a | 2022-03-02T19:03:42.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-large-384-22k-1k | 101 | null | transformers | 4,579 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (large-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-384-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-384-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
hf-internal-testing/tiny-random-clip | 04b741590ff4e18ef4d778c72afe3ca3a680a0c7 | 2021-09-17T19:24:44.000Z | [
"pytorch",
"clip",
"transformers"
] | null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-clip | 101 | null | transformers | 4,580 | Entry not found |
minimaxir/hacker-news | 87e129a1c04a499f390507d310d241aac4fa94f4 | 2021-05-23T09:35:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | minimaxir | null | minimaxir/hacker-news | 101 | 1 | transformers | 4,581 | Entry not found |
mmoradi/Robust-Biomed-RoBERTa-TextClassification | feb1f69fa6add99bd301aead3d520fc993f53dec | 2021-10-07T12:29:59.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mmoradi | null | mmoradi/Robust-Biomed-RoBERTa-TextClassification | 101 | null | transformers | 4,582 | Entry not found |
seyonec/BPE_SELFIES_PubChem_shard00_160k | c97e5b0f514e12056643f9582006305255eed97a | 2021-05-20T20:46:05.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/BPE_SELFIES_PubChem_shard00_160k | 101 | null | transformers | 4,583 | Entry not found |
stevhliu/astroGPT | b28e338b036f2540f0f024f8f297d106d05d8c54 | 2021-05-23T12:59:14.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | stevhliu | null | stevhliu/astroGPT | 101 | null | transformers | 4,584 | ---
language: "en"
thumbnail: "https://raw.githubusercontent.com/stevhliu/satsuma/master/images/astroGPT-thumbnail.png"
widget:
- text: "Jan 18, 2020"
- text: "Feb 14, 2020"
- text: "Jul 04, 2020"
---
# astroGPT 🪐
## Model description
This is a GPT-2 model fine-tuned on Western zodiac signs. For more information about GPT-2, take a look at 🤗 Hugging Face's GPT-2 [model card](https://huggingface.co/gpt2). You can use astroGPT to generate a daily horoscope by entering the current date.
## How to use
To use this model, simply enter the current date like so `Mon DD, YEAR`:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("stevhliu/astroGPT")
model = AutoModelWithLMHead.from_pretrained("stevhliu/astroGPT").to('cuda')  # move the model to the same device as the inputs
input_ids = tokenizer.encode('Sep 03, 2020', return_tensors='pt').to('cuda')
sample_output = model.generate(input_ids,
do_sample=True,
max_length=75,
top_k=20,
top_p=0.97)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))  # decode the generated ids into the horoscope text
```
## Limitations and bias
astroGPT inherits the same biases that affect GPT-2 as a result of training on a lot of non-neutral content on the internet. The model does not currently support zodiac sign-specific generation and only returns a general horoscope. While the generated text may occasionally mention a specific zodiac sign, this is due to how the horoscopes were originally written by its human authors.
## Data
The data was scraped from [Horoscope.com](https://www.horoscope.com/us/index.aspx) and trained on 4.7MB of text. The text was collected from four categories (daily, love, wellness, career) and span from 09/01/19 to 08/01/2020. The archives only store horoscopes dating a year back from the current date.
## Training and results
The text was tokenized using the fast GPT-2 BPE [tokenizer](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2tokenizerfast). It has a vocabulary size of 50,257 and a sequence length of 1024 tokens. The model was trained on one of Google Colaboratory's GPUs for approximately 2.5 hrs using [fastai's](https://docs.fast.ai/) learning rate finder, discriminative learning rates and 1cycle policy. See the table below for a quick summary of the training procedure and results.
| dataset size | epochs | lr | training time | train_loss | valid_loss | perplexity |
|:-------------:|:------:|:-----------------:|:-------------:|:----------:|:----------:|:----------:|
| 5.9MB |32 | slice(1e-7,1e-5) | 2.5 hrs | 2.657170 | 2.642387 | 14.046692 |
|
inovex/multi2convai-logistics-pl-bert | 33b7a64302b6be766af6d73c3f653d2c6e19280b | 2022-03-01T08:54:40.000Z | [
"pytorch",
"bert",
"text-classification",
"pl",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-logistics-pl-bert | 101 | 2 | transformers | 4,585 | ---
tags:
- text-classification
widget:
- text: "gdzie mogę umieścić paczkę?"
license: mit
language: pl
---
# Multi2ConvAI-Logistics: finetuned Bert for Polish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: Polish (pl)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert")
````
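A short continuation showing how one might classify the widget utterance with the loaded model (a sketch; the intent label names come from the checkpoint configuration):
````python
import torch

inputs = tokenizer("gdzie mogę umieścić paczkę?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
````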
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram | a8959cb7bd04673d529b4d7b25c8a5ded2870399 | 2022-05-24T11:11:07.000Z | [
"pytorch",
"tf",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram | 101 | 1 | transformers | 4,586 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.84
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.71
---
# Wav2Vec2-Base-960h + 4-gram
This model is identical to [Facebook's Wav2Vec2-Large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), but is
augmented with an English 4-gram. The `4-gram.arpa.gz` of [Librispeech's official ngrams](https://www.openslr.org/11) is used.
## Evaluation
This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torch
from jiwer import wer
model_id = "patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram"
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = AutoModelForCTC.from_pretrained(model_id).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
inputs = {k: v.to("cuda") for k,v in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy()).text[0]
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print(wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.84 | 3.71 | |
Servinform/wav2vec2-large-xlsr-53-spanish | 718b1866141dfae432b6d00bddc87dee8ae89691 | 2022-05-24T12:58:17.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"transformers",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Servinform | null | Servinform/wav2vec2-large-xlsr-53-spanish | 101 | null | transformers | 4,587 | ---
language: es
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- es
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 Spanish by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: 8.82
- name: Test CER
type: cer
value: 2.58
- name: Test WER (+LM)
type: wer
value: 6.27
- name: Test CER (+LM)
type: cer
value: 2.06
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Dev WER
type: wer
value: 30.19
- name: Dev CER
type: cer
value: 13.56
- name: Dev WER (+LM)
type: wer
value: 24.71
- name: Dev CER (+LM)
type: cer
value: 12.61
---
# Wav2Vec2-Large-XLSR-53-Spanish
This repository adds a custom language model on top of https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish.
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:
```python
from asrecognition import ASREngine
asr = ASREngine("es", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-spanish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS |
| OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. | OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN |
| PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN |
| TRES | TRES |
| REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA |
| EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES |
| SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÓ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS |
| SÍ | SÍ |
| "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ |
| SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-spanish,
title={XLSR Wav2Vec2 Spanish by Jonatas Grosman},
author={Grosman, Jonatas},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}},
year={2021}
}
``` |
ccdv/lsg-bart-base-4096-mediasum | 98a6a8ccfd37dedbf2619774ce9538db3064b603 | 2022-07-25T05:30:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:ccdv/mediasum",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-4096-mediasum | 101 | null | transformers | 4,588 | ---
language:
- en
tags:
- summarization
datasets:
- ccdv/mediasum
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-mediasum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-mediasum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-mediasum", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-4096-mediasum
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [ccdv/mediasum roberta_prepended](https://huggingface.co/datasets/ccdv/mediasum) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connections | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 35.16 | 18.13 | 31.54 | 32.20 |
| 4096 | Local | 128 | 0 | 384 | 34.16 | 17.61 | 30.75 | 31.41 |
| 4096 | Pooling | 128 | 4 | 644 | 34.52 | 17.71 | 31.01 | 31.67 |
| 4096 | Stride | 128 | 4 | 644 | 35.05 | 18.11 | 31.47 | 32.13 |
| 4096 | Block Stride | 128 | 4 | 644 | 34.72 | 17.81 | 31.13 | 31.82 |
| 4096 | Norm | 128 | 4 | 644 | 34.75 | 17.86 | 31.10 | 31.77 |
| 4096 | LSH | 128 | 4 | 644 | 34.54 | 17.81 | 31.05 | 31.71 |
With smaller block size (lower resources):
| Length | Sparse Type | Block Size | Sparsity | Connections | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 32.55 | 16.66 | 29.36 | 30.00 |
| 4096 | Local | 32 | 0 | 96 | 30.98 | 15.41 | 27.84 | 28.46 |
| 4096 | Pooling | 32 | 4 | 160 | 31.84 | 16.02 | 28.68 | 29.30 |
| 4096 | Stride | 32 | 4 | 160 | 32.67 | 16.68 | 29.47 | 30.10 |
| 4096 | Block Stride | 32 | 4 | 160 | 32.51 | 16.64 | 29.33 | 29.94 |
| 4096 | Norm | 32 | 4 | 160 | 32.44 | 16.48 | 29.20 | 29.79 |
| 4096 | LSH | 32 | 4 | 160 | 31.79 | 16.04 | 28.67 | 29.31 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about ~145 million parameters (6 encoder layers - 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only), and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: ccdv/mediasum
- dataset_config_name: roberta_prepended
- eval_batch_size: 8
- eval_samples: 10000
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 128
- min_length: 3
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | bc50b6dc1c97dc66998287efb6d044bdaa8f7057 | 2021-10-17T12:09:38.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | 100 | 2 | transformers | 4,589 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-CA Poetry Classification Model
## Model description
**CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9845284819602966},
{'label': 'الكامل', 'score': 0.752918004989624}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Helsinki-NLP/opus-mt-de-ar | 3abbe7441f40e0657e0dc3e99df5dcaeaa3d323b | 2021-01-18T07:57:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-ar | 100 | null | transformers | 4,590 | ---
language:
- de
- ar
tags:
- translation
license: apache-2.0
---
### deu-ara
* source group: German
* target group: Arabic
* OPUS readme: [deu-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ara/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): afb apc ara ara_Latn arq arz
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.eval.txt)
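A minimal usage sketch with the standard `transformers` Marian classes; the `>>ara<<` target token (Standard Arabic) and the example sentence are illustrative assumptions based on the target-language IDs listed above:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token to the German source sentence
# (">>ara<<" = Standard Arabic; IDs such as ">>apc<<" or ">>arz<<" select other variants).
src_texts = [">>ara<< Ich liebe dich."]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```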
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.ara | 19.7 | 0.486 |
### System Info:
- hf_name: deu-ara
- source_languages: deu
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ar']
- src_constituents: {'deu'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ara/opus-2020-07-03.test.txt
- src_alpha3: deu
- tgt_alpha3: ara
- short_pair: de-ar
- chrF2_score: 0.486
- bleu: 19.7
- brevity_penalty: 0.993
- ref_len: 6324.0
- src_name: German
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: de
- tgt_alpha2: ar
- prefer_old: False
- long_pair: deu-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-ga | f09be38231432610fe90281edb71b1b2f8d91355 | 2021-01-18T08:07:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ga",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ga | 100 | null | transformers | 4,591 | ---
language:
- en
- ga
tags:
- translation
license: apache-2.0
---
### eng-gle
* source group: English
* target group: Irish
* OPUS readme: [eng-gle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): gle
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.eval.txt)
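A short usage sketch via the `transformers` translation pipeline; the example sentence is arbitrary and the generic task name is assumed to resolve for this Marian checkpoint:

```python
from transformers import pipeline

# Standard Marian checkpoint, so the generic translation pipeline should apply.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ga")
print(translator("The weather is lovely today."))
```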
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.gle | 37.5 | 0.593 |
### System Info:
- hf_name: eng-gle
- source_languages: eng
- target_languages: gle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ga']
- src_constituents: {'eng'}
- tgt_constituents: {'gle'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: gle
- short_pair: en-ga
- chrF2_score: 0.593
- bleu: 37.5
- brevity_penalty: 1.0
- ref_len: 12200.0
- src_name: English
- tgt_name: Irish
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ga
- prefer_old: False
- long_pair: eng-gle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | 00914c64c47932a360b5b4e0c07ab12d305604ba | 2021-09-23T15:49:13.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DCCRNet",
"audio-to-audio",
"speech-enhancement",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | JorisCos | null | JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | 100 | 4 | asteroid | 4,592 | ---
tags:
- asteroid
- audio
- DCCRNet
- audio-to-audio
- speech-enhancement
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
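A minimal enhancement sketch is given below; it assumes Asteroid's `BaseModel.from_pretrained` Hub loading and a `(batch, time)` waveform interface, and the file names are placeholders:

```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

# Load the pretrained checkpoint directly from the Hub.
model = BaseModel.from_pretrained("JorisCos/DCCRNet_Libri1Mix_enhsingle_16k")

# Enhance a 16 kHz mono recording (placeholder path).
mixture, sr = sf.read("noisy_speech_16k.wav", dtype="float32")
assert sr == 16000, "the model was trained on 16 kHz audio"

with torch.no_grad():
    est = model(torch.from_numpy(mixture).unsqueeze(0))  # assumed output shape: (batch, n_src, time)

sf.write("enhanced_speech_16k.wav", est.squeeze().cpu().numpy(), sr)
```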
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
KBLab/bert-base-swedish-cased-ner | 02bd6181e8c5aa0e6d4c5cd32fe68d5adfb75019 | 2022-06-07T20:08:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"transformers",
"autotrain_compatible"
] | token-classification | false | KBLab | null | KBLab/bert-base-swedish-cased-ner | 100 | 3 | transformers | 4,593 | ---
language: sv
---
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, Swedish Wikipedia and internet forums), aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KBLab/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline, the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KBLab/bert-base-swedish-cased-ner', tokenizer='KBLab/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
    if token['word'].startswith('##'):
        l[-1]['word'] += token['word'][2:]
    else:
        l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to load the ALBERT model is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
model = AutoModel.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
|
TristanBehrens/js-fakes-4bars | 3d52a00e68108008a2fd2143b0a84b53c8e48f07 | 2022-01-11T07:12:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"music-modeling",
"music-generation"
] | text-generation | false | TristanBehrens | null | TristanBehrens/js-fakes-4bars | 100 | 4 | transformers | 4,594 | ---
tags:
- gpt2
- text-generation
- music-modeling
- music-generation
widget:
- text: "PIECE_START"
- text: "PIECE_START STYLE=JSFAKES GENRE=JSFAKES TRACK_START INST=48 BAR_START NOTE_ON=60"
- text: "PIECE_START STYLE=JSFAKES GENRE=JSFAKES TRACK_START INST=48 BAR_START NOTE_ON=58"
---
# GPT-2 for Music
Language Models such as GPT-2 can be used for Music Generation. The idea is to represent pieces of music as texts, effectively reducing the task to Language Generation.
This model is a rather small instance of GPT-2 trained on [TristanBehrens/js-fakes-4bars](https://huggingface.co/datasets/TristanBehrens/js-fakes-4bars). The model generates 4 bars at a time of Bach-like chorales with four voices (soprano, alto, tenor, bass).
If you want to contribute, say hello, or learn more, find me on [LinkedIn](https://www.linkedin.com/in/dr-tristan-behrens-734967a2/).
## Model description
The model is GPT-2 with 6 decoders and 8 attention-heads each. The context length is 512. The embedding dimensions are 512 as well. The vocabulary size is 119.
## Intended uses & limitations
This model is just a proof of concept. It shows that HuggingFace can be used to compose music.
### How to use
You can immediately start generating music running these lines of code:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TristanBehrens/js-fakes-4bars")
model = AutoModelForCausalLM.from_pretrained("TristanBehrens/js-fakes-4bars")
input_ids = tokenizer.encode("PIECE_START", return_tensors="pt")
print(input_ids)
generated_ids = model.generate(input_ids, max_length=500)
generated_sequence = tokenizer.decode(generated_ids[0])
print(generated_sequence)
```
Note that this just generates music as a text. In order to actually listen to the generated music, you can use this [notebook](https://huggingface.co/TristanBehrens/js-fakes-4bars/blob/main/colab_jsfakes_generation.ipynb).
### Limitations and bias
Since this model has been trained on a very small corpus of music, it is overfitting heavily.
## Training data
The model has been trained on Omar Peracha's [JS Fake Chorales](https://github.com/omarperacha/js-fakes) dataset, which is a fine collection of 500 Bach-like chorales. |
avichr/hebEMO_anger | 396ae3c5162f891bbb3541c98d0bdf96c678413d | 2022-04-15T09:36:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_anger | 100 | null | transformers | 4,595 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
avichr/hebEMO_disgust | 4ead9813ed4ce9643bd013aafb9a60228af3cc4c | 2022-04-15T09:35:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | avichr | null | avichr/hebEMO_disgust | 100 | null | transformers | 4,596 | # HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated.
HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning, the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*Sentiment (polarity) analysis model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
from transformers import AutoTokenizer, AutoModel, pipeline
tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")
# how to use?
sentiment_analysis = pipeline(
"sentiment-analysis",
model="avichr/heBERT_sentiment_analysis",
tokenizer="avichr/heBERT_sentiment_analysis",
return_all_scores = True
)
sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]
sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]
sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model please cite us as :
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
chinhon/fake_tweet_detect | 8b0fd4fe93049f4679a2fd56080f6e4ef80fb2e2 | 2022-01-13T01:45:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | chinhon | null | chinhon/fake_tweet_detect | 100 | 1 | transformers | 4,597 | Entry not found |
cross-encoder/msmarco-MiniLM-L6-en-de-v1 | 368a0096bbd54a6850239d12068a1302c329cc40 | 2021-08-05T08:40:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | cross-encoder | null | cross-encoder/msmarco-MiniLM-L6-en-de-v1 | 100 | null | transformers | 4,598 | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/msmarco-MiniLM-L6-en-de-v1', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
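For re-ranking, the predicted scores are then used to sort the candidate passages, for example (continuing the snippet above, reusing `scores` and `docs`):

```python
# Sort the candidate passages by predicted relevance (reuses `scores` and `docs` from above).
ranked = sorted(zip(scores, docs), key=lambda x: x[0], reverse=True)
for score, doc in ranked:
    print(f"{score:.4f}\t{doc}")
```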
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/msmarco-MiniLM-L6-en-de-v1')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/msmarco-MiniLM-L6-en-de-v1')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank documents with according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
|
gchhablani/bert-base-cased-finetuned-cola | 349865261dd2b7c18501a6662388d72e1aa981ec | 2021-09-20T09:07:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/bert-base-cased-finetuned-cola | 100 | null | transformers | 4,599 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5956649094312695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6747
- Matthews Correlation: 0.5957
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
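For a quick sanity check, the checkpoint can be used with the text-classification pipeline; note that the emitted label names (and a mapping of LABEL_0 = unacceptable, LABEL_1 = acceptable) are an assumption based on the usual GLUE CoLA convention, not stated in this card:

```python
from transformers import pipeline

# Hypothetical acceptability check; the LABEL_0/LABEL_1 mapping is assumed.
cola = pipeline("text-classification", model="gchhablani/bert-base-cased-finetuned-cola")
print(cola("The book was written by John."))  # expected: acceptable
print(cola("Book the John by written was."))  # expected: unacceptable
```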
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4921 | 1.0 | 535 | 0.5283 | 0.5068 |
| 0.2837 | 2.0 | 1070 | 0.5133 | 0.5521 |
| 0.1775 | 3.0 | 1605 | 0.6747 | 0.5957 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|