modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
naver/efficient-splade-V-large-doc | c4a9877166fbdfafdfc500431fd4bb1b3565e299 | 2022-07-08T11:37:17.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:ms_marco",
"transformers",
"splade",
"query-expansion",
"document-expansion",
"bag-of-words",
"passage-retrieval",
"knowledge-distillation",
"document encoder",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | naver | null | naver/efficient-splade-V-large-doc | 48 | null | transformers | 6,100 | ---
license: cc-by-nc-sa-4.0
language: "en"
tags:
- splade
- query-expansion
- document-expansion
- bag-of-words
- passage-retrieval
- knowledge-distillation
- document encoder
datasets:
- ms_marco
---
## Efficient SPLADE
Efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the **doc** encoder; please also download the **query** one (https://huggingface.co/naver/efficient-splade-V-large-query). For additional details, please visit:
* paper: https://dl.acm.org/doi/10.1145/3477495.3531833
* code: https://github.com/naver/splade
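Below is a minimal sketch of encoding a document into a sparse bag-of-words vector with this checkpoint. It assumes the standard SPLADE pooling (log-saturated ReLU over the MLM logits, max-pooled over the sequence); the example text is illustrative, and the official usage lives in the naver/splade repository linked above.
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/efficient-splade-V-large-doc"  # document-side encoder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

doc = "SPLADE expands documents into sparse bag-of-words representations."
tokens = tokenizer(doc, return_tensors="pt")

with torch.no_grad():
    logits = model(**tokens).logits  # (1, seq_len, vocab_size)

# SPLADE pooling: log-saturated ReLU, max-pooled over the sequence
weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
sparse_rep = weights.max(dim=1).values.squeeze(0)  # (vocab_size,) sparse vector

# Inspect the highest-weighted (expanded) terms
top = sparse_rep.topk(10)
print({tokenizer.decode([int(i)]): round(w.item(), 2) for w, i in zip(top.values, top.indices)})
```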
| | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (Inference) ms |
| --- | --- | --- | --- | --- |
| `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3 |
| `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7 |
## Citation
If you use our checkpoint, please cite our work (need to update):
```
@inproceedings{10.1145/3477495.3531833,
author = {Lassance, Carlos and Clinchant, St\'{e}phane},
title = {An Efficiency Study for SPLADE Models},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531833},
doi = {10.1145/3477495.3531833},
abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2220–2226},
numpages = {7},
keywords = {splade, latency, information retrieval, sparse representations},
location = {Madrid, Spain},
series = {SIGIR '22}
}
``` |
shaina/BNER-BERT | f288e4ec3468c598a13db4b3c35938336bde25a8 | 2022-07-13T23:20:51.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | shaina | null | shaina/BNER-BERT | 48 | null | transformers | 6,101 | ---
inference: false
--- |
bloom-testing/test-bloomd-350m-test-push | cb3edd58d34ec3b1c4581bd1bea43ee54c1a4a99 | 2022-07-15T23:38:06.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-test-push | 48 | null | transformers | 6,102 | Entry not found |
khosseini/bert_1760_1850 | 8be0aec78f070ba612216454c2fe30511f3a9a88 | 2022-07-18T09:27:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | khosseini | null | khosseini/bert_1760_1850 | 48 | null | transformers | 6,103 | # Neural Language Models for Nineteenth-Century English: bert_1760_1850
## Introduction
A BERT model trained on a large historical dataset of books in English, published between 1760 and 1850 and comprising ~1.3 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
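The checkpoint is a standard BERT masked language model, so it can be queried with the usual fill-mask pipeline. A minimal sketch (the example sentence is purely illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="khosseini/bert_1760_1850")
# BERT-style models use the [MASK] token
print(fill_mask("The invention of the steam [MASK] changed manufacturing."))
```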
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
erickdp/gs3n-roberta-model | c138fc92abafacb6a6b705469fe2da06325aea3f | 2022-07-18T16:46:02.000Z | [
"pytorch",
"roberta",
"text-classification",
"es",
"dataset:erixxdp/autotrain-data-gsemodel",
"transformers",
"xerox",
"co2_eq_emissions"
] | text-classification | false | erickdp | null | erickdp/gs3n-roberta-model | 48 | null | transformers | 6,104 | ---
tags: xerox
language: es
widget:
- text: "Debo de levantarme temprano para hacer ejercicio"
datasets:
- erixxdp/autotrain-data-gsemodel
co2_eq_emissions: 0.027846282970913613
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1148842296
- CO2 Emissions (in grams): 0.027846282970913613
## Validation Metrics
- Loss: 0.4816772937774658
- Accuracy: 0.864
- Macro F1: 0.865050349743783
- Micro F1: 0.864
- Weighted F1: 0.865050349743783
- Macro Precision: 0.8706266090178479
- Micro Precision: 0.864
- Weighted Precision: 0.8706266090178482
- Macro Recall: 0.864
- Micro Recall: 0.864
- Weighted Recall: 0.864
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/erixxdp/autotrain-gsemodel-1148842296
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("erixxdp/autotrain-gsemodel-1148842296", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("erixxdp/autotrain-gsemodel-1148842296", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Creepton/DDLCYuri-DialoGPT-small | 47fff3e8af928c4ee10bdf064db1c106ebee93cb | 2022-07-20T19:17:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Creepton | null | Creepton/DDLCYuri-DialoGPT-small | 48 | 1 | transformers | 6,105 | ---
tags:
- conversational
---
# Yuri DialoGPT Model |
lizz27/DialoGPT-small-baymax | e448dd005f9250ba100c79930aec74a90232e7c4 | 2022-07-27T22:11:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lizz27 | null | lizz27/DialoGPT-small-baymax | 48 | null | transformers | 6,106 | ---
tags:
- conversational
---
# Baymax DialoGPT Model |
0x7194633/keyt5-base | 591b9e9121e617461e1a0c8d552109c153610c05 | 2022-01-11T03:52:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | 0x7194633 | null | 0x7194633/keyt5-base | 47 | null | transformers | 6,107 | ---
language:
- ru
license: mit
inference:
parameters:
top_p: 0.9
widget:
- text: "В России может появиться новый штамм коронавируса «омикрон», что может привести к подъему заболеваемости в январе, заявил доцент кафедры инфекционных болезней РУДН Сергей Вознесенский. Он отметил, что вариант «дельта» вызывал больше летальных случаев, чем омикрон, именно на фоне «дельты» была максимальная летальность."
example_title: "Коронавирус"
- text: "Начальника штаба обороны Великобритании адмирала Тони Радакина заставили имитировать активность во время визита в ангар с тяжелым вооружением, сообщила британская пресса. В приказе говорилось, что военнослужащим было велено подбегать к автомобилям, открывать все люки, затворы, листать руководство по эксплуатации и осматриваться машины, будто проводится функциональный тест для обеспечения правильной работы оборудования."
example_title: "Британия"
- text: "Для воспроизведения музыки достаточно нажимать на кнопки клавиатуры. Каждой клавише соответствует определенный семпл — есть маракасы и футуристичные звуки, напоминающие выстрелы бластеров. Из всего многообразия можно формировать собственные паттерны и наблюдать за визуализацией с анимированными геометрическими фигурами. Что интересно, нажатием клавиши пробел можно полностью переменить оформление, цвета на экране и звучание семплов."
example_title: "Технологии"
---
## keyT5. Base (small) version
[](https://github.com/0x7o/text2keywords "Go to GitHub repo")
[](https://github.com/0x7o/text2keywords)
[](https://github.com/0x7o/text2keywords)
Supported languages: ru
Github - [text2keywords](https://github.com/0x7o/text2keywords)
[Pretraining Large version](https://huggingface.co/0x7194633/keyt5-large)
|
[Pretraining Base version](https://huggingface.co/0x7194633/keyt5-base)
# Usage
Example usage (the code returns a list of keywords; duplicates are possible):
[](https://colab.research.google.com/github/0x7o/text2keywords/blob/main/example/keyT5_use.ipynb)
```
pip install transformers sentencepiece
```
```python
from itertools import groupby
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "0x7194633/keyt5-large" # or 0x7194633/keyt5-base
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def generate(text, **kwargs):
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
s = tokenizer.decode(hypotheses[0], skip_special_tokens=True)
s = s.replace('; ', ';').replace(' ;', ';').lower().split(';')[:-1]
s = [el for el, _ in groupby(s)]
return s
article = """Reuters сообщил об отмене 3,6 тыс. авиарейсов из-за «омикрона» и погоды
Наибольшее число отмен авиарейсов 2 января пришлось на американские авиакомпании
SkyWest и Southwest, у каждой — более 400 отмененных рейсов. При этом среди
отмененных 2 января авиарейсов — более 2,1 тыс. рейсов в США. Также свыше 6400
рейсов были задержаны."""
print(generate(article, top_p=1.0, max_length=64))
# ['авиаперевозки', 'отмена авиарейсов', 'отмена рейсов', 'отмена авиарейсов', 'отмена рейсов', 'отмена авиарейсов']
```
# Training
Go to the training notebook and learn more about it:
[](https://colab.research.google.com/github/0x7o/text2keywords/blob/main/example/keyT5_train.ipynb)
|
AriakimTaiyo/DialoGPT-small-Rikka | d15009b1bc06a36b65e4c95a2ecb5621a2941e91 | 2022-02-04T17:37:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AriakimTaiyo | null | AriakimTaiyo/DialoGPT-small-Rikka | 47 | null | transformers | 6,108 | ---
tags:
- conversational
---
# Rikka DialoGPT Model |
BSC-TeMU/roberta-base-bne-capitel-ner-plus | cebd4df6f33dd83209c98ed2fe89e228a59da171 | 2021-10-21T10:29:17.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | BSC-TeMU | null | BSC-TeMU/roberta-base-bne-capitel-ner-plus | 47 | 1 | transformers | 6,109 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
inference:
parameters:
aggregation_strategy: "first"
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
**IMPORTANT ABOUT THIS MODEL:** We modified the dataset to make the model more robust to general Spanish input. In Spanish, all named entities are capitalized, and since this dataset was elaborated by experts it is entirely correct in that respect. We therefore randomly selected some entities and lower-cased them, so that the model learns not only that named entities are capitalized, but also the sentence structure in which a named entity should appear. For instance, in "My name is [placeholder]", the [placeholder] should be recognized as a named entity even though it is not capitalized. The model trained on the original CAPITEL dataset can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne-capitel-ner
Examples:
This model:
- "Me llamo asier y vivo en barcelona todo el año." → "Me llamo {as:S-PER}{ier:S-PER} y vivo en {barcelona:S-LOC} todo el año."
- "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el {par:B-LOC}{k:I-LOC} {gü:E-LOC}{ell:E-LOC} tras salir del {barcelona:B-ORG} {super:I-ORG}{com:I-ORG}{pu:I-ORG}{ting:I-ORG} {cen:E-ORG}{ter:E-ORG}."
Model trained on original data:
- "Me llamo asier y vivo en barcelona todo el año." → "Me llamo asier y vivo en barcelona todo el año." (nothing)
- "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." → "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center." (nothing)
## Evaluation and results
F1 Score: 0.8867
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Geotrend/bert-base-ur-cased | d6dd16d492267f862ed86c3e843594f6203ae3d4 | 2021-05-18T20:14:23.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ur",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-ur-cased | 47 | null | transformers | 6,110 | ---
language: ur
datasets: wikipedia
license: apache-2.0
---
# bert-base-ur-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ur-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ur-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
IlyaGusev/rubert_ext_sum_gazeta | c0718e3691f0400eb215d230ffaf34cb2f42f391 | 2022-07-13T15:35:22.000Z | [
"pytorch",
"bert",
"token-classification",
"ru",
"dataset:IlyaGusev/gazeta",
"transformers",
"summarization",
"t5",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | IlyaGusev | null | IlyaGusev/rubert_ext_sum_gazeta | 47 | null | transformers | 6,111 | ---
language:
- ru
tags:
- summarization
- token-classification
- t5
datasets:
- IlyaGusev/gazeta
license: apache-2.0
inference: false
widget:
- text: "С 1 сентября в России вступают в силу поправки в закон «О банкротстве» — теперь должники смогут освобождаться от непосильных обязательств во внесудебном порядке, если сумма задолженности составляет не менее 50 тыс. рублей и не превышает 500 тыс. рублей без учета штрафов, пени, процентов за просрочку платежа и прочих имущественных или финансовых санкций.[SEP]У физлиц и индивидуальных предпринимателей появилась возможность пройти процедуру банкротства без участия суда и финансового управляющего — достаточно подать соответствующее заявление через МФЦ.[SEP]Сумму задолженности и список всех известных заявителю кредиторов нужно предоставить самостоятельно.[SEP]Если все условия соблюдены, сведения внесут в Единый федеральный реестр в течение трех рабочих дней.[SEP]При этом на момент подачи заявления в отношении заявителя должно быть окончено исполнительное производство с возвращением исполнительного документа взыскателю.[SEP]Это значит, что у потенциального банкрота не должно быть имущества, которое можно взыскать.[SEP]Кроме того, в отношении гражданина не должно быть возбуждено другое исполнительное производство.[SEP]В период всей процедуры заявитель не сможет брать займы, кредиты, выдавать поручительства, совершать иные обеспечительные сделки.[SEP]Внесудебное банкротство будет длиться шесть месяцев, в течение которых также будет действовать мораторий на удовлетворение требований кредиторов, отмеченных в заявлении должника, и мораторий об уплате обязательных платежей.[SEP]Кроме того, прекращается начисление неустоек и иных финансовых санкций; имущественные взыскания (кроме алиментов) также будут приостановлены.[SEP]По завершению процедуры заявителя освободят от дальнейшего выполнения требований кредиторов, указанных в заявлении о признании его банкротом, а эта задолженность признается безнадежной.[SEP]В прошлом месяце стало известно, что за первое полугодие 2020 года российские суды признали банкротами 42,7 тыс. граждан (в том числе индивидуальных предпринимателей) — по данным единого реестра «Федресурс», это на 47,2% больше показателя аналогичного периода 2019 года.[SEP]Рост числа обанкротившихся граждан во втором квартале по сравнению с первым замедлился — такая динамика обусловлена тем, что в период ограничений с 19 марта по 11 мая суды редко рассматривали банкротные дела компаний и меньше, чем обычно, в отношении граждан, объяснял руководитель проекта «Федресурс» Алексей Юхнин.[SEP]"
example_title: "Новости"
---
# RuBERTExtSumGazeta
## Model description
Model for extractive summarization based on [rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased)
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1Q8_v3H-kxdJhZIiyLYat7Kj02qDq7M1L)
```python
import torch
import razdel
from transformers import AutoTokenizer, BertForTokenClassification
model_name = "IlyaGusev/rubert_ext_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
sep_token = tokenizer.sep_token
sep_token_id = tokenizer.sep_token_id
model = BertForTokenClassification.from_pretrained(model_name)
article_text = "..."
sentences = [s.text for s in razdel.sentenize(article_text)]
article_text = sep_token.join(sentences)
inputs = tokenizer(
[article_text],
max_length=500,
padding="max_length",
truncation=True,
return_tensors="pt",
)
sep_mask = inputs["input_ids"][0] == sep_token_id
# Fix token_type_ids
current_token_type_id = 0
for pos, input_id in enumerate(inputs["input_ids"][0]):
inputs["token_type_ids"][0][pos] = current_token_type_id
if input_id == sep_token_id:
current_token_type_id = 1 - current_token_type_id
# Infer model
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits[0, :, 1]
# Choose sentences
logits = logits[sep_mask]
logits, indices = logits.sort(descending=True)
logits, indices = logits.cpu().tolist(), indices.cpu().tolist()
pairs = list(zip(logits, indices))
pairs = pairs[:3]
indices = list(sorted([idx for _, idx in pairs]))
summary = " ".join([sentences[idx] for idx in indices])
print(summary)
```
#### Limitations and bias
- The model should work well with Gazeta.ru articles, but for any other agencies it can suffer from domain shift
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
TBD
## Eval results
TBD
Evaluation: https://github.com/IlyaGusev/summarus/blob/master/evaluate.py
Flags: --language ru --tokenize-after --lower
|
Langboat/mengzi-oscar-base-retrieval | 84222de18113c7d2806dc9d2bc042aebbaa8c1b4 | 2021-10-14T02:18:16.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Langboat | null | Langboat/mengzi-oscar-base-retrieval | 47 | 2 | transformers | 6,112 | ---
language:
- zh
license: apache-2.0
---
# Mengzi-oscar-base-retrieval (Chinese Image-text retrieval model)
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
Mengzi-oscar-base-retrieval is fine-tuned from the Chinese multi-modal pre-trained model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the COCO-ir dataset.
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Llamacha/QuBERTa | 0f6c96c475e8ff91eba4bc73719aadf264de444e | 2022-02-07T09:14:51.000Z | [
"pytorch",
"roberta",
"fill-mask",
"qu",
"transformers",
"Llamacha",
"autotrain_compatible"
] | fill-mask | false | Llamacha | null | Llamacha/QuBERTa | 47 | null | transformers | 6,113 | ---
language:
- qu
tags:
- Llamacha
---
# QuBERTa
QuBERTa is a language model based on RoBERTa for Quechua. Our language model was pre-trained on 5M tokens of Southern Quechua (Collao and Chanka).
The model uses a byte-level BPE tokenizer with a vocabulary of 52,000 subword tokens.
## Usage
Once the weights and the tokenizer have been downloaded, place them together in a single folder, in this case `QuBERTa`.
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="./QuBERTa",
tokenizer="./QuBERTa"
)
```
A test run follows; it is still being improved.
```python
fill_mask("allinllachu <mask> allinlla huk wasipita.")
```
[{'score': 0.23992203176021576,
'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',
'token': 334,
'token_str': ' nisqaqa'},
{'score': 0.061005301773548126,
'sequence': 'allinllachu, allinlla huk wasipita.',
'token': 16,
'token_str': ','},
{'score': 0.028720015659928322,
'sequence': "allinllachu' allinlla huk wasipita.",
'token': 11,
'token_str': "'"},
{'score': 0.012927944771945477,
'sequence': 'allinllachu kay allinlla huk wasipita.',
'token': 377,
'token_str': ' kay'},
{'score': 0.01230092253535986,
'sequence': 'allinllachu. allinlla huk wasipita.',
'token': 18,
'token_str': '.'}]
|
Luciano/bert-base-portuguese-cased-finetuned-peticoes | 1b7f9dfb8b17c1e38d4ddade535130d9f60ceb16 | 2022-02-18T10:20:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"pt",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Luciano | null | Luciano/bert-base-portuguese-cased-finetuned-peticoes | 47 | null | transformers | 6,114 | ---
language:
- pt
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-portuguese-cased-finetuned-peticoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased-finetuned-peticoes
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0878
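Since this is a fill-mask checkpoint, it can be exercised with the standard pipeline; a minimal sketch with an illustrative (hypothetical) Portuguese sentence:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Luciano/bert-base-portuguese-cased-finetuned-peticoes")
print(fill_mask("O autor requer a [MASK] do processo."))
```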
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 215 | 1.1349 |
| No log | 2.0 | 430 | 1.0925 |
| 1.3219 | 3.0 | 645 | 1.0946 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
NtDNlp/sentence-embedding-vietnamese | d7500f88bb1558916656dec663644ea3c69a00d0 | 2021-05-27T08:51:12.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | NtDNlp | null | NtDNlp/sentence-embedding-vietnamese | 47 | null | transformers | 6,115 | #EmbeddingSimilarityEvaluator: Evaluating the model on STS.en-en.txt dataset in epoch 2 after 26000 steps:
| Type | Pearson | Spearman |
| ----------- | ----------- | ----------- |
| Cosine | 0.7650 | 0.8095 |
| Euclidean | 0.8089 | 0.8010 |
| Cosine | 0.8075 | 0.7999 |
| Euclidean | 0.7531 | 0.7680 |
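The scores above were produced with an EmbeddingSimilarityEvaluator over sentence embeddings. A minimal sketch of computing such embeddings with plain transformers is shown below; mean pooling over the last hidden state is an assumption (the pooling used during training is not documented here), and the Vietnamese sentences are illustrative.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "NtDNlp/sentence-embedding-vietnamese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["Hôm nay trời đẹp.", "Thời tiết hôm nay rất đẹp."]
enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state                 # (batch, seq, dim)
mask = enc["attention_mask"].unsqueeze(-1).float()
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)          # mean pooling
cos = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(float(cos))
```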
|
Rifky/IndoBERT-FakeNews | 792857fe1e032fc18077a80ba67ee4831052b1ee | 2021-09-16T17:54:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Rifky | null | Rifky/IndoBERT-FakeNews | 47 | null | transformers | 6,116 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: IndoBERT-FakeNews
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoBERT-FakeNews
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2507
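A minimal usage sketch with the standard text-classification pipeline (the label names and expected input format are not documented in this card, and the Indonesian example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Rifky/IndoBERT-FakeNews")
print(classifier("Pemerintah mengumumkan kebijakan vaksinasi baru hari ini."))
```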
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 222 | 0.2507 |
| No log | 2.0 | 444 | 0.3830 |
| 0.2755 | 3.0 | 666 | 0.5660 |
| 0.2755 | 4.0 | 888 | 0.5165 |
| 0.1311 | 5.0 | 1110 | 0.5573 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
SEBIS/legal_t5_small_summ_en | a78a8c462825accf01c77b6e307d09f47a1d2f45 | 2021-06-23T11:21:55.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English",
"dataset:jrc-acquis",
"transformers",
"summarization English model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_summ_en | 47 | null | transformers | 6,117 |
---
language: English
tags:
- summarization English model
datasets:
- jrc-acquis
widget:
- text: >
THE COMMISSION OF THE EUROPEAN COMMUNITIES, Having regard to the Treaty establishing
the European Community, Having regard to Council Regulation (EC) No 1255/1999 of 17 May 1999
on the common organisation of the market in milk and milk products [1], and in particular Article 15 thereof,
Whereas: (1) Article 7(1) of Commission Regulation (EC) No 2799/1999 [2] fixes the amount of aid for
skimmed milk and skimmed-milk powder intended for animal feed taking into account the factors set out
in Article 11(2) of Regulation (EC) No 1255/1999. In view of the developments in the market price of
skimmed-milk powder, of the increase in the market prices for competing proteins, and of the reduction
of the supply of skimmed-milk powder, the amount of aid should be reduced. (2) Regulation (EC)
No 2799/1999 should therefore be amended accordingly. (3) The Management Committee for Milk and
Milk Products has not delivered an opinion within the time-limit set by its chairman,
HAS ADOPTED THIS REGULATION: Article 1 In Article 7 of Regulation (EC) No 2799/1999, paragraph 1 is replaced by the following: "1. Aid is fixed at: (a) EUR 1,62 per 100 kg of skimmed milk with a protein content of not less than 35,6 % of the non-fatty dry extract; (b) EUR 1,42 per 100 kg of skimmed milk with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract; (c) EUR 20,00 per 100 kg of skimmed-milk powder with a protein content of not less than 35,6 % of the non-fatty dry extract; (d) EUR 17,64 per 100 kg of skimmed-milk powder with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract." Article 2 This Regulation shall enter into force on the day following its publication in the Official Journal of the European Union. This Regulation shall be binding in its entirety and directly applicable in all Member States. Done at Brussels, 19 April 2006. For the Commission Mariann Fischer Boel Member of the Commission [1] OJ L 160, 26.6.1999, p. 48. Regulation as last amended by Regulation (EC) No 1913/2005 (OJ L 307, 25.11.2005, p. 2). [2] OJ L 340, 31.12.1999, p. 3.
Regulation as last amended by Regulation (EC) No 1194/2005 (OJ L 194, 26.7.2005, p. 7).
---
# legal_t5_small_summ_en model
Model for Summarization of legal text written in English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpus from jrc-acquis.
## Model description
legal_t5_small_summ_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in English.
### How to use
Here is how to use this model to summarize legal text written in English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_summ_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "THE COMMISSION OF THE EUROPEAN COMMUNITIES, Having regard to the Treaty establishing the European Community, Having regard to Council Regulation (EC) No 1255/1999 of 17 May 1999 on the common organisation of the market in milk and milk products [1], and in particular Article 15 thereof, Whereas: (1) Article 7(1) of Commission Regulation (EC) No 2799/1999 [2] fixes the amount of aid for skimmed milk and skimmed-milk powder intended for animal feed taking into account the factors set out in Article 11(2) of Regulation (EC) No 1255/1999. In view of the developments in the market price of skimmed-milk powder, of the increase in the market prices for competing proteins, and of the reduction of the supply of skimmed-milk powder, the amount of aid should be reduced. (2) Regulation (EC) No 2799/1999 should therefore be amended accordingly. (3) The Management Committee for Milk and Milk Products has not delivered an opinion within the time-limit set by its chairman, HAS ADOPTED THIS REGULATION: Article 1 In Article 7 of Regulation (EC) No 2799/1999, paragraph 1 is replaced by the following: "1. Aid is fixed at: (a) EUR 1,62 per 100 kg of skimmed milk with a protein content of not less than 35,6 % of the non-fatty dry extract; (b) EUR 1,42 per 100 kg of skimmed milk with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract; (c) EUR 20,00 per 100 kg of skimmed-milk powder with a protein content of not less than 35,6 % of the non-fatty dry extract; (d) EUR 17,64 per 100 kg of skimmed-milk powder with a protein content of not less than 31,4 % but less than 35,6 % of the non-fatty dry extract." Article 2 This Regulation shall enter into force on the day following its publication in the Official Journal of the European Union. This Regulation shall be binding in its entirety and directly applicable in all Member States. Done at Brussels, 19 April 2006. For the Commission Mariann Fischer Boel Member of the Commission [1] OJ L 160, 26.6.1999, p. 48. Regulation as last amended by Regulation (EC) No 1913/2005 (OJ L 307, 25.11.2005, p. 2). [2] OJ L 340, 31.12.1999, p. 3. Regulation as last amended by Regulation (EC) No 1194/2005 (OJ L 194, 26.7.2005, p. 7). -------------------------------------------------- "
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_summ_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 Thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (across all language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
### Pretraining
## Evaluation results
When evaluated on the test dataset, the model achieves the following results:
Test results :
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_en | 78.11 | 68.78 | 77.0 |
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
TransQuest/monotransquest-da-ru_en-reddit_wikiquotes | 8cd7efdbb7b13c7eae9234e6bd3f6a4a8d2ec3cc | 2021-06-03T19:09:24.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru-en",
"transformers",
"Quality Estimation",
"monotransquest",
"DA",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-da-ru_en-reddit_wikiquotes | 47 | null | transformers | 6,118 | ---
language: ru-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-ru_en-reddit_wikiquotes", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
anegi/autonlp-dialogue-summariztion-583416409 | b08fdc8c06f7aacd57b050158b360cd97e280683 | 2022-02-20T06:52:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:anegi/autonlp-data-dialogue-summariztion",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | anegi | null | anegi/autonlp-dialogue-summariztion-583416409 | 47 | 1 | transformers | 6,119 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anegi/autonlp-data-dialogue-summariztion
co2_eq_emissions: 72.26141764997115
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 583416409
- CO2 Emissions (in grams): 72.26141764997115
## Validation Metrics
- Loss: 1.4701834917068481
- Rouge1: 47.7785
- Rouge2: 24.8518
- RougeL: 40.2231
- RougeLsum: 43.9487
- Gen Len: 18.8029
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anegi/autonlp-dialogue-summariztion-583416409
``` |
cardiffnlp/bertweet-base-emoji | b9651816c5192d2946a5aa40d61fbe79b9268d2e | 2021-05-20T14:43:48.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-emoji | 47 | 1 | transformers | 6,120 | |
cardiffnlp/twitter-roberta-base-mar2020 | 30d4b57d6e15351853a4bd693ea25b2720a3175e | 2022-02-09T11:13:09.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.03829",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-mar2020 | 47 | null | transformers | 6,121 | # Twitter March 2020 (RoBERTa-base, 94M)
This is a RoBERTa-base model trained on 94.46M tweets until the end of March 2020.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-mar2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def print_candidates():
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
print_candidates()
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.57291 not
2) 0.14380 getting
3) 0.06983 self
4) 0.06813 fully
5) 0.02965 being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.05637 book
2) 0.04557 laptop
3) 0.03842 wallet
4) 0.03824 pillow
5) 0.03485 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.59311 the
2) 0.18969 The
3) 0.04493 this
4) 0.02133 End
5) 0.00796 This
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text):
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
return features_mean
MODEL = "cardiffnlp/twitter-roberta-base-mar2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.98956 The movie was great
2) 0.96389 Just finished reading 'Embeddings in NLP'
3) 0.95678 I just ordered fried chicken 🐣
4) 0.95588 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-mar2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` |
deutsche-telekom/mt5-small-sum-de-mit-v1 | c7c12dbe023f38abeeeb598e06013ece248aa6e7 | 2021-08-05T10:17:20.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"de",
"dataset:swiss_text_2019",
"transformers",
"summarization",
"license:mit",
"autotrain_compatible"
] | summarization | false | deutsche-telekom | null | deutsche-telekom/mt5-small-sum-de-mit-v1 | 47 | 2 | transformers | 6,122 | ---
language:
- de
license: mit
tags:
- summarization
datasets:
- swiss_text_2019
---
# mT5-small-sum-de-mit-v1
This is a German summarization model. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small). The special characteristic of this model is that, unlike many other models, it is licensed under a permissive open source license (MIT). Among other things, this license allows commercial use.
[](https://www.welove.ai/)
This model is provided by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
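A minimal usage sketch (not part of the original card): it assumes the standard transformers seq2seq API and reuses the `summarize: ` source prefix and the length limits documented in the training section below; the beam setting and the input text are illustrative.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "deutsche-telekom/mt5-small-sum-de-mit-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Die Stadt stellte am Montag ihr neues Verkehrskonzept vor. ..."  # German article text (illustrative)
inputs = tokenizer("summarize: " + text, max_length=800, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```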
## Training
The training was conducted with the following hyperparameters:
- base model: [google/mt5-small](https://huggingface.co/google/mt5-small)
- source_prefix: `"summarize: "`
- batch size: 3 (6)
- max_source_length: 800
- max_target_length: 96
- warmup_ratio: 0.3
- number of train epochs: 10
- gradient accumulation steps: 2
- learning rate: 5e-5
## Datasets and Preprocessing
The datasets were preprocessed as follows:
The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected.
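As an illustration of that filter, a sketch of the length check (the `summary` field name and the use of the `datasets` library are assumptions):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

def short_enough(record):
    # keep only records whose summary has at most 94 tokens
    return len(tokenizer(record["summary"]).input_ids) <= 94

# dataset = dataset.filter(short_enough)  # e.g. with a datasets.Dataset
```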
This model is trained on the following dataset:
| Name | Language | Size | License
|------|----------|------|--------
| [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | 84,564 | Concrete license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html).
We have permission to use the Swisstext dataset and release the resulting summarization model under MIT license (see [permission-declaration-swisstext.pdf](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/resolve/main/permission-declaration-swisstext.pdf)).
## Evaluation on MLSUM German Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| deutsche-telekom/mt5-small-sum-de-mit-v1 (this) | 16.8023 | 3.5531 | 12.6884 | 14.7624
| [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946
| **[deutsche-telekom/mt5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1)** | **21.7336** | **7.2614** | **17.1323** | **19.3977**
## License
Copyright (c) 2021 Philip May, Deutsche Telekom AG
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/blob/main/LICENSE) in the repository.
|
fgaim/tiroberta-pos | 68db11ed771befe4c46e0c3d04f68e4af84d8e7f | 2022-05-14T06:40:08.000Z | [
"pytorch",
"roberta",
"token-classification",
"ti",
"dataset:TLMD",
"dataset:NTC",
"transformers",
"model-index",
"autotrain_compatible"
] | token-classification | false | fgaim | null | fgaim/tiroberta-pos | 47 | 1 | transformers | 6,123 | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
datasets:
- TLMD
- NTC
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: tiroberta-base-pos
results:
- task:
name: Token Classification
type: token-classification
metrics:
- name: F1
type: f1
value: 0.9562
- name: Precision
type: precision
value: 0.9562
- name: Recall
type: recall
value: 0.9562
- name: Accuracy
type: accuracy
value: 0.9562
---
# Tigrinya POS tagging with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/tiroberta) on the NTC-v1 dataset (Tedla et al. 2016).
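A minimal usage sketch with the standard token-classification pipeline; the example sentence is the one from the widget metadata above, and the aggregation strategy is an assumption:
```python
from transformers import pipeline

pos_tagger = pipeline(
    "token-classification",
    model="fgaim/tiroberta-pos",
    aggregation_strategy="simple",
)
print(pos_tagger("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"))
```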
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Results
The model achieves the following results on the test set:
- Loss: 0.3194
- Adj Precision: 0.9219
- Adj Recall: 0.9335
- Adj F1: 0.9277
- Adj Number: 1670
- Adv Precision: 0.8297
- Adv Recall: 0.8554
- Adv F1: 0.8423
- Adv Number: 484
- Con Precision: 0.9844
- Con Recall: 0.9763
- Con F1: 0.9804
- Con Number: 972
- Fw Precision: 0.7895
- Fw Recall: 0.5357
- Fw F1: 0.6383
- Fw Number: 28
- Int Precision: 0.6552
- Int Recall: 0.7308
- Int F1: 0.6909
- Int Number: 26
- N Precision: 0.9650
- N Recall: 0.9662
- N F1: 0.9656
- N Number: 3992
- Num Precision: 0.9747
- Num Recall: 0.9665
- Num F1: 0.9706
- Num Number: 239
- N Prp Precision: 0.9308
- N Prp Recall: 0.9447
- N Prp F1: 0.9377
- N Prp Number: 470
- N V Precision: 0.9854
- N V Recall: 0.9736
- N V F1: 0.9794
- N V Number: 416
- Pre Precision: 0.9722
- Pre Recall: 0.9625
- Pre F1: 0.9673
- Pre Number: 907
- Pro Precision: 0.9448
- Pro Recall: 0.9236
- Pro F1: 0.9341
- Pro Number: 445
- Pun Precision: 1.0
- Pun Recall: 0.9994
- Pun F1: 0.9997
- Pun Number: 1607
- Unc Precision: 1.0
- Unc Recall: 0.875
- Unc F1: 0.9333
- Unc Number: 16
- V Precision: 0.8780
- V Recall: 0.9231
- V F1: 0.9
- V Number: 78
- V Aux Precision: 0.9685
- V Aux Recall: 0.9878
- V Aux F1: 0.9780
- V Aux Number: 654
- V Ger Precision: 0.9388
- V Ger Recall: 0.9571
- V Ger F1: 0.9479
- V Ger Number: 513
- V Imf Precision: 0.9634
- V Imf Recall: 0.9497
- V Imf F1: 0.9565
- V Imf Number: 914
- V Imv Precision: 0.8793
- V Imv Recall: 0.7286
- V Imv F1: 0.7969
- V Imv Number: 70
- V Prf Precision: 0.8960
- V Prf Recall: 0.9082
- V Prf F1: 0.9020
- V Prf Number: 294
- V Rel Precision: 0.9678
- V Rel Recall: 0.9538
- V Rel F1: 0.9607
- V Rel Number: 757
- Overall Precision: 0.9562
- Overall Recall: 0.9562
- Overall F1: 0.9562
- Overall Accuracy: 0.9562
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tedla, Y., Yamamoto, K. & Marasinghe, A. 2016.
Tigrinya Part-of-Speech Tagging with Morphological Patterns and the New Nagaoka Tigrinya Corpus.
International Journal Of Computer Applications 146 pp. 33-41 (2016).
```
|
huggingtweets/nytimes | cd906983b748414bc7c2a74ceb00c37eee28bbf3 | 2021-10-26T04:52:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nytimes | 47 | null | transformers | 6,124 | ---
language: en
thumbnail: https://www.huggingtweets.com/nytimes/1635223960388/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1098244578472280064/gjkVMelR_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The New York Times</div>
<div style="text-align: center; font-size: 14px;">@nytimes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The New York Times.
| Data | The New York Times |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 503 |
| Short tweets | 0 |
| Tweets kept | 2747 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19jlccdf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nytimes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23hnup9i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23hnup9i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nytimes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
manishiitg/longformer-recruit-qa | 22381ad36f5b2dff159dd03c2793ad5219e6a917 | 2020-11-22T06:49:37.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manishiitg | null | manishiitg/longformer-recruit-qa | 47 | null | transformers | 6,125 | Entry not found |
maximedb/mfaq-bert | af704097333296e2855a4f908bad2f78d987dc25 | 2021-10-11T08:34:03.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | maximedb | null | maximedb/mfaq-bert | 47 | null | transformers | 6,126 | Entry not found |
meghanabhange/Hinglish-Bert | 601b87008b0a4283bc93be8bc22cccdc77141c25 | 2021-05-19T23:14:48.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | meghanabhange | null | meghanabhange/Hinglish-Bert | 47 | null | transformers | 6,127 | Entry not found |
ml6team/mbart-large-cc25-cnn-dailymail-nl | d04a79b70564b8c825a4682dbb2670845fd16cc1 | 2022-05-16T11:41:37.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"nl",
"dataset:ml6team/cnn_dailymail_nl",
"transformers",
"bart",
"summarization",
"autotrain_compatible"
] | summarization | false | ml6team | null | ml6team/mbart-large-cc25-cnn-dailymail-nl | 47 | 6 | transformers | 6,128 | ---
language:
- nl
tags:
- mbart
- bart
- summarization
datasets:
- ml6team/cnn_dailymail_nl
pipeline_tag: summarization
widget:
- text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.'
- text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.'
---
# mbart-large-cc25-cnn-dailymail-nl
## Model description
Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97)
## Intended uses & limitations
It's meant for summarizing Dutch news articles.
#### How to use
```python
import transformers
undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained(
"ml6team/mbart-large-cc25-cnn-dailymail-nl"
)
tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
summarization_pipeline = transformers.pipeline(
task="summarization",
model=undisputed_best_model,
tokenizer=tokenizer,
)
summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[
"nl_XX"
]
article = "Kan je dit even samenvatten alsjeblief." # Dutch
summarization_pipeline(
article,
do_sample=True,
top_p=0.75,
top_k=50,
# num_beams=4,
min_length=50,
early_stopping=True,
truncation=True,
)[0]["summary_text"]
```
## Training data
Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
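To take a quick look at that training data, here is a minimal sketch (assuming the dataset exposes a standard `train` split; the column names are not documented in this card):
```python
from datasets import load_dataset
# Sketch: load the Dutch CNN/DailyMail dataset linked above and inspect one example.
dataset = load_dataset("ml6team/cnn_dailymail_nl")
print(dataset)
print(dataset["train"][0])  # assumes a `train` split; shows the available columns
```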
|
mrm8488/camembert-base-finetuned-pawsx-fr | 4d6091ae8d9bbe561994dbe97cf10f8963aec6da | 2021-04-28T15:51:53.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"dataset:xtreme",
"transformers",
"nli"
] | text-classification | false | mrm8488 | null | mrm8488/camembert-base-finetuned-pawsx-fr | 47 | null | transformers | 6,129 | ---
language: fr
datasets:
- xtreme
tags:
- nli
widget:
- text: "La première série a été mieux reçue par la critique que la seconde. La seconde série a été bien accueillie par la critique, mieux que la première."
---
# Camembert-base fine-tuned on PAWS-X-fr for Paraphrase Identification (NLI)
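A minimal usage sketch with the `transformers` text-classification pipeline; the sentence pair is passed as a single string, as in the widget above, and the returned label names depend on the model's config, which is not documented here:
```python
from transformers import pipeline
# Sketch: query the fine-tuned paraphrase-identification model.
classifier = pipeline("text-classification", model="mrm8488/camembert-base-finetuned-pawsx-fr")
pair = (
    "La première série a été mieux reçue par la critique que la seconde. "
    "La seconde série a été bien accueillie par la critique, mieux que la première."
)
print(classifier(pair))  # label names come from the model config (e.g. paraphrase / not paraphrase)
```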
|
qqhann/wav2vec2-large-xlsr-japanese-0325-1200 | 5518a1477809d3523f50d931af7a86057ebda009 | 2021-03-29T10:26:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | qqhann | null | qqhann/wav2vec2-large-xlsr-japanese-0325-1200 | 47 | null | transformers | 6,130 | ---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ja
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: { wer_result_on_test } #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on {language} using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}. #TODO: replace {language} with your language, _e.g._ French and eventually add more datasets that were used and eventually remove common voice if model was not trained on common voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: XX.XX %
<!-- # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags. -->
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ...
<!-- # TODO: adapt to state all the datasets that were used for training. -->
The script used for training can be found [here](...)
<!-- # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. -->
|
replydotai/albert-xxlarge-v1-finetuned-squad2 | 81ae35092cf89980882fb4f3a3a41a41358ddcd1 | 2020-04-24T16:05:36.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | replydotai | null | replydotai/albert-xxlarge-v1-finetuned-squad2 | 47 | null | transformers | 6,131 | Entry not found |
ttop324/kogpt2novel | edd5bebf92672e4544c01d153d2a8adc9a5ce771 | 2021-09-23T16:41:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ko",
"transformers",
"license:cc-by-nc-sa-4.0"
] | text-generation | false | ttop324 | null | ttop324/kogpt2novel | 47 | 0 | transformers | 6,132 | ---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
novel finetuned from skt/kogpt2-base-v2 |
uer/chinese_roberta_L-6_H-768 | 126f267ff7e4855d384702d3fea641bfa42b3356 | 2022-07-15T08:13:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-6_H-768 | 47 | null | transformers | 6,133 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below (a small grid sketch follows the list) and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
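A small illustrative loop over that grid (the actual fine-tuning was done with UER-py, not with this snippet):
```python
from itertools import product
epochs = [3, 5, 8]
batch_sizes = [32, 64]
learning_rates = [3e-5, 1e-4, 3e-4]
# Enumerate the 18 fine-tuning configurations searched for each downstream task.
for n_epochs, batch_size, lr in product(epochs, batch_sizes, learning_rates):
    print(f"epochs={n_epochs}, batch_size={batch_size}, lr={lr}, seq_length=128")
```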
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
usc-isi/sbert-roberta-large-anli-mnli-snli | 922af2f0087b6fae99f3d1705f1aa6495ac7656e | 2021-12-05T21:04:27.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"en",
"dataset:anli",
"dataset:multi_nli",
"dataset:snli",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | usc-isi | null | usc-isi/sbert-roberta-large-anli-mnli-snli | 47 | null | sentence-transformers | 6,134 | ---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- anli
- multi_nli
- snli
---
# sbert-roberta-large-anli-mnli-snli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The model is initialized from RoBERTa-large weights and trained on ANLI (Nie et al., 2020), MNLI (Williams et al., 2018), and SNLI (Bowman et al., 2015) using the [`training_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/v0.3.5/examples/training/nli/training_nli.py) example script.
Training Details:
- Learning rate: 2e-5
- Batch size: 8
- Pooling: Mean
- Training time: ~20 hours on one [NVIDIA GeForce RTX 2080 Ti](https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("usc-isi/sbert-roberta-large-anli-mnli-snli")
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (Hugging Face Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Each sentence is converted"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
model = AutoModel.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
See section 4.1 of our paper for evaluation results.
## Full Model Architecture
```text
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
For more information about the project, see our paper:
> Ciosici, Manuel, et al. "Machine-Assisted Script Curation." _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations_, Association for Computational Linguistics, 2021, pp. 8–17. _ACLWeb_, <https://www.aclweb.org/anthology/2021.naacl-demos.2>.
## References
- Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. [A large annotated corpus for learning natural language inference](https://doi.org/10.18653/v1/D15-1075). In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
- Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. [AdversarialNLI: A new benchmark for natural language understanding](https://doi.org/10.18653/v1/2020.acl-main.441). In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 4885–4901, Online. Association for Computational Linguistics.
- Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. [A broad-coverage challenge corpus for sentence understanding through inference](https://doi.org/10.18653/v1/N18-1101). In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
|
w11wo/wav2vec2-xls-r-300m-zh-HK-lm-v2 | 563903f40dcefc6237b7ad93eea93948a6b95f7d | 2022-03-23T18:33:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"zh-HK",
"dataset:common_voice",
"arxiv:2111.09296",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | w11wo | null | w11wo/wav2vec2-xls-r-300m-zh-HK-lm-v2 | 47 | null | transformers | 6,135 | ---
language: zh-HK
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: Wav2Vec2 XLS-R 300M Cantonese (zh-HK) LM
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 24.09
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 56.86
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 55.76
---
# Wav2Vec2 XLS-R 300M Cantonese (zh-HK) LM
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) LM is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `zh-HK` subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. A 5-gram language model, trained on multiple [PyCantonese](https://pycantonese.org/data.html) corpora, was subsequently added to this model.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All necessary scripts used for training could be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-lm-v2/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-lm-v2/tensorboard) logged via Tensorboard.
As for the N-gram language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by HuggingFace.
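A minimal usage sketch with the `transformers` ASR pipeline (the audio path is a placeholder for a local 16 kHz recording; decoding with the bundled n-gram language model additionally requires `pyctcdecode` and `kenlm` to be installed):
```python
from transformers import pipeline
# Sketch: transcribe a Cantonese recording with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="w11wo/wav2vec2-xls-r-300m-zh-HK-lm-v2")
print(asr("audio.wav"))  # "audio.wav" is a placeholder path
```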
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| --------------------------------- | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-zh-HK-lm-v2` | 300M | XLS-R | `Common Voice zh-HK` Dataset |
## Evaluation Results
The model achieves the following results on evaluation without a language model:
| Dataset | CER |
| -------------------------------- | ------ |
| `Common Voice` | 31.73% |
| `Common Voice 7` | 23.11% |
| `Common Voice 8` | 23.02% |
| `Robust Speech Event - Dev Data` | 56.60% |
With the addition of the language model, it achieves the following results:
| Dataset | CER |
| -------------------------------- | ------ |
| `Common Voice` | 24.09% |
| `Common Voice 7` | 23.10% |
| `Common Voice 8` | 23.02% |
| `Robust Speech Event - Dev Data` | 56.86% |
## Training procedure
The training process did not involve the addition of a language model. The following results were simply lifted from the original automatic speech recognition [model training](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2).
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 0.0001
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 100.0
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 69.8341 | 1.34 | 500 | 80.0722 | 1.0 | 1.0 |
| 6.6418 | 2.68 | 1000 | 6.6346 | 1.0 | 1.0 |
| 6.2419 | 4.02 | 1500 | 6.2909 | 1.0 | 1.0 |
| 6.0813 | 5.36 | 2000 | 6.1150 | 1.0 | 1.0 |
| 5.9677 | 6.7 | 2500 | 6.0301 | 1.1386 | 1.0028 |
| 5.9296 | 8.04 | 3000 | 5.8975 | 1.2113 | 1.0058 |
| 5.6434 | 9.38 | 3500 | 5.5404 | 2.1624 | 1.0171 |
| 5.1974 | 10.72 | 4000 | 4.5440 | 2.1702 | 0.9366 |
| 4.3601 | 12.06 | 4500 | 3.3839 | 2.2464 | 0.8998 |
| 3.9321 | 13.4 | 5000 | 2.8785 | 2.3097 | 0.8400 |
| 3.6462 | 14.74 | 5500 | 2.5108 | 1.9623 | 0.6663 |
| 3.5156 | 16.09 | 6000 | 2.2790 | 1.6479 | 0.5706 |
| 3.32 | 17.43 | 6500 | 2.1450 | 1.8337 | 0.6244 |
| 3.1918 | 18.77 | 7000 | 1.8536 | 1.9394 | 0.6017 |
| 3.1139 | 20.11 | 7500 | 1.7205 | 1.9112 | 0.5638 |
| 2.8995 | 21.45 | 8000 | 1.5478 | 1.0624 | 0.3250 |
| 2.7572 | 22.79 | 8500 | 1.4068 | 1.1412 | 0.3367 |
| 2.6881 | 24.13 | 9000 | 1.3312 | 2.0100 | 0.5683 |
| 2.5993 | 25.47 | 9500 | 1.2553 | 2.0039 | 0.6450 |
| 2.5304 | 26.81 | 10000 | 1.2422 | 2.0394 | 0.5789 |
| 2.4352 | 28.15 | 10500 | 1.1582 | 1.9970 | 0.5507 |
| 2.3795 | 29.49 | 11000 | 1.1160 | 1.8255 | 0.4844 |
| 2.3287 | 30.83 | 11500 | 1.0775 | 1.4123 | 0.3780 |
| 2.2622 | 32.17 | 12000 | 1.0704 | 1.7445 | 0.4894 |
| 2.2225 | 33.51 | 12500 | 1.0272 | 1.7237 | 0.5058 |
| 2.1843 | 34.85 | 13000 | 0.9756 | 1.8042 | 0.5028 |
| 2.1 | 36.19 | 13500 | 0.9527 | 1.8909 | 0.6055 |
| 2.0741 | 37.53 | 14000 | 0.9418 | 1.9026 | 0.5880 |
| 2.0179 | 38.87 | 14500 | 0.9363 | 1.7977 | 0.5246 |
| 2.0615 | 40.21 | 15000 | 0.9635 | 1.8112 | 0.5599 |
| 1.9448 | 41.55 | 15500 | 0.9249 | 1.7250 | 0.4914 |
| 1.8966 | 42.89 | 16000 | 0.9023 | 1.5829 | 0.4319 |
| 1.8662 | 44.24 | 16500 | 0.9002 | 1.4833 | 0.4230 |
| 1.8136 | 45.58 | 17000 | 0.9076 | 1.1828 | 0.2987 |
| 1.7908 | 46.92 | 17500 | 0.8774 | 1.5773 | 0.4258 |
| 1.7354 | 48.26 | 18000 | 0.8727 | 1.5037 | 0.4024 |
| 1.6739 | 49.6 | 18500 | 0.8636 | 1.1239 | 0.2789 |
| 1.6457 | 50.94 | 19000 | 0.8516 | 1.2269 | 0.3104 |
| 1.5847 | 52.28 | 19500 | 0.8399 | 1.3309 | 0.3360 |
| 1.5971 | 53.62 | 20000 | 0.8441 | 1.3153 | 0.3335 |
| 1.602 | 54.96 | 20500 | 0.8590 | 1.2932 | 0.3433 |
| 1.5063 | 56.3 | 21000 | 0.8334 | 1.1312 | 0.2875 |
| 1.4631 | 57.64 | 21500 | 0.8474 | 1.1698 | 0.2999 |
| 1.4997 | 58.98 | 22000 | 0.8638 | 1.4279 | 0.3854 |
| 1.4301 | 60.32 | 22500 | 0.8550 | 1.2737 | 0.3300 |
| 1.3798 | 61.66 | 23000 | 0.8266 | 1.1802 | 0.2934 |
| 1.3454 | 63.0 | 23500 | 0.8235 | 1.3816 | 0.3711 |
| 1.3678 | 64.34 | 24000 | 0.8550 | 1.6427 | 0.5035 |
| 1.3761 | 65.68 | 24500 | 0.8510 | 1.6709 | 0.4907 |
| 1.2668 | 67.02 | 25000 | 0.8515 | 1.5842 | 0.4505 |
| 1.2835 | 68.36 | 25500 | 0.8283 | 1.5353 | 0.4221 |
| 1.2961 | 69.7 | 26000 | 0.8339 | 1.5743 | 0.4369 |
| 1.2656 | 71.05 | 26500 | 0.8331 | 1.5331 | 0.4217 |
| 1.2556 | 72.39 | 27000 | 0.8242 | 1.4708 | 0.4109 |
| 1.2043 | 73.73 | 27500 | 0.8245 | 1.4469 | 0.4031 |
| 1.2722 | 75.07 | 28000 | 0.8202 | 1.4924 | 0.4096 |
| 1.202 | 76.41 | 28500 | 0.8290 | 1.3807 | 0.3719 |
| 1.1679 | 77.75 | 29000 | 0.8195 | 1.4097 | 0.3749 |
| 1.1967 | 79.09 | 29500 | 0.8059 | 1.2074 | 0.3077 |
| 1.1241 | 80.43 | 30000 | 0.8137 | 1.2451 | 0.3270 |
| 1.1414 | 81.77 | 30500 | 0.8117 | 1.2031 | 0.3121 |
| 1.132 | 83.11 | 31000 | 0.8234 | 1.4266 | 0.3901 |
| 1.0982 | 84.45 | 31500 | 0.8064 | 1.3712 | 0.3607 |
| 1.0797 | 85.79 | 32000 | 0.8167 | 1.3356 | 0.3562 |
| 1.0119 | 87.13 | 32500 | 0.8215 | 1.2754 | 0.3268 |
| 1.0216 | 88.47 | 33000 | 0.8163 | 1.2512 | 0.3184 |
| 1.0375 | 89.81 | 33500 | 0.8137 | 1.2685 | 0.3290 |
| 0.9794 | 91.15 | 34000 | 0.8220 | 1.2724 | 0.3255 |
| 1.0207 | 92.49 | 34500 | 0.8165 | 1.2906 | 0.3361 |
| 1.0169 | 93.83 | 35000 | 0.8153 | 1.2819 | 0.3305 |
| 1.0127 | 95.17 | 35500 | 0.8187 | 1.2832 | 0.3252 |
| 0.9978 | 96.51 | 36000 | 0.8111 | 1.2612 | 0.3210 |
| 0.9923 | 97.85 | 36500 | 0.8076 | 1.2278 | 0.3122 |
| 1.0451 | 99.2 | 37000 | 0.8086 | 1.2451 | 0.3156 |
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) LM was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
yangheng/deberta-v3-large-absa | 9b4cc04cb1bad25805ecb1086081c294388702b5 | 2022-04-22T20:51:51.000Z | [
"pytorch",
"deberta-v2",
"en",
"dataset:laptop14 (w/ augmentation)",
"dataset:restaurant14 (w/ augmentation)",
"dataset:restaurant16 (w/ augmentation)",
"dataset:ACL-Twitter (w/ augmentation)",
"dataset:MAMS (w/ augmentation)",
"dataset:Television (w/ augmentation)",
"dataset:TShirt (w/ augmentation)",
"dataset:Yelp (w/ augmentation)",
"arxiv:2110.08604",
"transformers",
"aspect-based-sentiment-analysis",
"lcf-bert",
"license:mit"
] | null | false | yangheng | null | yangheng/deberta-v3-large-absa | 47 | 1 | transformers | 6,136 | ---
language:
- en
tags:
- aspect-based-sentiment-analysis
- lcf-bert
license: mit
datasets:
- laptop14 (w/ augmentation)
- restaurant14 (w/ augmentation)
- restaurant16 (w/ augmentation)
- ACL-Twitter (w/ augmentation)
- MAMS (w/ augmentation)
- Television (w/ augmentation)
- TShirt (w/ augmentation)
- Yelp (w/ augmentation)
metrics:
- accuracy
- macro-f1
---
# Note
This model is trained with 180k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets (except for the Rest15 dataset).
# DeBERTa for aspect-based sentiment analysis
The `deberta-v3-large-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
## Training Model
This model is trained with the FAST-LSA-T model from [PyABSA](https://github.com/yangheng95/PyABSA), using `microsoft/deberta-v3-large` as the backbone.
To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).
## Usage
```python3
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-large-absa")
model = AutoModel.from_pretrained("yangheng/deberta-v3-large-absa")
inputs = tokenizer("good product especially video and audio quality fantastic.", return_tensors="pt")
outputs = model(**inputs)
```
## Example in PyABSA
An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) for using FAST-LSA-T in PyABSA
## Datasets
This model is fine-tuned with 180k examples for the ABSA dataset (including augmented data). Training dataset files:
```
loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/laptop14/0.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/1.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/2.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/laptop14/3.cross_boost.fast_lcf_bert_Laptop14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/0.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/1.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/2.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/3.cross_boost.fast_lcf_bert_Restaurant14_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/0.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/1.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/2.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/3.cross_boost.fast_lcf_bert_Restaurant16_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/0.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/1.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/2.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/3.cross_boost.fast_lcf_bert_Twitter_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
loading: integrated_datasets/apc_datasets/MAMS/0.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/1.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/2.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/MAMS/3.cross_boost.fast_lcf_bert_MAMS_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
loading: integrated_datasets/apc_datasets/Television/0.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/1.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/2.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Television/3.cross_boost.fast_lcf_bert_Television_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
loading: integrated_datasets/apc_datasets/TShirt/0.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/1.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/2.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/TShirt/3.cross_boost.fast_lcf_bert_TShirt_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
loading: integrated_datasets/apc_datasets/Yelp/0.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/1.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/2.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
loading: integrated_datasets/apc_datasets/Yelp/3.cross_boost.fast_lcf_bert_Yelp_deberta-v3-base.train.augment
```
If you use this model in your research, please cite our paper:
```
@article{YangZMT21,
author = {Heng Yang and
Biqing Zeng and
Mayi Xu and
Tianxing Wang},
title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
Sentiment Dependency Learning},
journal = {CoRR},
volume = {abs/2110.08604},
year = {2021},
url = {https://arxiv.org/abs/2110.08604},
eprinttype = {arXiv},
eprint = {2110.08604},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
edbeeching/decision-transformer-gym-walker2d-medium | da2238c5041fd7fb197d6305f668880454b2f3d4 | 2022-06-29T19:21:47.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
] | reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-walker2d-medium | 47 | null | transformers | 6,137 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium trajectories sampled from the Gym Walker2d environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium trajectories sampled from the Gym Walker2d environment.
The following normalization coefficients are required to use this model:
mean = [ 1.218966, 0.14163373, -0.03704914, -0.1381431, 0.51382244, -0.0471911, -0.47288352, 0.04225416, 2.3948874, -0.03143199, 0.04466356, -0.02390724, -0.10134014, 0.09090938, -0.00419264, -0.12120572, -0.5497064]
std = [0.12311358, 0.324188, 0.11456084, 0.26230657, 0.5640279, 0.22718786, 0.38373196, 0.7373677, 1.2387927, 0.7980206, 1.5664079, 1.8092705, 3.0256042, 4.062486, 1.4586568, 3.744569, 5.585129 ]
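A minimal sketch of applying these coefficients to a raw Walker2d observation before it is fed to the model (the observation below is a dummy; the full rollout loop is in the notebook and script linked below):
```python
import numpy as np
# Normalization coefficients copied from this card.
mean = np.array([1.218966, 0.14163373, -0.03704914, -0.1381431, 0.51382244,
                 -0.0471911, -0.47288352, 0.04225416, 2.3948874, -0.03143199,
                 0.04466356, -0.02390724, -0.10134014, 0.09090938, -0.00419264,
                 -0.12120572, -0.5497064])
std = np.array([0.12311358, 0.324188, 0.11456084, 0.26230657, 0.5640279,
                0.22718786, 0.38373196, 0.7373677, 1.2387927, 0.7980206,
                1.5664079, 1.8092705, 3.0256042, 4.062486, 1.4586568,
                3.744569, 5.585129])
raw_observation = np.zeros(17)  # dummy 17-dimensional Walker2d observation
normalized_observation = (raw_observation - mean) / std
```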
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
Alvenir/bert-punct-restoration-da | 922d3021a7f7533bb094ea7fba8e26b640586c52 | 2022-03-23T09:05:15.000Z | [
"pytorch",
"bert",
"token-classification",
"da",
"dataset:custom",
"transformers",
"punctuation restoration",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Alvenir | null | Alvenir/bert-punct-restoration-da | 47 | 1 | transformers | 6,138 | ---
language: da
tags:
- bert
- punctuation restoration
license: apache-2.0
datasets:
- custom
---
# Bert Punctuation Restoration Danish
This model performs punctuation restoration for Danish. The method used is token classification, similar to how NER models
are trained.
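For illustration, the underlying token-classification model can also be queried directly. The sketch below shows what that could look like; the exact label set is not documented in this card, which is why the `punctfix` package described under *How to use* is the easier entry point:
```python
from transformers import pipeline
# Sketch: raw token-level punctuation labels from the underlying model.
tagger = pipeline("token-classification", model="Alvenir/bert-punct-restoration-da")
print(tagger("mit navn det er rasmus og jeg kommer fra firmaet alvenir"))
```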
## Model description
TODO
### How to use
The model requires some additional inference code, hence we created an awesome little pip package for inference.
The inference code is based on the `TokenClassificationPipeline` pipeline from huggingface.
First, install the little package by running
```
pip install punctfix
```
Then restoration is as simple as the following snippet:
```python
>>> from punctfix import PunctFixer
>>> fixer = PunctFixer(language="da")
>>> example_text = "mit navn det er rasmus og jeg kommer fra firmaet alvenir det er mig som har trænet denne lækre model"
>>> print(fixer.punctuate(example_text))
'Mit navn det er Rasmus og jeg kommer fra firmaet Alvenir. Det er mig som har trænet denne lækre model.'
>>> example_text = "en dag bliver vi sku glade for at vi nu kan sætte punktummer og kommaer i en sætning det fungerer da meget godt ikke"
>>> print(fixer.punctuate(example_text))
'En dag bliver vi sku glade for, at vi nu kan sætte punktummer og kommaer i en sætning. Det fungerer da meget godt, ikke?'
```
## Training data
To Do
## Training procedure
To Do
### Preprocessing
TODO
## Evaluation results
TODO
|
nielsr/convnext-tiny-finetuned-eurosat | 4f3580e91046d22e9917fd94a2a36e23314e80fd | 2022-04-05T07:25:05.000Z | [
"pytorch",
"convnext",
"image-classification",
"transformers"
] | image-classification | false | nielsr | null | nielsr/convnext-tiny-finetuned-eurosat | 47 | null | transformers | 6,139 | Entry not found |
yhavinga/t5-base-36L-ccmatrix-multi | 27e97747e4680d1569eda7f680391e5cec0586ce | 2022-06-14T10:29:36.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"nl",
"en",
"dataset:yhavinga/mc4_nl_cleaned",
"dataset:yhavinga/ccmatrix",
"transformers",
"translation",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | yhavinga | null | yhavinga/t5-base-36L-ccmatrix-multi | 47 | null | transformers | 6,140 | ---
language:
- nl
- en
datasets:
- yhavinga/mc4_nl_cleaned
- yhavinga/ccmatrix
tags:
- t5
- translation
- seq2seq
pipeline_tag: translation
widget:
- text: "It is a painful and tragic spectacle that rises before me: I have drawn back the curtain from the rottenness of man. This word, in my mouth, is at least free from one suspicion: that it involves a moral accusation against humanity."
- text: "For once Fletcher’s sedate features showed a certain lightness. 'I believe I will linger awhile longer.' He indicated a holoscreen which was displaying the image from an external camera. Cloud-splattered landscape was rolling past, pastel greens, browns, and blues illuminated by Duke’s radiance. 'It is not often a mortal man is permitted to view a world over the shoulder of angels.'"
license: apache-2.0
---
# t5-base-36L-ccmatrix-multi
A [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) model finetuned for Dutch to English and English to Dutch translation on the CCMatrix dataset.
Evaluation metrics of this model are listed in the **Translation models** section below.
You can use this model directly with a pipeline for text translation:
```python
model_name = "yhavinga/t5-base-36L-ccmatrix-multi"
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM
from transformers import pipeline
import torch
device_num = 0 if torch.cuda.is_available() else -1
device = "cpu" if device_num < 0 else f"cuda:{device_num}"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
params = {"max_length": 128, "num_beams": 4, "early_stopping": True}
en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num)
print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""",
**params)[0]['translation_text'])
nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num)
print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""",
**params)[0]['translation_text'])
```
This **t5 eff** model has **728M** parameters.
It was pre-trained with the masked language modeling objective on the dataset
`mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **17d15h**,
with a sequence length of **512**, batch size **512** and **212963** total steps (**56B** tokens).
Pre-training evaluation loss and accuracy are **1,05** and **0,76**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty, Naughty, Obscene, and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
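A rough sketch of a document filter implementing the mC4 cleaning rules listed above (illustrative only; the bad-word list is abbreviated to a placeholder and the exact implementation used for `mc4_nl_cleaned` is not part of this card):
```python
BAD_PHRASES = ["javascript", "lorum ipsum", "terms of use", "privacy policy",
               "cookie policy", "uses cookies", "use of cookies", "use cookies",
               "elementen ontbreken", "deze printversie"]
BAD_WORDS = {"..."}  # placeholder for the Dutch and English LDNOOBW word lists
def keep_document(sentences):
    """Return True if a document (a list of sentences) passes the cleaning rules above."""
    text = " ".join(sentences).lower()
    if any(word in BAD_WORDS for word in text.split()):
        return False
    if any(phrase in text for phrase in BAD_PHRASES):
        return False
    # Drop sentences with fewer than 3 words or containing a word longer than 1000 characters.
    kept = [s for s in sentences
            if len(s.split()) >= 3 and max(len(w) for w in s.split()) <= 1000]
    return len(kept) >= 5  # documents with fewer than 5 remaining sentences are removed
```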
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-gelu` instead of `relu` as activation function,
and were trained with a dropout of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The T5-eff models differ in their number of layers. The table below lists
the dimensions of these models. Not all t5-eff models are efficient, the clearest example being the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1.38 | 1.20 | 0.96 | 1.07 | 1.11 | 1.13 | 1.18 | 1.27 | 1.05 | 1.3019 | 1.15 |
| *eval acc* | 0.70 | 0.73 | 0.78 | 0.76 | 0.75 | 0.74 | 0.74 | 0.72 | 0.76 | 0.71 | 0.74 |
## Evaluation
Most models from the list above have been evaluated on summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and the y-axis the summarization Rouge1 score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green; models with
slower inference speed are plotted in blue.

The next two sections provide more information on how the evaluation was performed.
## Evaluation on summarization
The models below have been evaluated for summarization on 50K samples from the CNN Dailymail dataset.
All models were fine-tuned with the AdamW optimizer, a batch size of 128 and a constant learning rate of 1e-3 after a
warmup of 32 steps, with a label smoothing factor of 0.05. Article and summary token lengths were set to 1024 and 142.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
The numbers reported are the Rouge scores on 1000 documents from the test split. The rouge1 score is the one visualized
in the figure above; a sketch of these fine-tuning settings as Transformers arguments follows the table.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
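The fine-tuning recipe described above maps roughly onto the `Seq2SeqTrainingArguments` shown below. This is a sketch for orientation only, not the exact script used for these evaluations; the argument names are standard 🤗 Transformers options, and the values are taken from the text above, with the output directory and the per-device interpretation of the batch size being assumptions.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the summarization fine-tuning settings described in the text:
# batch size 128, constant LR 1e-3 after a 32-step warmup, label smoothing 0.05.
args = Seq2SeqTrainingArguments(
    output_dir="summarization-eval",          # placeholder path (assumption)
    per_device_train_batch_size=128,          # "batch size of 128" from the text
    learning_rate=1e-3,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=32,
    label_smoothing_factor=0.05,
    predict_with_generate=True,
    generation_max_length=142,                # summary token length from the text
)
```
The article token length of 1024 applies to the tokenization step and is therefore not part of these arguments.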
## Evaluation on translation
The models below have been evaluated for English to Dutch translation on 50K samples from the CCMatrix dataset.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 5e-5 after a
warmup of 32 steps, with a label smoothing factor of 0.1 and maximum sequence length of 128 tokens.
The numbers reported are the Bleu scores on 1000 documents from the test split.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the Bleu score
averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions,
and a short usage sketch follows the table.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
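The checkpoints in the table above can be called through the standard text2text-generation pipeline, using the task prefixes listed in the *source_prefix* row. The snippet below is a minimal sketch (not part of the original card); the example sentence is arbitrary.
```python
from transformers import pipeline

# Sketch: English→Dutch translation with the smaller CCMatrix checkpoint.
translator = pipeline("text2text-generation", model="yhavinga/t5-small-24L-ccmatrix-multi")
print(translator("translate English to Dutch: The weather is nice today.", max_length=128))
```
For Dutch→English, the prefix `translate Dutch to English: ` is used instead, as listed in the table.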
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU-VM,
and getting an idea of sensible hyper-parameters for training these T5 models from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
MEDT/Chatbot_Medium | 782a9bd500243b26d1d12def0cc4e8ff2a6e0c7c | 2022-04-24T15:30:28.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"gpt2",
"text-generation",
"arxiv:1911.00536",
"transformers",
"conversational",
"license:mit"
] | conversational | false | MEDT | null | MEDT/Chatbot_Medium | 47 | null | transformers | 6,141 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last output tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
alice-hml/mBERT_grammatical_error_tagger | 26ab66e1b2bf6916cb68aba45c66a6b2d556cadb | 2022-05-26T13:30:52.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:other",
"autotrain_compatible"
] | token-classification | false | alice-hml | null | alice-hml/mBERT_grammatical_error_tagger | 47 | null | transformers | 6,142 | ---
license: other
---
|
bigscience-biomedical/bigbio-mtl | aa938e5e1b0e88780212b635baa65f22668fadb3 | 2022-06-05T14:50:03.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | bigscience-biomedical | null | bigscience-biomedical/bigbio-mtl | 47 | null | transformers | 6,143 | Entry not found |
Nonnyss/music-wav2vec2-th-finetune | 098b3684b11cd4184bb40d4d8bde74b86083eb60 | 2022-06-22T07:27:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Nonnyss | null | Nonnyss/music-wav2vec2-th-finetune | 47 | null | transformers | 6,144 | Entry not found |
Taeham/wav2vec2-ksponspeech | 6e79567299694080277a921b108b1600c80e53d2 | 2022-06-21T11:49:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Taeham | null | Taeham/wav2vec2-ksponspeech | 47 | null | transformers | 6,145 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-ksponspeech
results: []
---
# wav2vec2-ksponspeech
This model is a fine-tuned version of [Wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the KsponSpeech dataset.
It achieves the following results on the evaluation set:
- **WER (Word Error Rate)** for third-party test data: 0.373
**For improving WER:**
- Numeric / Character Unification
- Decoding the word with the correct notation (from the word based on pronunciation)
- Uniform use of special characters (. / ?)
- Converting non-existent words to existing words
## Model description
Korean Wav2Vec2 model trained on the KsponSpeech dataset.
This model was trained and evaluated with the following datasets (a minimal usage sketch follows the list below):
- Train1 : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-train (1 ~ 20000th data in Ksponspeech)
- Train2 : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-train2 (20100 ~ 40100th data in Ksponspeech)
- Validation : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-test (20000 ~ 20100th data in Ksponspeech)
- Third party test : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-test (60000 ~ 20100th data in Ksponspeech)
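As a minimal usage sketch (not part of the original card), the checkpoint can be loaded through the automatic-speech-recognition pipeline. The audio path is a placeholder, and the input is assumed to be a 16 kHz mono recording, the usual sampling rate for wav2vec2 models.
```python
from transformers import pipeline

# Sketch: transcribe a Korean recording with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="Taeham/wav2vec2-ksponspeech")
result = asr("sample_korean_speech.wav")  # placeholder file path
print(result["text"])
```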
### Hardware Specification
- GPU : GEFORCE RTX 3080ti 12GB
- CPU : Intel i9-12900k
- RAM : 32GB
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
QCRI/bert-base-cased-chunking | 740e1c79e776d4048aef1eac0954f82cf5d29203 | 2022-06-13T08:31:16.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | token-classification | false | QCRI | null | QCRI/bert-base-cased-chunking | 47 | null | transformers | 6,146 | ---
license: cc-by-nc-4.0
---
|
Yvanzhu/Data-to-text-generation-accelerate | 8c6a6e0fa5ea0ea00abe69c3fe7e68cd3c48106b | 2022-06-19T09:45:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Yvanzhu | null | Yvanzhu/Data-to-text-generation-accelerate | 47 | null | transformers | 6,147 | Entry not found |
romainlhardy/bert-finetuned-ner | 1cc035f97e89303e3d5c18d56e78794f981fa19f | 2022-06-26T04:50:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | romainlhardy | null | romainlhardy/bert-finetuned-ner | 47 | null | transformers | 6,148 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9292895994725564
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9389624448330418
- name: Accuracy
type: accuracy
value: 0.9863572143403779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9293
- Recall: 0.9488
- F1: 0.9390
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
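As a minimal usage sketch (not stated in the original card), the checkpoint can be run as an English NER tagger with the token-classification pipeline; the example sentence is arbitrary.
```python
from transformers import pipeline

# Sketch: group sub-word predictions into whole entities with aggregation_strategy.
ner = pipeline(
    "token-classification",
    model="romainlhardy/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Clara and I live in Berkeley, California."))
```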
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0827 | 1.0 | 1756 | 0.0639 | 0.9167 | 0.9359 | 0.9262 | 0.9828 |
| 0.0413 | 2.0 | 3512 | 0.0565 | 0.9262 | 0.9465 | 0.9362 | 0.9859 |
| 0.0188 | 3.0 | 5268 | 0.0602 | 0.9293 | 0.9488 | 0.9390 | 0.9864 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
turingmachine/hupd-t5-small | c7d01aa85a4603a2cca8c18c914935ee4202ec3f | 2022-07-05T15:28:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:HUPD/hupd",
"transformers",
"hupd",
"summarization",
"conditional-generation",
"patents",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | summarization | false | turingmachine | null | turingmachine/hupd-t5-small | 47 | 1 | transformers | 6,149 | ---
language:
- en
tags:
- hupd
- t5
- summarization
- conditional-generation
- patents
license: cc-by-sa-4.0
datasets:
- HUPD/hupd
---
# HUPD T5-Small Summarization Model
This HUPD T5-Small summarization model was fine-tuned on the HUPD dataset. It was originally introduced in [this paper](TBD).
For more information about the Harvard USPTO Patent Dataset, please feel free to visit the [project website](https://patentdataset.org/) or the [project's GitHub repository](https://github.com/suzgunmirac/hupd).
### How to Use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
summarizer = pipeline(task="summarization", model="turingmachine/hupd-t5-small")
TEXT = "1. An optical coherent receiver for an optical communication network, said optical coherent receiver being configured to receive a modulated optical signal and to process said modulated optical signal for generating an in-phase component and a quadrature component, said in-phase component and said quadrature component being electrical signals, said optical coherent receiver comprising a power adjuster in turn comprising: a multiplying unit configured to multiply said in-phase component by an in-phase gain thereby providing a power-adjusted in-phase component, and to multiply said quadrature component by a quadrature gain thereby providing a power-adjusted quadrature component; and a digital circuit connected between output and input of said multiplying unit and configured to compute: a common gain indicative of a sum of a power of said power-adjusted in-phase component and a power of said power-adjusted quadrature component, and a differential gain indicative of a difference between said power of said power-adjusted in-phase component and said power of said power-adjusted quadrature component; and said in-phase gain as a product between said common gain and said differential gain, and said quadrature gain as a ratio between said common gain and said differential gain. 2. An optical coherent receiver according to claim 1, wherein it further comprises an analog-to-digital unit connected at the input of said power adjuster, said analog-to-digital unit being configured to ..."
summarizer(TEXT)
```
Here is the output:
```python
[{'summary_text': 'An optical coherent receiver for an optical communication network includes a power adjuster and a digital circuit connected between output and input of the multiplying unit and configured to compute a common gain indicative of a sum of the power of an in-phase component and the power-adjusted quadrature component, and the differential gain as a product between the common gain and the diffractive gain.'}]
```
Alternatively, you can load the model and use it as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
# cuda/cpu
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("turingmachine/hupd-t5-small")
model = AutoModelWithLMHead.from_pretrained("turingmachine/hupd-t5-small").to(device)
inputs = tokenizer(TEXT, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model.generate(inputs.input_ids, max_new_tokens=256)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Citation
For more information, please take a look at the original paper.
* Paper: [The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications](TBD)
* Authors: *Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber*
* BibTeX:
```
@article{suzgun2022hupd,
title={The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications},
author={Suzgun, Mirac and Melas-Kyriazi, Luke and Sarkar, Suproteem K and Kominers, Scott and Shieber, Stuart},
year={2022}
}
``` |
ClassCat/gpt2-base-french | 902ec822995ce12f979d0a5277ee9c2a1b610df1 | 2022-07-21T09:04:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"fr",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0"
] | text-generation | false | ClassCat | null | ClassCat/gpt2-base-french | 47 | 1 | transformers | 6,150 | ---
language: fr
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "Je vais à la gare, et"
- text: "J'aime le café, donc"
- text: "Nous avons parlé"
- text: "Je m'appelle"
---
## GPT2 French base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the GPT2 base settings except for the vocabulary size.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* [wiki40b/fr](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bfr) (French Wikipedia)
* Subset of [CC-100/fr](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-base-french')
generator("Je vais à la", max_length=50, num_return_sequences=5)
``` |
zhifei/autotrain-chinese-title-summarization-8-1101140174 | acce54c12a959d7100b28d7c4aa02f3655eba931 | 2022-07-07T10:21:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"unk",
"dataset:zhifei/autotrain-data-chinese-title-summarization-8",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | zhifei | null | zhifei/autotrain-chinese-title-summarization-8-1101140174 | 47 | null | transformers | 6,151 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zhifei/autotrain-data-chinese-title-summarization-8
co2_eq_emissions: 1.4118255120710663
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1101140174
- CO2 Emissions (in grams): 1.4118255120710663
## Validation Metrics
- Loss: 0.0049639358185231686
- Rouge1: 49.3333
- Rouge2: 26.6667
- RougeL: 49.3333
- RougeLsum: 49.3333
- Gen Len: 15.12
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zhifei/autotrain-chinese-title-summarization-8-1101140174
``` |
inywer/dumbbot | e54154a54bc2a9d7826e95a0b42aa8a8b97d4819 | 2022-07-11T09:27:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | inywer | null | inywer/dumbbot | 47 | null | transformers | 6,152 | ---
tags:
- conversational
---
# inywer/dumbbot Model |
ai4bharat/IndicBERTv2-alpha | 5c111d6abef5020f3679cd7134a732459597d887 | 2022-07-27T11:21:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ai4bharat | null | ai4bharat/IndicBERTv2-alpha | 47 | null | transformers | 6,153 | IndicBERTv2-alpha
|
Doogie/Waynehills_NLP_muti | 497fef056a730aa629d80f80ec8d1c0327ab3cde | 2022-02-07T00:39:21.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Doogie | null | Doogie/Waynehills_NLP_muti | 46 | null | transformers | 6,154 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: Waynehills_NLP_muti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Waynehills_NLP_muti
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
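A rough usage sketch (not part of the original card): since the checkpoint is an mT5 model fine-tuned on XSum, it can be tried through the summarization pipeline; the input text and generation lengths are placeholders.
```python
from transformers import pipeline

# Sketch: short, XSum-style summaries with the fine-tuned mT5 checkpoint.
summarizer = pipeline("summarization", model="Doogie/Waynehills_NLP_muti")
print(summarizer("Paste the article to be summarized here.", max_length=64, min_length=8))
```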
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Helsinki-NLP/opus-mt-bg-de | 617ffe639eeeb8c7ee69e0458f43da715b64ac8a | 2021-01-18T07:50:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bg-de | 46 | null | transformers | 6,155 | ---
language:
- bg
- de
tags:
- translation
license: apache-2.0
---
### bul-deu
* source group: Bulgarian
* target group: German
* OPUS readme: [bul-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md)
* model: transformer
* source language(s): bul
* target language(s): deu
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.eval.txt)
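A minimal usage sketch (not part of the original card): the checkpoint can be used with the translation pipeline; the Bulgarian example sentence is arbitrary.
```python
from transformers import pipeline

# Sketch: Bulgarian → German translation with the released Marian checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-bg-de")
print(translator("Добро утро! Как си днес?"))
```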
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.deu | 49.3 | 0.676 |
### System Info:
- hf_name: bul-deu
- source_languages: bul
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'de']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: deu
- short_pair: bg-de
- chrF2_score: 0.6759999999999999
- bleu: 49.3
- brevity_penalty: 1.0
- ref_len: 2218.0
- src_name: Bulgarian
- tgt_name: German
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: de
- prefer_old: False
- long_pair: bul-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gem-gem | 92e26534b91b8c1c508e6a556339f252b8551f2b | 2021-01-18T08:52:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gem-gem | 46 | null | transformers | 6,156 | ---
language:
- da
- sv
- af
- nn
- fy
- fo
- de
- nb
- nl
- is
- en
- lb
- yi
- gem
tags:
- translation
license: apache-2.0
---
### gem-gem
* source group: Germanic languages
* target group: Germanic languages
* OPUS readme: [gem-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.eval.txt)
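Because the model is multilingual on the target side, generation needs the `>>id<<` token mentioned above. The snippet below is a sketch only (not from the original card); `>>deu<<` selects German as the target, and the English input sentence is arbitrary.
```python
from transformers import MarianMTModel, MarianTokenizer

# Sketch: prepend the target-language token (here >>deu<< for German) to the source text.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-gem-gem")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-gem-gem")
batch = tokenizer([">>deu<< This is a short test sentence."], return_tensors="pt")
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```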
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 24.5 | 0.519 |
| newssyscomb2009-engdeu.eng.deu | 18.7 | 0.495 |
| news-test2008-deueng.deu.eng | 22.8 | 0.509 |
| news-test2008-engdeu.eng.deu | 18.6 | 0.485 |
| newstest2009-deueng.deu.eng | 22.2 | 0.507 |
| newstest2009-engdeu.eng.deu | 18.3 | 0.491 |
| newstest2010-deueng.deu.eng | 24.8 | 0.537 |
| newstest2010-engdeu.eng.deu | 19.7 | 0.499 |
| newstest2011-deueng.deu.eng | 22.9 | 0.516 |
| newstest2011-engdeu.eng.deu | 18.3 | 0.485 |
| newstest2012-deueng.deu.eng | 23.9 | 0.524 |
| newstest2012-engdeu.eng.deu | 18.5 | 0.484 |
| newstest2013-deueng.deu.eng | 26.3 | 0.537 |
| newstest2013-engdeu.eng.deu | 21.5 | 0.506 |
| newstest2014-deen-deueng.deu.eng | 25.7 | 0.535 |
| newstest2015-ende-deueng.deu.eng | 27.3 | 0.542 |
| newstest2015-ende-engdeu.eng.deu | 24.2 | 0.534 |
| newstest2016-ende-deueng.deu.eng | 31.8 | 0.584 |
| newstest2016-ende-engdeu.eng.deu | 28.4 | 0.564 |
| newstest2017-ende-deueng.deu.eng | 27.6 | 0.545 |
| newstest2017-ende-engdeu.eng.deu | 22.8 | 0.527 |
| newstest2018-ende-deueng.deu.eng | 34.1 | 0.593 |
| newstest2018-ende-engdeu.eng.deu | 32.7 | 0.595 |
| newstest2019-deen-deueng.deu.eng | 30.6 | 0.565 |
| newstest2019-ende-engdeu.eng.deu | 29.5 | 0.567 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.053 |
| Tatoeba-test.afr-dan.afr.dan | 57.8 | 0.907 |
| Tatoeba-test.afr-deu.afr.deu | 46.4 | 0.663 |
| Tatoeba-test.afr-eng.afr.eng | 57.4 | 0.717 |
| Tatoeba-test.afr-enm.afr.enm | 11.3 | 0.285 |
| Tatoeba-test.afr-fry.afr.fry | 0.0 | 0.167 |
| Tatoeba-test.afr-gos.afr.gos | 1.5 | 0.178 |
| Tatoeba-test.afr-isl.afr.isl | 29.0 | 0.760 |
| Tatoeba-test.afr-ltz.afr.ltz | 11.2 | 0.246 |
| Tatoeba-test.afr-nld.afr.nld | 53.3 | 0.708 |
| Tatoeba-test.afr-nor.afr.nor | 66.0 | 0.752 |
| Tatoeba-test.afr-swe.afr.swe | 88.0 | 0.955 |
| Tatoeba-test.afr-yid.afr.yid | 59.5 | 0.443 |
| Tatoeba-test.ang-afr.ang.afr | 10.7 | 0.043 |
| Tatoeba-test.ang-dan.ang.dan | 6.3 | 0.190 |
| Tatoeba-test.ang-deu.ang.deu | 1.4 | 0.212 |
| Tatoeba-test.ang-eng.ang.eng | 8.1 | 0.247 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.196 |
| Tatoeba-test.ang-fao.ang.fao | 10.7 | 0.105 |
| Tatoeba-test.ang-gos.ang.gos | 10.7 | 0.128 |
| Tatoeba-test.ang-isl.ang.isl | 16.0 | 0.135 |
| Tatoeba-test.ang-ltz.ang.ltz | 16.0 | 0.121 |
| Tatoeba-test.ang-yid.ang.yid | 1.5 | 0.136 |
| Tatoeba-test.dan-afr.dan.afr | 22.7 | 0.655 |
| Tatoeba-test.dan-ang.dan.ang | 3.1 | 0.110 |
| Tatoeba-test.dan-deu.dan.deu | 47.4 | 0.676 |
| Tatoeba-test.dan-eng.dan.eng | 54.7 | 0.704 |
| Tatoeba-test.dan-enm.dan.enm | 4.8 | 0.291 |
| Tatoeba-test.dan-fao.dan.fao | 9.7 | 0.120 |
| Tatoeba-test.dan-gos.dan.gos | 3.8 | 0.240 |
| Tatoeba-test.dan-isl.dan.isl | 66.1 | 0.678 |
| Tatoeba-test.dan-ltz.dan.ltz | 78.3 | 0.563 |
| Tatoeba-test.dan-nds.dan.nds | 6.2 | 0.335 |
| Tatoeba-test.dan-nld.dan.nld | 60.0 | 0.748 |
| Tatoeba-test.dan-nor.dan.nor | 68.1 | 0.812 |
| Tatoeba-test.dan-swe.dan.swe | 65.0 | 0.785 |
| Tatoeba-test.dan-swg.dan.swg | 2.6 | 0.182 |
| Tatoeba-test.dan-yid.dan.yid | 9.3 | 0.226 |
| Tatoeba-test.deu-afr.deu.afr | 50.3 | 0.682 |
| Tatoeba-test.deu-ang.deu.ang | 0.5 | 0.118 |
| Tatoeba-test.deu-dan.deu.dan | 49.6 | 0.679 |
| Tatoeba-test.deu-eng.deu.eng | 43.4 | 0.618 |
| Tatoeba-test.deu-enm.deu.enm | 2.2 | 0.159 |
| Tatoeba-test.deu-frr.deu.frr | 0.4 | 0.156 |
| Tatoeba-test.deu-fry.deu.fry | 10.7 | 0.355 |
| Tatoeba-test.deu-gos.deu.gos | 0.7 | 0.183 |
| Tatoeba-test.deu-got.deu.got | 0.3 | 0.010 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.1 | 0.130 |
| Tatoeba-test.deu-isl.deu.isl | 24.3 | 0.504 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.9 | 0.173 |
| Tatoeba-test.deu-ltz.deu.ltz | 15.6 | 0.304 |
| Tatoeba-test.deu-nds.deu.nds | 21.2 | 0.469 |
| Tatoeba-test.deu-nld.deu.nld | 47.1 | 0.657 |
| Tatoeba-test.deu-nor.deu.nor | 43.9 | 0.646 |
| Tatoeba-test.deu-pdc.deu.pdc | 3.0 | 0.133 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.296 |
| Tatoeba-test.deu-stq.deu.stq | 0.6 | 0.137 |
| Tatoeba-test.deu-swe.deu.swe | 50.6 | 0.668 |
| Tatoeba-test.deu-swg.deu.swg | 0.2 | 0.137 |
| Tatoeba-test.deu-yid.deu.yid | 3.9 | 0.229 |
| Tatoeba-test.eng-afr.eng.afr | 55.2 | 0.721 |
| Tatoeba-test.eng-ang.eng.ang | 4.9 | 0.118 |
| Tatoeba-test.eng-dan.eng.dan | 52.6 | 0.684 |
| Tatoeba-test.eng-deu.eng.deu | 35.4 | 0.573 |
| Tatoeba-test.eng-enm.eng.enm | 1.8 | 0.223 |
| Tatoeba-test.eng-fao.eng.fao | 7.0 | 0.312 |
| Tatoeba-test.eng-frr.eng.frr | 1.2 | 0.050 |
| Tatoeba-test.eng-fry.eng.fry | 15.8 | 0.381 |
| Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.170 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.011 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.5 | 0.126 |
| Tatoeba-test.eng-isl.eng.isl | 20.9 | 0.463 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.0 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 12.8 | 0.292 |
| Tatoeba-test.eng-nds.eng.nds | 18.3 | 0.428 |
| Tatoeba-test.eng-nld.eng.nld | 47.3 | 0.657 |
| Tatoeba-test.eng-non.eng.non | 0.3 | 0.145 |
| Tatoeba-test.eng-nor.eng.nor | 47.2 | 0.650 |
| Tatoeba-test.eng-pdc.eng.pdc | 4.8 | 0.177 |
| Tatoeba-test.eng-sco.eng.sco | 38.1 | 0.597 |
| Tatoeba-test.eng-stq.eng.stq | 2.4 | 0.288 |
| Tatoeba-test.eng-swe.eng.swe | 52.7 | 0.677 |
| Tatoeba-test.eng-swg.eng.swg | 1.1 | 0.163 |
| Tatoeba-test.eng-yid.eng.yid | 4.5 | 0.223 |
| Tatoeba-test.enm-afr.enm.afr | 22.8 | 0.401 |
| Tatoeba-test.enm-ang.enm.ang | 0.4 | 0.062 |
| Tatoeba-test.enm-dan.enm.dan | 51.4 | 0.782 |
| Tatoeba-test.enm-deu.enm.deu | 33.8 | 0.473 |
| Tatoeba-test.enm-eng.enm.eng | 22.4 | 0.495 |
| Tatoeba-test.enm-fry.enm.fry | 16.0 | 0.173 |
| Tatoeba-test.enm-gos.enm.gos | 6.1 | 0.222 |
| Tatoeba-test.enm-isl.enm.isl | 59.5 | 0.651 |
| Tatoeba-test.enm-ksh.enm.ksh | 10.5 | 0.130 |
| Tatoeba-test.enm-nds.enm.nds | 18.1 | 0.327 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.546 |
| Tatoeba-test.enm-nor.enm.nor | 15.6 | 0.290 |
| Tatoeba-test.enm-yid.enm.yid | 2.3 | 0.215 |
| Tatoeba-test.fao-ang.fao.ang | 2.1 | 0.035 |
| Tatoeba-test.fao-dan.fao.dan | 53.7 | 0.625 |
| Tatoeba-test.fao-eng.fao.eng | 24.7 | 0.435 |
| Tatoeba-test.fao-gos.fao.gos | 12.7 | 0.116 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.341 |
| Tatoeba-test.fao-nor.fao.nor | 41.9 | 0.586 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 1.000 |
| Tatoeba-test.frr-deu.frr.deu | 7.4 | 0.263 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.157 |
| Tatoeba-test.frr-fry.frr.fry | 4.0 | 0.112 |
| Tatoeba-test.frr-gos.frr.gos | 1.0 | 0.135 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.207 |
| Tatoeba-test.frr-nld.frr.nld | 10.6 | 0.227 |
| Tatoeba-test.frr-stq.frr.stq | 1.0 | 0.058 |
| Tatoeba-test.fry-afr.fry.afr | 12.7 | 0.333 |
| Tatoeba-test.fry-deu.fry.deu | 30.8 | 0.555 |
| Tatoeba-test.fry-eng.fry.eng | 31.2 | 0.506 |
| Tatoeba-test.fry-enm.fry.enm | 0.0 | 0.175 |
| Tatoeba-test.fry-frr.fry.frr | 1.6 | 0.091 |
| Tatoeba-test.fry-gos.fry.gos | 1.1 | 0.254 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.526 |
| Tatoeba-test.fry-nds.fry.nds | 12.4 | 0.116 |
| Tatoeba-test.fry-nld.fry.nld | 43.4 | 0.637 |
| Tatoeba-test.fry-nor.fry.nor | 47.1 | 0.607 |
| Tatoeba-test.fry-stq.fry.stq | 0.6 | 0.181 |
| Tatoeba-test.fry-swe.fry.swe | 30.2 | 0.587 |
| Tatoeba-test.fry-yid.fry.yid | 3.1 | 0.173 |
| Tatoeba-test.gos-afr.gos.afr | 1.8 | 0.215 |
| Tatoeba-test.gos-ang.gos.ang | 0.0 | 0.045 |
| Tatoeba-test.gos-dan.gos.dan | 4.1 | 0.236 |
| Tatoeba-test.gos-deu.gos.deu | 19.6 | 0.406 |
| Tatoeba-test.gos-eng.gos.eng | 15.1 | 0.329 |
| Tatoeba-test.gos-enm.gos.enm | 5.8 | 0.271 |
| Tatoeba-test.gos-fao.gos.fao | 19.0 | 0.136 |
| Tatoeba-test.gos-frr.gos.frr | 1.3 | 0.119 |
| Tatoeba-test.gos-fry.gos.fry | 17.1 | 0.388 |
| Tatoeba-test.gos-isl.gos.isl | 16.8 | 0.356 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.6 | 0.174 |
| Tatoeba-test.gos-nds.gos.nds | 4.7 | 0.225 |
| Tatoeba-test.gos-nld.gos.nld | 16.3 | 0.406 |
| Tatoeba-test.gos-stq.gos.stq | 0.7 | 0.154 |
| Tatoeba-test.gos-swe.gos.swe | 8.6 | 0.319 |
| Tatoeba-test.gos-yid.gos.yid | 4.4 | 0.165 |
| Tatoeba-test.got-deu.got.deu | 0.2 | 0.041 |
| Tatoeba-test.got-eng.got.eng | 0.2 | 0.068 |
| Tatoeba-test.got-nor.got.nor | 0.6 | 0.000 |
| Tatoeba-test.gsw-deu.gsw.deu | 15.9 | 0.373 |
| Tatoeba-test.gsw-eng.gsw.eng | 14.7 | 0.320 |
| Tatoeba-test.isl-afr.isl.afr | 38.0 | 0.641 |
| Tatoeba-test.isl-ang.isl.ang | 0.0 | 0.037 |
| Tatoeba-test.isl-dan.isl.dan | 67.7 | 0.836 |
| Tatoeba-test.isl-deu.isl.deu | 42.6 | 0.614 |
| Tatoeba-test.isl-eng.isl.eng | 43.5 | 0.610 |
| Tatoeba-test.isl-enm.isl.enm | 12.4 | 0.123 |
| Tatoeba-test.isl-fao.isl.fao | 15.6 | 0.176 |
| Tatoeba-test.isl-gos.isl.gos | 7.1 | 0.257 |
| Tatoeba-test.isl-nor.isl.nor | 53.5 | 0.690 |
| Tatoeba-test.isl-stq.isl.stq | 10.7 | 0.176 |
| Tatoeba-test.isl-swe.isl.swe | 67.7 | 0.818 |
| Tatoeba-test.ksh-deu.ksh.deu | 11.8 | 0.393 |
| Tatoeba-test.ksh-eng.ksh.eng | 4.0 | 0.239 |
| Tatoeba-test.ksh-enm.ksh.enm | 9.5 | 0.085 |
| Tatoeba-test.ltz-afr.ltz.afr | 36.5 | 0.529 |
| Tatoeba-test.ltz-ang.ltz.ang | 0.0 | 0.043 |
| Tatoeba-test.ltz-dan.ltz.dan | 80.6 | 0.722 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.1 | 0.581 |
| Tatoeba-test.ltz-eng.ltz.eng | 36.1 | 0.511 |
| Tatoeba-test.ltz-fry.ltz.fry | 16.5 | 0.524 |
| Tatoeba-test.ltz-gos.ltz.gos | 0.7 | 0.118 |
| Tatoeba-test.ltz-nld.ltz.nld | 40.4 | 0.535 |
| Tatoeba-test.ltz-nor.ltz.nor | 19.1 | 0.582 |
| Tatoeba-test.ltz-stq.ltz.stq | 2.4 | 0.093 |
| Tatoeba-test.ltz-swe.ltz.swe | 25.9 | 0.430 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.5 | 0.160 |
| Tatoeba-test.multi.multi | 42.7 | 0.614 |
| Tatoeba-test.nds-dan.nds.dan | 23.0 | 0.465 |
| Tatoeba-test.nds-deu.nds.deu | 39.8 | 0.610 |
| Tatoeba-test.nds-eng.nds.eng | 32.0 | 0.520 |
| Tatoeba-test.nds-enm.nds.enm | 3.9 | 0.156 |
| Tatoeba-test.nds-frr.nds.frr | 10.7 | 0.127 |
| Tatoeba-test.nds-fry.nds.fry | 10.7 | 0.231 |
| Tatoeba-test.nds-gos.nds.gos | 0.8 | 0.157 |
| Tatoeba-test.nds-nld.nds.nld | 44.1 | 0.634 |
| Tatoeba-test.nds-nor.nds.nor | 47.1 | 0.665 |
| Tatoeba-test.nds-swg.nds.swg | 0.5 | 0.166 |
| Tatoeba-test.nds-yid.nds.yid | 12.7 | 0.337 |
| Tatoeba-test.nld-afr.nld.afr | 58.4 | 0.748 |
| Tatoeba-test.nld-dan.nld.dan | 61.3 | 0.753 |
| Tatoeba-test.nld-deu.nld.deu | 48.2 | 0.670 |
| Tatoeba-test.nld-eng.nld.eng | 52.8 | 0.690 |
| Tatoeba-test.nld-enm.nld.enm | 5.7 | 0.178 |
| Tatoeba-test.nld-frr.nld.frr | 0.9 | 0.159 |
| Tatoeba-test.nld-fry.nld.fry | 23.0 | 0.467 |
| Tatoeba-test.nld-gos.nld.gos | 1.0 | 0.165 |
| Tatoeba-test.nld-ltz.nld.ltz | 14.4 | 0.310 |
| Tatoeba-test.nld-nds.nld.nds | 24.1 | 0.485 |
| Tatoeba-test.nld-nor.nld.nor | 53.6 | 0.705 |
| Tatoeba-test.nld-sco.nld.sco | 15.0 | 0.415 |
| Tatoeba-test.nld-stq.nld.stq | 0.5 | 0.183 |
| Tatoeba-test.nld-swe.nld.swe | 73.6 | 0.842 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.191 |
| Tatoeba-test.nld-yid.nld.yid | 9.4 | 0.299 |
| Tatoeba-test.non-eng.non.eng | 27.7 | 0.501 |
| Tatoeba-test.nor-afr.nor.afr | 48.2 | 0.687 |
| Tatoeba-test.nor-dan.nor.dan | 69.5 | 0.820 |
| Tatoeba-test.nor-deu.nor.deu | 41.1 | 0.634 |
| Tatoeba-test.nor-eng.nor.eng | 49.4 | 0.660 |
| Tatoeba-test.nor-enm.nor.enm | 6.8 | 0.230 |
| Tatoeba-test.nor-fao.nor.fao | 6.9 | 0.395 |
| Tatoeba-test.nor-fry.nor.fry | 9.2 | 0.323 |
| Tatoeba-test.nor-got.nor.got | 1.5 | 0.000 |
| Tatoeba-test.nor-isl.nor.isl | 34.5 | 0.555 |
| Tatoeba-test.nor-ltz.nor.ltz | 22.1 | 0.447 |
| Tatoeba-test.nor-nds.nor.nds | 34.3 | 0.565 |
| Tatoeba-test.nor-nld.nor.nld | 50.5 | 0.676 |
| Tatoeba-test.nor-nor.nor.nor | 57.6 | 0.764 |
| Tatoeba-test.nor-swe.nor.swe | 68.9 | 0.813 |
| Tatoeba-test.nor-yid.nor.yid | 65.0 | 0.627 |
| Tatoeba-test.pdc-deu.pdc.deu | 43.5 | 0.559 |
| Tatoeba-test.pdc-eng.pdc.eng | 26.1 | 0.471 |
| Tatoeba-test.sco-deu.sco.deu | 7.1 | 0.295 |
| Tatoeba-test.sco-eng.sco.eng | 34.4 | 0.551 |
| Tatoeba-test.sco-nld.sco.nld | 9.9 | 0.438 |
| Tatoeba-test.stq-deu.stq.deu | 8.6 | 0.385 |
| Tatoeba-test.stq-eng.stq.eng | 21.8 | 0.431 |
| Tatoeba-test.stq-frr.stq.frr | 2.1 | 0.111 |
| Tatoeba-test.stq-fry.stq.fry | 7.6 | 0.267 |
| Tatoeba-test.stq-gos.stq.gos | 0.7 | 0.198 |
| Tatoeba-test.stq-isl.stq.isl | 16.0 | 0.121 |
| Tatoeba-test.stq-ltz.stq.ltz | 3.8 | 0.150 |
| Tatoeba-test.stq-nld.stq.nld | 14.6 | 0.375 |
| Tatoeba-test.stq-yid.stq.yid | 2.4 | 0.096 |
| Tatoeba-test.swe-afr.swe.afr | 51.8 | 0.802 |
| Tatoeba-test.swe-dan.swe.dan | 64.9 | 0.784 |
| Tatoeba-test.swe-deu.swe.deu | 47.0 | 0.657 |
| Tatoeba-test.swe-eng.swe.eng | 55.8 | 0.700 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.060 |
| Tatoeba-test.swe-fry.swe.fry | 14.1 | 0.449 |
| Tatoeba-test.swe-gos.swe.gos | 7.5 | 0.291 |
| Tatoeba-test.swe-isl.swe.isl | 70.7 | 0.812 |
| Tatoeba-test.swe-ltz.swe.ltz | 15.9 | 0.553 |
| Tatoeba-test.swe-nld.swe.nld | 78.7 | 0.854 |
| Tatoeba-test.swe-nor.swe.nor | 67.1 | 0.799 |
| Tatoeba-test.swe-yid.swe.yid | 14.7 | 0.156 |
| Tatoeba-test.swg-dan.swg.dan | 7.7 | 0.341 |
| Tatoeba-test.swg-deu.swg.deu | 8.0 | 0.334 |
| Tatoeba-test.swg-eng.swg.eng | 12.4 | 0.305 |
| Tatoeba-test.swg-nds.swg.nds | 1.1 | 0.209 |
| Tatoeba-test.swg-nld.swg.nld | 4.9 | 0.244 |
| Tatoeba-test.swg-yid.swg.yid | 3.4 | 0.194 |
| Tatoeba-test.yid-afr.yid.afr | 23.6 | 0.552 |
| Tatoeba-test.yid-ang.yid.ang | 0.1 | 0.066 |
| Tatoeba-test.yid-dan.yid.dan | 17.5 | 0.392 |
| Tatoeba-test.yid-deu.yid.deu | 21.0 | 0.423 |
| Tatoeba-test.yid-eng.yid.eng | 17.4 | 0.368 |
| Tatoeba-test.yid-enm.yid.enm | 0.6 | 0.143 |
| Tatoeba-test.yid-fry.yid.fry | 5.3 | 0.169 |
| Tatoeba-test.yid-gos.yid.gos | 1.2 | 0.149 |
| Tatoeba-test.yid-ltz.yid.ltz | 3.5 | 0.256 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.487 |
| Tatoeba-test.yid-nld.yid.nld | 26.1 | 0.423 |
| Tatoeba-test.yid-nor.yid.nor | 47.1 | 0.583 |
| Tatoeba-test.yid-stq.yid.stq | 1.5 | 0.092 |
| Tatoeba-test.yid-swe.yid.swe | 35.9 | 0.518 |
| Tatoeba-test.yid-swg.yid.swg | 1.0 | 0.124 |
### System Info:
- hf_name: gem-gem
- source_languages: gem
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt
- src_alpha3: gem
- tgt_alpha3: gem
- short_pair: gem-gem
- chrF2_score: 0.614
- bleu: 42.7
- brevity_penalty: 0.993
- ref_len: 73459.0
- src_name: Germanic languages
- tgt_name: Germanic languages
- train_date: 2020-07-27
- src_alpha2: gem
- tgt_alpha2: gem
- prefer_old: False
- long_pair: gem-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KoichiYasuoka/bert-large-japanese-luw-upos | 21acbf6d8f4b4c804b1d6b434754447f6bbee113 | 2022-06-27T01:39:54.000Z | [
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-japanese-luw-upos | 46 | null | transformers | 6,157 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# bert-large-japanese-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
NbAiLab/notram-bert-norwegian-cased-080321 | 4a4de1c93d8243866a8c3dd085e05b971b127af1 | 2022-02-06T18:15:16.000Z | [
"pytorch",
"tf",
"bert",
"no",
"transformers",
"norwegian",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | false | NbAiLab | null | NbAiLab/notram-bert-norwegian-cased-080321 | 46 | null | transformers | 6,158 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- bert
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du [MASK] en bok.
- text: Dette er et [MASK] eksempel.
- text: Av og til kan en språkmodell gi et [MASK] resultat.
- text: Som ansat får du [MASK] for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling.
---
## Results
|**Model** | **NoRec** | **NorNe-NB**| **NorNe-NN** | **NorDial** | **DaNe** | **Da-Angry-Tweets** |
|:-----------|------------:|------------:|------------:|------------:|------------:|------------:|
|roberta-base (English) | 51.77 | 79.01/79.53| 79.79/83.02 | 67.18| 75.44/78.07 | 55.51 |
|mBERT-cased | 63.91 | 83.72/86.12| 83.05/87.12 | 66.23| 80.00/81.43 | 57.67 |
|nb-bert-base | 75.60 |**91.98**/**92.95** |**90.93**/**94.06**|69.39| 81.95/84.83| 64.18|
|notram-bert-norwegian-cased | 72.47 | 91.77/93.12|89.79/93.70| **78.55**| **83.69**/**86.55**| **64.19** |
|notram-bert-norwegian-uncased | 73.47 | 89.28/91.61 |87.23/90.23 |74.21 | 80.29/82.31| 61.18|
|notram-bert-norwegian-cased-pod | **76.18** | 91.24/92.24| 90.88/93.21| 76.21| 81.82/84.99| 62.16 |
|nb-roberta-base | 68.77 |87.99/89.43 | 85.43/88.66| 76.34| 75.91/77.94| 61.50 |
|nb-roberta-base-scandinavian | 67.88 | 87.73/89.14| 87.39/90.92| 74.81| 76.22/78.66 | 63.37 |
|nb-roberta-base-v2-200k | 46.87 | 85.57/87.04| - | 64.99| - | - |
|test_long_w5 200k| 60.48 | 88.00/90:00 | 83.93/88.45 | 68.41 |75.22/78.50| 57.95 |
|test_long_w5_roberta_tokenizer 200k| 63.51| 86.28/87.77| 84.95/88.61 | 69.86 | 71.31/74.27 | 59.96 |
|test_long_w5_roberta_tokenizer 400k| 59.76 |87.39/89.06 | 85.16/89.01 | 71.46 | 72.39/75.65| 39.73 |
|test_long_w5_dataset 400k| 66.80 | 86.52/88.55 | 82.81/86.76 | 66.94 | 71.47/74.20| 55.25 |
|test_long_w5_dataset 600k| 67.37 | 89.98/90.95 | 84.53/88.37 | 66.84 | 75.14/76.50| 57.47 |
|roberta-jan-128_ncc - 400k - 128| 67.79 | 91.45/92.33 | 86.41/90.19 | 67.20 | 81.00/82.39| 59.65 |
|roberta-jan-128_ncc - 1000k - 128| 68.17 | 89.34/90.74 | 86.89/89.87 | 68.41 | 80.30/82.17| 61.63 | |
NikolajMunch/danish-emotion-classification | 17c97ced5466a7f83df22f473662b914d1a00f39 | 2022-01-04T12:14:46.000Z | [
"pytorch",
"bert",
"text-classification",
"da",
"transformers",
"sentiment",
"emotion",
"danish"
] | text-classification | false | NikolajMunch | null | NikolajMunch/danish-emotion-classification | 46 | 1 | transformers | 6,159 | ---
widget:
- text: "Hold da op! Kan det virkelig passe?"
language:
- "da"
tags:
- sentiment
- emotion
- danish
---
# **-- EMODa --**
## BERT model for Danish multi-class classification of emotions
Classifies a Danish sentence into one of 6 different emotions:
| Danish emotion | Ekman's emotion |
| ----- | ----- |
| 😞 **Afsky** | Disgust |
| 😨 **Frygt** | Fear |
| 😄 **Glæde** | Joy |
| 😱 **Overraskelse** | Surprise |
| 😢 **Tristhed** | Sadness |
| 😠 **Vrede** | Anger |
# How to use
```python
from transformers import pipeline
model_path = "NikolajMunch/danish-emotion-classification"
classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
prediction = classifier("Jeg er godt nok ked af at mine SMS'er er slettet")
print(prediction)
# [{'label': 'Tristhed', 'score': 0.9725030660629272}]
```
or
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("NikolajMunch/danish-emotion-classification")
model = AutoModelForSequenceClassification.from_pretrained("NikolajMunch/danish-emotion-classification")
```
|
Norod78/distilgpt2-base-pretrained-he | f59d54877b3e45f2fbc603fbad2d83bcc92293ee | 2021-07-26T06:41:24.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"he",
"transformers",
"license:mit"
] | text-generation | false | Norod78 | null | Norod78/distilgpt2-base-pretrained-he | 46 | null | transformers | 6,160 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---
# hebrew-distilgpt2
A tiny GPT2-based Hebrew text generation model trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
## Dataset
oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
## Training
* Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR>
* I have made a list of items which might make it easier for other to use this script. The list was posted to [This discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351)
## Usage
#### Simple usage sample code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
#pip install tokenizers==0.10.3 transformers==4.8.0
tokenizer = AutoTokenizer.from_pretrained("Norod78/distilgpt2-base-pretrained-he")
model = AutoModelForCausalLM.from_pretrained("Norod78/distilgpt2-base-pretrained-he", pad_token_id=tokenizer.eos_token_id)
prompt_text = "הנבחרת האולימפית של ישראל זכתה השנה"
max_len = 50
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids != None:
max_len += len(encoded_prompt[0])
if max_len > 1024:
max_len = 1024
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
print("\n{}: {}".format(i, text))
print("\n" + 100 * '-')
```
|
Norod78/hewiki-articles-distilGPT2py-il | 88e1aa24acdec518cec7d1573e863b23d9c5393f | 2022-07-04T07:25:03.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"he",
"transformers",
"license:mit"
] | text-generation | false | Norod78 | null | Norod78/hewiki-articles-distilGPT2py-il | 46 | null | transformers | 6,161 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "<|startoftext|>החוק השני של מועדון קרב הוא"
- text: "<|startoftext|>ראש הממשלה בן גוריון"
- text: "<|startoftext|>למידת מכונה (סרט)"
- text: "<|startoftext|>מנשה פומפרניקל"
- text: "<|startoftext|>אי שוויון "
license: mit
---
# hewiki-articles-distilGPT2py-il
## A tiny GPT2 model for generating Hebrew text
A distilGPT2 sized model. <br>
Training data was hewiki-20200701-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/hewiki/20200701/ <br>
XML has been converted to plain text using Wikipedia Extractor http://medialab.di.unipi.it/wiki/Wikipedia_Extractor <br>
I then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br>
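A minimal sketch of that preprocessing step (the file names and the blank-line article separator are assumptions for illustration, not the exact setup used):
```python
# Hypothetical preprocessing sketch: wrap each extracted article in the special
# markers and drop empty lines. File names are placeholders.
with open("hewiki_extracted.txt", encoding="utf-8") as f_in, \
     open("hewiki_train.txt", "w", encoding="utf-8") as f_out:
    for article in f_in.read().split("\n\n"):
        lines = [line for line in article.splitlines() if line.strip()]
        if lines:
            f_out.write("<|startoftext|>" + "\n".join(lines) + "<|endoftext|>\n")
```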
#### How to use
```python
import torch
import torch.nn as nn
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il")
model = GPT2LMHeadModel.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il").eval()
bos_token = tokenizer.bos_token  # Beginning of sentence
eos_token = tokenizer.eos_token #End of sentence
def generate_word(model, tokens_tensor, temperature=1.0):
"""
Sample a word given a tensor of tokens of previous words from a model. Given
the words we have, sample a plausible word. Temperature is used for
controlling randomness. If using temperature==0 we simply use a greedy arg max.
Else, we sample from a multinomial distribution using a lower inverse
temperature to allow for more randomness to escape repetitions.
"""
with torch.no_grad():
outputs = model(tokens_tensor)
predictions = outputs[0]
if temperature>0:
# Make the distribution more or less skewed based on the temperature
predictions = outputs[0]/temperature
# Sample from the distribution
softmax = nn.Softmax(dim=0)
predicted_index = torch.multinomial(softmax(predictions[0,-1,:]),1).item()
# Simply take the arg-max of the distribution
else:
predicted_index = torch.argmax(predictions[0, -1, :]).item()
# Decode the encoding to the corresponding word
predicted_text = tokenizer.decode([predicted_index])
return predicted_text
def generate_sentence(model, tokenizer, initial_text, temperature=1.0):
""" Generate a sentence given some initial text using a model and a tokenizer.
Returns the new sentence. """
# Encode a text inputs
text = ""
sentence = text
# We avoid an infinite loop by setting a maximum range
for i in range(0,84):
indexed_tokens = tokenizer.encode(initial_text + text)
# Convert indexed tokens in a PyTorch tensor
tokens_tensor = torch.tensor([indexed_tokens])
new_word = generate_word(model, tokens_tensor, temperature=temperature)
        # Here the temperature is nudged up slightly with each generated word
        # (capped just below 1.0), so some randomness is kept in the sampling
        # while the sentence ending is generated.
if temperature<(1-0.008):
temperature += 0.008
else:
temperature = 0.996
text = text+new_word
# Stop generating new words when we have reached the end of the line or the poem
if eos_token in new_word:
# returns new sentence and whether poem is done
return (text.replace(eos_token,"").strip(), True)
elif '/' in new_word:
return (text.strip(), False)
elif bos_token in new_word:
return (text.replace(bos_token,"").strip(), False)
return (text, True)
for output_num in range(1,5):
init_text = "בוקר טוב"
text = bos_token + init_text
for i in range(0,84):
sentence = generate_sentence(model, tokenizer, text, temperature=0.9)
text = init_text + sentence[0]
print(text)
if (sentence[1] == True):
break
```
|
SEBIS/code_trans_t5_small_program_synthese_multitask | 8efc2c60febbe97d36b48bc2adf4833899820028 | 2022-06-02T19:50:32.000Z | [
"pytorch",
"tf",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_program_synthese_multitask | 46 | null | transformers | 6,162 | ---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
Pretrained model on a Lisp-inspired programming DSL using the T5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model can be used to generate Lisp-inspired DSL code given a human-language description of the task.
### How to use
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 440,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the program synthesis task, the different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
ThomasNLG/t5-qa_webnlg_synth-en | 288c00907a5143ba864272c4bc16b8e98559eebd | 2021-07-09T07:45:27.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:squad_v2",
"arxiv:2104.07555",
"transformers",
"qa",
"question",
"answering",
"SQuAD",
"data2text",
"metric",
"nlg",
"t5-small",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ThomasNLG | null | ThomasNLG/t5-qa_webnlg_synth-en | 46 | null | transformers | 6,163 | ---
language: en
tags:
- qa
- question
- answering
- SQuAD
- data2text
- metric
- nlg
- t5-small
license: mit
datasets:
- squad_v2
model-index:
- name: t5-qa_webnlg_synth-en
results:
- task:
name: Data Question Answering
type: extractive-qa
widget:
- text: "What is the food type at The Eagle? </s> name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"
---
# t5-qa_webnlg_synth-en
## Model description
This model is a *Data Question Answering* model based on T5-small, that answers questions given a structured table as input.
It is actually a component of [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can be used independently as it is, for QA only.
## How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qa_webnlg_synth-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qa_webnlg_synth-en")
```
You can play with the model using the inference API, the text input format should follow this template (accordingly to the training stage of the model):
`text_input = "{QUESTION} </s> {CONTEXT}"`
where `CONTEXT` is a structured table that is linearised this way:
`CONTEXT = "name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"`
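Putting the template together with the loading code above, a minimal generation sketch (the question/table pair is the example from this card):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-qa_webnlg_synth-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-qa_webnlg_synth-en")

question = "What is the food type at The Eagle?"
context = "name [ The Eagle ] , eatType [ coffee shop ] , food [ French ] , priceRange [ £ 2 0 - 2 5 ]"
text_input = f"{question} </s> {context}"

inputs = tokenizer(text_input, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected to answer with the food type
```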
## Training data
The model was trained on synthetic data as described in [Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation](https://arxiv.org/abs/2104.07555).
### Citation info
```bibtex
@article{rebuffel2021data,
title={Data-QuestEval: A Referenceless Metric for Data to Text Semantic Evaluation},
author={Rebuffel, Cl{\'e}ment and Scialom, Thomas and Soulier, Laure and Piwowarski, Benjamin and Lamprier, Sylvain and Staiano, Jacopo and Scoutheeten, Geoffrey and Gallinari, Patrick},
journal={arXiv preprint arXiv:2104.07555},
year={2021}
}
``` |
Yehor/wav2vec2-xls-r-1b-uk-with-news-lm | 1af0c0772402adebe5b373d2ddbc3aab50830c90 | 2022-07-30T07:00:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-1b-uk-with-news-lm | 46 | 1 | transformers | 6,164 | ---
language:
- uk
license: cc-by-nc-sa-4.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- uk
xdatasets:
- mozilla-foundation/common_voice_7_0
---
# Ukrainian STT model (with the Big Language Model formed on News Dataset)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
Attribution to the dataset of Language Model:
- Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
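A minimal transcription sketch (the audio path is a placeholder; a 16 kHz mono recording is assumed):
```python
from transformers import pipeline

# "audio.wav" is a placeholder path to a 16 kHz mono Ukrainian recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="Yehor/wav2vec2-xls-r-1b-uk-with-news-lm",
)
print(asr("audio.wav")["text"])
```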
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
allenai/reviews_roberta_base | d446b77ce4028c442841325488a565ce0c2cbd65 | 2021-05-20T13:36:12.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/reviews_roberta_base | 46 | null | transformers | 6,165 | Entry not found |
castorini/bpr-nq-question-encoder | bf15a8796a51b290d26552f33b16ab377e5c2d4b | 2021-09-05T00:53:16.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/bpr-nq-question-encoder | 46 | null | transformers | 6,166 | This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini:
> Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882. |
danlou/albert-xxlarge-v2-finetuned-csqa | 37eddb04e55c6181d1ab0825bb0078e07f641670 | 2021-07-23T13:55:03.000Z | [
"pytorch",
"albert",
"multiple-choice",
"dataset:commonsense_qa",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | multiple-choice | false | danlou | null | danlou/albert-xxlarge-v2-finetuned-csqa | 46 | 1 | transformers | 6,167 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- commonsense_qa
metrics:
- accuracy
model_index:
- name: albert-xxlarge-v2-finetuned-csqa
results:
- dataset:
name: commonsense_qa
type: commonsense_qa
args: default
metric:
name: Accuracy
type: accuracy
value: 0.7870597839355469
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-finetuned-csqa
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the commonsense_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6177
- Accuracy: 0.7871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7464 | 1.0 | 609 | 0.5319 | 0.7985 |
| 0.3116 | 2.0 | 1218 | 0.6422 | 0.7936 |
| 0.0769 | 3.0 | 1827 | 1.2674 | 0.7952 |
| 0.0163 | 4.0 | 2436 | 1.4839 | 0.7903 |
| 0.0122 | 5.0 | 3045 | 1.6177 | 0.7871 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0
- Datasets 1.10.2
- Tokenizers 0.10.3
|
diarsabri/LaDPR-context-encoder | e1b5d06963aa4d831908c310e71c73970250e168 | 2021-05-05T21:17:44.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers"
] | feature-extraction | false | diarsabri | null | diarsabri/LaDPR-context-encoder | 46 | null | transformers | 6,168 | Language Model 2
For Language agnostic Dense Passage Retrieval |
edwardgowsmith/pt-finegrained-few-shot | 3b0f2bed8b8393b141e84f58d11907b3faa1a3b0 | 2021-09-08T11:53:56.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | edwardgowsmith | null | edwardgowsmith/pt-finegrained-few-shot | 46 | null | transformers | 6,169 | Entry not found |
google/t5-efficient-base-nl32 | 922119e2e1c0bcef38c8cf3b54c730e25b21f874 | 2022-02-15T10:53:27.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-base-nl32 | 46 | 1 | transformers | 6,170 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-BASE-NL32 (Deep-Narrow version)
T5-Efficient-BASE-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-nl32** - is of model type **Base** with the following variations:
- **nl** is **32**
It has **553.36** million parameters and thus requires *ca.* **2213.43 MB** of memory in full precision (*fp32*)
or **1106.71 MB** of memory in half precision (*fp16* or *bf16*).
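These memory figures follow directly from the parameter count, assuming 4 bytes per parameter in full precision and 2 bytes in half precision (with MB taken as 10^6 bytes):
```python
params = 553.36e6        # parameters of t5-efficient-base-nl32
print(params * 4 / 1e6)  # ≈ 2213 MB in full precision (fp32)
print(params * 2 / 1e6)  # ≈ 1107 MB in half precision (fp16/bf16)
```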
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
gsarti/it5-large | d97d1e5ea4b0f789c8bd1cfb2e82b7a55852a500 | 2022-03-09T11:56:08.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/clean_mc4_it",
"arxiv:2203.03759",
"transformers",
"seq2seq",
"lm-head",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | gsarti | null | gsarti/it5-large | 46 | null | transformers | 6,171 | ---
language:
- it
datasets:
- gsarti/clean_mc4_it
tags:
- seq2seq
- lm-head
license: apache-2.0
inference: false
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# Italian T5 Large 🇮🇹
The [IT5](https://huggingface.co/models?search=it5) model family represents the first effort in pretraining large-scale sequence-to-sequence transformer models for the Italian language, following the approach adopted by the original [T5 model](https://github.com/google-research/text-to-text-transfer-transformer).
This model is released as part of the project ["IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation"](https://arxiv.org/abs/2203.03759) (to be released), by [Gabriele Sarti](https://gsarti.com/) and [Malvina Nissim](https://malvinanissim.github.io/) with the support of [Huggingface](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) and with TPU usage sponsored by Google's [TPU Research Cloud](https://sites.research.google/trc/). All the training was conducted on a single TPU3v8-VM machine on Google Cloud. Refer to the Tensorboard tab of the repository for an overview of the training process.
*The inference widget is deactivated because the model needs task-specific seq2seq fine-tuning on a downstream task to be useful in practice. The models in the [`it5`](https://huggingface.co/it5) organization provide some examples of this model fine-tuned on various downstream tasks.*
## Model variants
This repository contains the checkpoints for the `large` version of the model. The model was trained for one epoch (2.1M steps, see the table below) on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB) using 🤗 Datasets and the `google/t5-v1_1-large` improved configuration. The training procedure is made available [on Github](https://github.com/gsarti/t5-flax-gcp).
The following table summarizes the parameters for all available models
| |`it5-small` |`it5-base` |`it5-large` (this one) |`it5-base-oscar` |
|-----------------------|-----------------------|----------------------|-----------------------|----------------------------------|
|`dataset` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`oscar/unshuffled_deduplicated_it`|
|`architecture` |`google/t5-v1_1-small` |`google/t5-v1_1-base` |`google/t5-v1_1-large` |`t5-base` |
|`learning rate` | 5e-3 | 5e-3 | 5e-3 | 1e-2 |
|`steps` | 1'050'000 | 1'050'000 | 2'100'000 | 258'000 |
|`training time` | 36 hours | 101 hours | 370 hours | 98 hours |
|`ff projection` |`gated-gelu` |`gated-gelu` |`gated-gelu` |`relu` |
|`tie embeds` |`false` |`false` |`false` |`true` |
|`optimizer` | adafactor | adafactor | adafactor | adafactor |
|`max seq. length` | 512 | 512 | 512 | 512 |
|`per-device batch size`| 16 | 16 | 8 | 16 |
|`tot. batch size` | 128 | 128 | 64 | 128 |
|`weight decay` | 1e-3 | 1e-3 | 1e-2 | 1e-3 |
|`validation split size`| 15K examples | 15K examples | 15K examples | 15K examples |
The high training time of `it5-base-oscar` was due to [a bug](https://github.com/huggingface/transformers/pull/13012) in the training script.
For a list of individual model parameters, refer to the `config.json` file in the respective repositories.
## Using the models
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-large")
```
*Note: You will need to fine-tune the model on your downstream seq2seq task to use it. See an example [here](https://huggingface.co/gsarti/it5-base-nli).*
Flax and Tensorflow versions of the model are also available:
```python
from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration
model_flax = FlaxT5ForConditionalGeneration.from_pretrained("gsarti/it5-large")
model_tf = TFT5ForConditionalGeneration.from_pretrained("gsarti/it5-large")
```
## Limitations
Due to the nature of the web-scraped corpus on which IT5 models were trained, it is likely that their usage could reproduce and amplify pre-existing biases in the data, resulting in potentially harmful content such as racial or gender stereotypes and conspiracist views. For this reason, the study of such biases is explicitly encouraged, and model usage should ideally be restricted to research-oriented and non-user-facing endeavors.
## Model curators
For problems or updates on this model, please contact [[email protected]](mailto:[email protected]).
## Citation Information
```bibtex
@article{sarti-nissim-2022-it5,
title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
hfl/chinese-electra-large-generator | 4858952a4b13169d8e0754833d546169900ec845 | 2021-03-03T01:40:52.000Z | [
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | hfl | null | hfl/chinese-electra-large-generator | 46 | null | transformers | 6,172 | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
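For example, a minimal fill-mask sketch with this generator checkpoint (the example sentence is ours, not from the original project):
```python
from transformers import ElectraTokenizer, ElectraForMaskedLM, pipeline

tokenizer = ElectraTokenizer.from_pretrained("hfl/chinese-electra-large-generator")
model = ElectraForMaskedLM.from_pretrained("hfl/chinese-electra-large-generator")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"北京是中国的首{tokenizer.mask_token}。"))
```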
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
|
hfl/cino-small-v2 | 86df088ad499faaa108a0fcd8ba4f33674750139 | 2022-02-21T09:42:05.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | hfl | null | hfl/cino-small-v2 | 46 | 1 | transformers | 6,173 | ---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of contributions on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority-language corpora, such as:
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
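A minimal fill-mask sketch for this checkpoint (the example sentence is ours):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("hfl/cino-small-v2")
model = AutoModelForMaskedLM.from_pretrained("hfl/cino-small-v2")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"中国的首都是北{tokenizer.mask_token}。"))
```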
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
huggingartists/taylor-swift | c65013202368a1ace4d1bd2f9a0f5274a6b4ac42 | 2022-07-11T13:52:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/taylor-swift",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
] | text-generation | false | huggingartists | null | huggingartists/taylor-swift | 46 | null | transformers | 6,174 | ---
language: en
datasets:
- huggingartists/taylor-swift
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/721a6c465a666419bf286b473287c33f.446x446x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Taylor Swift</div>
<a href="https://genius.com/artists/taylor-swift">
<div style="text-align: center; font-size: 14px;">@taylor-swift</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Taylor Swift.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/taylor-swift).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/taylor-swift")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2l84tzp2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Taylor Swift's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1hy7aa65) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1hy7aa65/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/taylor-swift')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/taylor-swift")
model = AutoModelWithLMHead.from_pretrained("huggingartists/taylor-swift")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
jcblaise/electra-tagalog-base-uncased-discriminator | 82f1d59afc70abf3072cb46eed109dc0f2f397af | 2021-11-12T03:23:51.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
] | null | false | jcblaise | null | jcblaise/electra-tagalog-base-uncased-discriminator | 46 | null | transformers | 6,175 | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# ELECTRA Tagalog Base Uncased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
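As a hedged sketch (the label count below is an arbitrary example), loading the discriminator for a downstream classification task could look like this:
```python
from transformers import AutoTokenizer, ElectraForSequenceClassification

model_id = "jcblaise/electra-tagalog-base-uncased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForSequenceClassification.from_pretrained(
    model_id,
    num_labels=2,  # arbitrary example; set this to your task's label count
)
# The classification head is freshly initialized and still needs fine-tuning.
```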
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jkulhanek/augpt-mw-20 | 1b245b111d5554c78b3c82d28bd903b20070df8c | 2021-05-23T05:57:45.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | jkulhanek | null | jkulhanek/augpt-mw-20 | 46 | null | transformers | 6,176 | Entry not found |
megagonlabs/transformers-ud-japanese-electra-base-ginza | 5e3e4cf1fd0c0e5c15f2f1a778484883e7c25bfc | 2021-09-22T09:00:17.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:mC4 Japanese",
"arxiv:1910.10683",
"transformers",
"license:mit"
] | null | false | megagonlabs | null | megagonlabs/transformers-ud-japanese-electra-base-ginza | 46 | 1 | transformers | 6,177 | ---
language: ja
license: mit
datasets:
- mC4 Japanese
---
# transformers-ud-japanese-electra-ginza (sudachitra-wordpiece, mC4 Japanese)
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences extracted from the [mC4](https://huggingface.co/datasets/mc4) and finetuned by [spaCy v3](https://spacy.io/usage/v3) on [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html).
The base pretrained model is [megagonlabs/transformers-ud-japanese-electra-base-discriminator](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator), which requires [SudachiTra](https://github.com/WorksApplications/SudachiTra) for tokenization.
The entire spaCy v3 model is distributed as a python package named [`ja_ginza_electra`](https://pypi.org/project/ja-ginza-electra/) from PyPI along with [`GiNZA v5`](https://github.com/megagonlabs/ginza) which provides some custom pipeline components to recognize the Japanese bunsetu-phrase structures.
Try running it as follows:
```console
$ pip install ginza ja-ginza-electra
$ ginza
```
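Loading the pipeline from Python should work along these lines (a minimal sketch; the sample sentence is ours):
```python
import spacy

nlp = spacy.load("ja_ginza_electra")
doc = nlp("銀座でランチをご一緒しましょう。")
for token in doc:
    print(token.i, token.orth_, token.lemma_, token.pos_, token.dep_, token.head.i)
```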
## Licenses
The models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
## Acknowledgments
This model is permitted to be published under the `MIT License` under a joint research agreement between `NINJAL` (National Institute for Japanese Language and Linguistics) and `Megagon Labs Tokyo`.
## Citations
- [mC4](https://huggingface.co/datasets/mc4)
Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
- [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html)
```
Asahara, M., Kanayama, H., Tanaka, T., Miyao, Y., Uematsu, S., Mori, S.,
Matsumoto, Y., Omura, M., & Murawaki, Y. (2018).
Universal Dependencies Version 2 for Japanese.
In LREC-2018.
``` |
pdelobelle/robBERT-base | b4336a12103e60b43e0737167355f603ed5e2666 | 2021-05-20T19:16:19.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pdelobelle | null | pdelobelle/robBERT-base | 46 | null | transformers | 6,178 | Entry not found |
pucpr/bioBERTpt-squad-v1.1-portuguese | 972918d7b7b6ed71a752276a30d71f7e9654471a | 2021-05-20T03:08:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"pt",
"transformers",
"bioBERTpt",
"autotrain_compatible"
] | question-answering | false | pucpr | null | pucpr/bioBERTpt-squad-v1.1-portuguese | 46 | 5 | transformers | 6,179 | ---
language: pt
tags:
- question-answering
- bert
- bioBERTpt
- pytorch
metrics:
- squad
widget:
- text: "O que é AVC?"
context: "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte no Brasil e a principal causa de incapacidade em adultos, retirando do mercado de trabalho milhares de brasileiros. A cada 5 minutos ocorre uma morte por AVC em nosso país. Ele é uma alteração súbita na circulação de sangue em alguma região encéfalo (composto pelo cérebro, cerebelo e tronco encefálico)."
- text: "O que significa a sigla AVC?"
context: "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte no Brasil e a principal causa de incapacidade em adultos, retirando do mercado de trabalho milhares de brasileiros. A cada 5 minutos ocorre uma morte por AVC em nosso país. Ele é uma alteração súbita na circulação de sangue em alguma região encéfalo (composto pelo cérebro, cerebelo e tronco encefálico)."
- text: "Do que a região do encéfalo é composta?"
context: "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte no Brasil e a principal causa de incapacidade em adultos, retirando do mercado de trabalho milhares de brasileiros. A cada 5 minutos ocorre uma morte por AVC em nosso país. Ele é uma alteração súbita na circulação de sangue em alguma região encéfalo (composto pelo cérebro, cerebelo e tronco encefálico)."
- text: "O que causa a interrupção do oxigênio?"
context: "O oxigênio é um elemento essencial para a atividade normal do nosso corpo; ele juntamente com os nutrientes são transportados pelo sangue, através das nossas artérias, estas funcionam como mangueiras direcionando o sangue para regiões específicas. Quando esse transporte é impedido e o oxigênio não chega as áreas necessárias parte do encéfalo não consegue obter o sangue (e oxigênio) de que precisa, então ele e as células sofrem lesão ou morrem. Essa interrupção pode ser causada por duas razões, um entupimento ou um vazamento nas artérias. desta forma temos dois tipos de AVC."
---
# BioBERTpt-squad-v1.1-portuguese for QA (Question Answering)
This is a clinical and biomedical model fine-tuned for generic QA questions. The model was fine-tuned on SQuAD v1.1, using the Portuguese translation of the SQuAD v1.1 dataset from the Deep Learning Brasil group, on Google Colab. See more details [here](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese).
## Performance
The results obtained are the following:
```
f1 = 80.06
exact match = 67.52
```
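A minimal question-answering sketch (the question/context pair is taken from the widget examples above):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="pucpr/bioBERTpt-squad-v1.1-portuguese",
    tokenizer="pucpr/bioBERTpt-squad-v1.1-portuguese",
)
result = qa(
    question="O que significa a sigla AVC?",
    context=(
        "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte "
        "no Brasil e a principal causa de incapacidade em adultos."
    ),
)
print(result["answer"], result["score"])
```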
## See more
Our repo: https://github.com/HAILab-PUCPR/ |
shahukareem/wav2vec2-large-xlsr-53-dhivehi | 4a8b27a97dd143558a0feaa6164228ac708a7da3 | 2021-03-28T08:47:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shahukareem | null | shahukareem/wav2vec2-large-xlsr-53-dhivehi | 46 | null | transformers | 6,180 | ---
language: dv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Shahu Kareem XLSR Wav2Vec2 Large 53 Dhivehi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice dv
type: common_voice
args: dv
metrics:
- name: Test WER
type: wer
value: 32.85
---
# Wav2Vec2-Large-XLSR-53-Dhivehi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "dv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
# Preprocessing the datasets.
# We need to read the audio files as arrays and resample them to 16kHz
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dhivehi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "dv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\،\.\؟\!\'\"\–\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.85%
## Training
The Common Voice `train` and `validation` datasets were used for training.
## Example predictions
```
--
reference: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން
predicted: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން
--
reference: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިށްކޮށްލެވެ
predicted: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިއްކޮށްލެވެ ް
--
reference: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައާރަފްވި
predicted: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައަރަފްވި
--
reference: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރޫނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް
predicted: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް
--
``` |
tsdocode/text-to-sql | 1c92c784b2c568e9eb9915ffbdb1d3a15e066738 | 2021-09-03T06:21:03.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | tsdocode | null | tsdocode/text-to-sql | 46 | 1 | transformers | 6,181 | Simple text to SQL |
biu-nlp/contextualizer_qasrl | 8062db2421fe9d8358be48b8c13349fef335622b | 2022-04-13T20:41:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | biu-nlp | null | biu-nlp/contextualizer_qasrl | 46 | null | transformers | 6,182 | ---
license: mit
---
|
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_42 | dccb43de740f58cace45dee142dc20575349ad16 | 2022-05-10T23:20:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_42 | 46 | null | transformers | 6,183 | Entry not found |
sanjay-m1/informal-to-formal | d7abd7b10df02aaad7fce872942c93bf1b92debc | 2022-05-21T16:57:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sanjay-m1 | null | sanjay-m1/informal-to-formal | 46 | null | transformers | 6,184 | ## This model belongs to the Styleformer project
[Please refer to github page](https://github.com/PrithivirajDamodaran/Styleformer)
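Since this card has no usage section, here is a hedged sketch of direct use with Transformers; the input prefix and example sentence are assumptions based on the Styleformer project, so check the linked repository for the exact format:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sanjay-m1/informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("sanjay-m1/informal-to-formal")

# The prefix below is an assumption, not documented for this checkpoint.
text = "transfer Casual to Formal: gotta go now, talk to you later"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```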
|
deepparag/Aeona-Beta | 94f894e68388ad7db9cfb43a489ab6132892bc1f | 2022-07-26T00:23:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | deepparag | null | deepparag/Aeona-Beta | 46 | 1 | transformers | 6,185 | ---
thumbnail: https://images-ext-2.discordapp.net/external/Wvtx1L98EbA7DR2lpZPbDxDuO4qmKt03nZygATZtXgk/%3Fsize%3D4096/https/cdn.discordapp.com/avatars/931226824753700934/338a9e413bbceaeb9095a29e97d4fac0.png
tags:
- conversational
license: mit
---
# Aeona | Chatbot

A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
It is recommended to use it along with an [AIML chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot.
Using an AIML chatbot will also allow you to hardcode some replies.
# AEONA
Aeona is a chatbot which hopes to be able to talk with humans as if it's a friend!
Its main target platform is Discord.
You can invite the bot [here](https://aeona.xyz).
To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyx/).
Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user.
## Goals
The goal is to create an AI which will work with AIML in order to create the most human like AI.
#### Why not an AI on its own?
For an AI it is not (realistically) possible to learn about the user and store data on them, whereas an AIML chatbot can even execute code!
The goal of the AI is to generate responses where the AIML fails.
Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible!
So we use 3 datasets:
1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines!
2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages cover a wide variety of topics (filtered to remove spam), which makes the AI highly random but gives it a response to everyday questions! About 120 million messages!
3. A custom dataset scraped from my messages. These messages are very narrow; training on this dataset alone and sending a random reply will make the AI say sorry loads of times!
## Training
The Discord Messages dataset simply dwarfs the other datasets; hence, the datasets are repeated.
This leads to them covering each other's issues!
The AI has a context of 6 messages, which means it will reply until the 4th message from the user.
[Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1)
## Tips for Hugging Face inference
I recommend sending the user input and the
previous 3 AI and human responses.
Using more context than this will lead to useless responses; using less is alright, but the responses may be random.
## Evaluation
Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics.
| Model | Perplexity |
|---|---|
| Seq2seq Baseline [3] | 29.8 |
| Wolf et al. [5] | 16.3 |
| GPT-2 baseline | 99.5 |
| DialoGPT baseline | 56.6 |
| DialoGPT finetuned | 11.4 |
| PersonaGPT | 10.2 |
| **Aeona** | **7.9** |
## Usage
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona")
model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=4,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
    # pretty print last output tokens from bot
print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
mgonnav/finetuning-pysentimiento-war-tweets | db4407ee635fd9a249ea7332c00fa602e91c0610 | 2022-07-11T03:33:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | mgonnav | null | mgonnav/finetuning-pysentimiento-war-tweets | 46 | null | transformers | 6,186 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-pysentimiento-war-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-pysentimiento-war-tweets
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on a dataset of 1500 tweets from Peruvian accounts. It achieves the following results on the evaluation set:
- Loss: 1.7689
- Accuracy: 0.7378
- F1: 0.7456
## Model description
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) using five labels: **pro_russia**, **against_ukraine**, **neutral**, **against_russia**, **pro_ukraine**.
## Intended uses & limitations
This model shall be used to classify text (more specifically, Spanish tweets) as expressing a position concerning the Russo-Ukrainian war.
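A minimal classification sketch (the example tweet is ours, not from the dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mgonnav/finetuning-pysentimiento-war-tweets",
)
print(classifier("La invasión a Ucrania debe terminar ya."))
```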
## Training and evaluation data
We used an 80/20 training/test split on the aforementioned dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
IlyaGusev/ru-word-stress-transformer | ce030218eac4c63a98aa4a7e1f6250279a686e24 | 2022-07-13T15:34:21.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"ru",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | IlyaGusev | null | IlyaGusev/ru-word-stress-transformer | 46 | null | transformers | 6,187 | ---
language:
- ru
tags:
- token-classification
license: apache-2.0
inference: false
---
# RuWordStressTransformer
## Model description
Transformer encoder for predicting word stress in Russian.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
model_name = "IlyaGusev/ru-word-stress-transformer"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
revision="3400828"
)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipe = pipeline(
"token-classification",
model=model,
tokenizer=tokenizer,
device=-1,
aggregation_strategy="none",
ignore_labels=("NO",)
)
print(pipe("щеколда"))
``` |
duchung17/wav2vec2-base-vivos | 42688d1be97d5fd67f2b823ae0ea1d45213dc43c | 2022-07-12T08:19:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | duchung17 | null | duchung17/wav2vec2-base-vivos | 46 | null | transformers | 6,188 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-vivos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vivos
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4977
- Wer: 0.3249
## Model description
More information needed
## Intended uses & limitations
More information needed
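As a rough illustration, the model can be loaded through the automatic-speech-recognition pipeline. The audio path below is a placeholder, and the input is assumed to be 16 kHz mono speech.
```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file (placeholder path).
asr = pipeline(
    "automatic-speech-recognition",
    model="duchung17/wav2vec2-base-vivos",
)

result = asr("path/to/vivos_utterance.wav")
print(result["text"])
```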
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3297 | 2.0 | 500 | 2.6466 | 1.0022 |
| 0.895 | 4.0 | 1000 | 0.4831 | 0.4882 |
| 0.4523 | 6.0 | 1500 | 0.4175 | 0.4201 |
| 0.3638 | 8.0 | 2000 | 0.4043 | 0.3913 |
| 0.3086 | 10.0 | 2500 | 0.4165 | 0.3847 |
| 0.2744 | 12.0 | 3000 | 0.4035 | 0.3639 |
| 0.2464 | 14.0 | 3500 | 0.4226 | 0.3595 |
| 0.2182 | 16.0 | 4000 | 0.4392 | 0.3485 |
| 0.197 | 18.0 | 4500 | 0.4512 | 0.3482 |
| 0.1803 | 20.0 | 5000 | 0.4476 | 0.3368 |
| 0.1626 | 22.0 | 5500 | 0.4684 | 0.3392 |
| 0.1522 | 24.0 | 6000 | 0.4792 | 0.3328 |
| 0.1418 | 26.0 | 6500 | 0.4716 | 0.3241 |
| 0.1317 | 28.0 | 7000 | 0.4988 | 0.3252 |
| 0.127 | 30.0 | 7500 | 0.4977 | 0.3249 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Billwzl/20split_dataset | 5e544b6dfb1a29cc09ca803790c365cf513d4978 | 2022-07-14T03:21:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Billwzl | null | Billwzl/20split_dataset | 46 | 1 | transformers | 6,189 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0446
## Model description
More information needed
## Intended uses & limitations
More information needed
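Since this is a BERT-style masked language model, it can be queried through the fill-mask pipeline. The example sentence below is illustrative only.
```python
from transformers import pipeline

# Minimal sketch: print the top predictions for the masked token.
fill_mask = pipeline("fill-mask", model="Billwzl/20split_dataset")

for prediction in fill_mask("The weather today is really [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```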
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5971 | 1.0 | 11851 | 2.3479 |
| 2.3773 | 2.0 | 23702 | 2.2446 |
| 2.2663 | 3.0 | 35553 | 2.1630 |
| 2.1842 | 4.0 | 47404 | 2.1059 |
| 2.1145 | 5.0 | 59255 | 2.0626 |
| 2.0652 | 6.0 | 71106 | 2.0446 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
NAACL2022/spider-trivia-ctx-encoder | 535fe181313ccc8ed9e24071886b00f89e3f00a3 | 2022-07-09T19:19:59.000Z | [
"pytorch",
"dpr",
"arxiv:2112.07708",
"transformers"
] | null | false | NAACL2022 | null | NAACL2022/spider-trivia-ctx-encoder | 46 | 4 | transformers | 6,190 | # Spider-TriviaQA: Context Encoder
This is the context encoder of the model fine-tuned on TriviaQA (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note!** We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but token type ids are all 0s.
An example usage:
```python
from transformers import AutoTokenizer, DPRContextEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-trivia-ctx-encoder")
model = DPRContextEncoder.from_pretrained("NAACL2022/spider-trivia-ctx-encoder")
title = "Sauron"
context = "Sauron is the title character and main antagonist of J. R. R. Tolkien's \"The Lord of the Rings\"."
input_dict = tokenizer(title, context, return_tensors="pt")
# Drop token type ids so they default to all zeros, matching the note above.
del input_dict["token_type_ids"]
outputs = model(**input_dict)
# outputs.pooler_output holds the dense passage embedding.
```
|
DHBaek/gpt2-stackoverflow-question-contents-generator | 455d4f2115745affb720ad973f13a72413d8d668 | 2021-06-15T02:18:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | DHBaek | null | DHBaek/gpt2-stackoverflow-question-contents-generator | 45 | null | transformers | 6,191 | Entry not found |
Helsinki-NLP/opus-mt-pt-uk | 911074b88a1092b3d0a7dff4d8d02ee5571127d2 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pt",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pt-uk | 45 | null | transformers | 6,192 | ---
language:
- pt
- uk
tags:
- translation
license: apache-2.0
---
### por-ukr
* source group: Portuguese
* target group: Ukrainian
* OPUS readme: [por-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md)
* model: transformer-align
* source language(s): por
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.ukr | 39.8 | 0.616 |
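## Usage
A minimal translation sketch using the Marian classes (the example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pt-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Portuguese sentence into Ukrainian.
batch = tokenizer(["Bom dia, como vai?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```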
### System Info:
- hf_name: por-ukr
- source_languages: por
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'uk']
- src_constituents: {'por'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: ukr
- short_pair: pt-uk
- chrF2_score: 0.616
- bleu: 39.8
- brevity_penalty: 0.9990000000000001
- ref_len: 18933.0
- src_name: Portuguese
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: uk
- prefer_old: False
- long_pair: por-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
LegolasTheElf/Wav2Vec2_XLSR_Bengali_V3 | 8a21e35a22df68e076f974b5067c1af3a241a858 | 2022-01-26T14:29:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_XLSR_Bengali_V3 | 45 | null | transformers | 6,193 | Entry not found |
MoritzLaurer/covid-policy-roberta-21 | 0ebcdb512ccb44e3b3a4c4e30168b7d59cf1309e | 2021-05-20T12:11:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"transformers"
] | text-classification | false | MoritzLaurer | null | MoritzLaurer/covid-policy-roberta-21 | 45 | 1 | transformers | 6,194 | ---
language:
- en
tags:
- text-classification
metrics:
- accuracy (balanced)
- F1 (weighted)
widget:
- text: "All non-essential work activity will stop in Spain from tomorrow until 9 April but there is some confusion as to which jobs can continue under the new lockdown restrictions"
---
# Covid-Policy-RoBERTa-21
This model is currently in development at the Centre for European Policy Studies (CEPS).
The model is not yet recommended for use. A more detailed description will follow.
If you are interested in using deep learning to identify 20 different types of policy measures against COVID-19 in text (NPIs, "non-pharmaceutical interventions"), don't hesitate to [contact me](https://www.ceps.eu/ceps-staff/moritz-laurer/). |
ReynaQuita/twitter_disaster_bert_large | 43d82dca4c9ac69682c359530978652c4ade4908 | 2021-11-01T08:03:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ReynaQuita | null | ReynaQuita/twitter_disaster_bert_large | 45 | null | transformers | 6,195 | Entry not found |
SetFit/deberta-v3-large__sst2__train-16-8 | d47e833d4ad44464e2aa0be2208d4793beed093f | 2022-02-10T11:15:56.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-8 | 45 | null | transformers | 6,196 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-8
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on a few-shot subset of the SST-2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.6579
## Model description
More information needed
## Intended uses & limitations
More information needed
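As a rough illustration, the classifier can be run on raw text with the text-classification pipeline; the input sentence is illustrative and the label names come from the model's own config.
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned classifier on a single sentence.
classifier = pipeline(
    "text-classification",
    model="SetFit/deberta-v3-large__sst2__train-16-8",
)

# Returns a list such as [{"label": ..., "score": ...}].
print(classifier("A touching and beautifully shot film."))
```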
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7129 | 1.0 | 7 | 0.7309 | 0.2857 |
| 0.6549 | 2.0 | 14 | 0.7316 | 0.4286 |
| 0.621 | 3.0 | 21 | 0.7131 | 0.5714 |
| 0.3472 | 4.0 | 28 | 0.5703 | 0.4286 |
| 0.2041 | 5.0 | 35 | 0.6675 | 0.5714 |
| 0.031 | 6.0 | 42 | 1.6750 | 0.5714 |
| 0.0141 | 7.0 | 49 | 1.8743 | 0.5714 |
| 0.0055 | 8.0 | 56 | 1.1778 | 0.5714 |
| 0.0024 | 9.0 | 63 | 1.0699 | 0.5714 |
| 0.0019 | 10.0 | 70 | 1.0933 | 0.5714 |
| 0.0012 | 11.0 | 77 | 1.1218 | 0.7143 |
| 0.0007 | 12.0 | 84 | 1.1468 | 0.7143 |
| 0.0006 | 13.0 | 91 | 1.1584 | 0.7143 |
| 0.0006 | 14.0 | 98 | 1.3092 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
anindabitm/sagemaker-BioclinicalBERT-ADR | 98e337743a736257aedadf210f293104cfeb4d82 | 2021-11-18T19:24:42.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:ade_corpus_v2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | anindabitm | null | anindabitm/sagemaker-BioclinicalBERT-ADR | 45 | null | transformers | 6,197 | ---
tags:
- generated_from_trainer
datasets:
- ade_corpus_v2
model-index:
- name: sagemaker-BioclinicalBERT-ADR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-BioclinicalBERT-ADR
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the ade_corpus_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
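As a rough illustration, the model can be used for extractive question answering; the question and context below are illustrative only.
```python
from transformers import pipeline

# Minimal sketch: the model expects a question plus a context passage.
qa = pipeline(
    "question-answering",
    model="anindabitm/sagemaker-BioclinicalBERT-ADR",
)

result = qa(
    question="What adverse reaction did the patient develop?",
    context="The patient developed a severe rash after starting amoxicillin.",
)
print(result["answer"], result["score"])
```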
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 171 | 0.9441 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
benjamin/gpt2-wechsel-french | 81aa3fe79c2b6b714b1fc460b4d9609153338847 | 2022-07-13T23:44:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"fr",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-french | 45 | null | transformers | 6,198 | ---
language: fr
license: mit
---
# gpt2-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
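## Usage
A minimal French text-generation sketch (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-french")

# Generate a short continuation of a French prompt.
outputs = generator(
    "Le château se dressait au sommet de la colline",
    max_length=40,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```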
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
browndw/docusco-bert | 1cd3532231d21577c5cb1bc14f0a991d6f803717 | 2022-07-22T22:10:46.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"en",
"dataset:COCA",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible"
] | token-classification | false | browndw | null | browndw/docusco-bert | 45 | null | transformers | 6,199 | ---
language: en
datasets: COCA
---
# docusco-bert
## Model description
**docusco-bert** is a fine-tuned BERT model that is ready to use for **token classification**. The model was trained on data sampled from the Corpus of Contemporary American English ([COCA](https://www.english-corpora.org/coca/)) and classifies tokens and token sequences according to a system developed for the [**DocuScope**](https://www.cmu.edu/dietrich/english/research-and-publications/docuscope.html) dictionary-based tagger. Descriptions of the categories are included in a table below.
## About DocuScope
DocuScope is a dictionary-based tagger that has been developed at Carnegie Mellon University by **David Kaufer** and **Suguru Ishizaki** since the early 2000s. Its categories are rhetorical in their orientation (as opposed to part-of-speech tags, for example, which are morphosyntactic).
DocuScope has been used in [a wide variety of studies](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=docuscope&btnG=). Here, for example, is [a short analysis of King Lear](https://graphics.cs.wisc.edu/WP/vep/2017/02/14/guest-post-data-mining-king-lear/), and here is [a published study of Tweets](https://journals.sagepub.com/doi/full/10.1177/2055207619844865).
## Intended uses & limitations
#### How to use
The model was trained on data with tags formatted using [IOB](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)), like those used in common tasks such as Named Entity Recognition (NER). Thus, you can use this model with a Transformers NER *pipeline*.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("browndw/docusco-bert")
model = AutoModelForTokenClassification.from_pretrained("browndw/docusco-bert")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Globalization is the process of interaction and integration among people, companies, and governments worldwide."
ds_results = nlp(example)
print(ds_results)
```
#### Limitations and bias
This model is limited by its training dataset of American English texts. Moreover, the current version is trained on only a small subset of the corpus. The goal is to train later versions on more data, which should increase accuracy.
## Training data
This model was fine-tuned on data from the Corpus of Contemporary American English ([COCA](https://www.english-corpora.org/coca/)). The training data contain chunks of text randomly sampled from 5 text-types: Academic, Fiction, Magazine, News, and Spoken.
Typically, BERT models are trained on sentence segments. However, DocuScope tags can span sentences. Thus, data were split into chunks that don't split **B + I** sequences and end with sentence-final punctuation marks (i.e., period, question mark, or exclamation point).
Additionally, the order of the chunks was randomized prior to sampling, and stratified sampling was used to provide enough training data for low-frequency categories. The resulting training data consist of:
* 21,460,177 tokens
* 15,796,305 chunks
The specific counts for each category appear in the following table.
Category|Count
-|-
O|3528038
Syntactic Complexity|2032808
Character|1413771
Description|1224744
Narrative|1159201
Negative|651012
Academic Terms|620932
Interactive|594908
Information Exposition|578228
Positive|463914
Force Stressed|432631
Information Topics|394155
First Person|249744
Metadiscourse Cohesive|240822
Strategic|238255
Public Terms|234213
Reasoning|213775
Information Place|187249
Information States|173146
Information ReportVerbs|119092
Confidence High|112861
Confidence Hedged|110008
Future|96101
Inquiry|94995
Contingent|94860
Information Change|89063
Metadiscourse Interactive|84033
Updates|81424
Citation|71241
Facilitate|50451
Uncertainty|35644
Academic WritingMoves|29352
Information ChangePositive|28475
Responsibility|25362
Citation Authority|22414
Information ChangeNegative|15612
Confidence Low|2876
Citation Hedged|895
-|-
Total|15796305
## Training procedure
This model was trained on a single 2.3 GHz Dual-Core Intel Core i5 with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805).
## Eval results
### Overall
metric|test
-|-
f1 |.927
accuracy |.943
### By category
category|precision|recall|f1-score|support
-|-|-|-|-
AcademicTerms|0.91|0.92|0.92|486399
AcademicWritingMoves|0.76|0.82|0.79|20017
Character|0.94|0.95|0.94|1260272
Citation|0.92|0.94|0.93|50812
CitationAuthority|0.86|0.88|0.87|17798
CitationHedged|0.91|0.94|0.92|632
ConfidenceHedged|0.94|0.96|0.95|90393
ConfidenceHigh|0.92|0.94|0.93|113569
ConfidenceLow|0.79|0.81|0.80|2556
Contingent|0.92|0.94|0.93|81366
Description|0.87|0.89|0.88|1098598
Facilitate|0.87|0.90|0.89|41760
FirstPerson|0.96|0.98|0.97|330658
ForceStressed|0.93|0.94|0.93|436188
Future|0.90|0.93|0.92|93365
InformationChange|0.88|0.91|0.89|72813
InformationChangeNegative|0.83|0.85|0.84|12740
InformationChangePositive|0.82|0.86|0.84|22994
InformationExposition|0.94|0.95|0.95|468078
InformationPlace|0.95|0.96|0.96|147688
InformationReportVerbs|0.91|0.93|0.92|95563
InformationStates|0.95|0.95|0.95|139429
InformationTopics|0.90|0.92|0.91|328152
Inquiry|0.85|0.89|0.87|79030
Interactive|0.95|0.96|0.95|602857
MetadiscourseCohesive|0.97|0.98|0.98|195548
MetadiscourseInteractive|0.92|0.94|0.93|73159
Narrative|0.92|0.94|0.93|1023452
Negative|0.88|0.89|0.88|645810
Positive|0.87|0.89|0.88|409775
PublicTerms|0.91|0.92|0.91|184108
Reasoning|0.93|0.95|0.94|169208
Responsibility|0.83|0.87|0.85|21819
Strategic|0.88|0.90|0.89|193768
SyntacticComplexity|0.95|0.96|0.96|1635918
Uncertainty|0.87|0.91|0.89|33684
Updates|0.91|0.93|0.92|77760
-|-|-|-|-
micro avg|0.92|0.93|0.93|10757736
macro avg|0.90|0.92|0.91|10757736
weighted avg|0.92|0.93|0.93|10757736
## DocuScope Category Descriptions
Category (Cluster)|Description|Examples
-|-|-
Academic Terms|Abstract, rare, specialized, or disciplinary-specific terms that are indicative of informationally dense writing|*market price*, *storage capacity*, *regulatory*, *distribution*
Academic Writing Moves|Phrases and terms that indicate academic writing moves, which are common in research genres and are derived from the work of Swales (1981) and Cotos et al. (2015, 2017)|*in the first section*, *the problem is that*, *payment methodology*, *point of contention*
Character|References multiple dimensions of a character or human being as a social agent, both individual and collective|*Pauline*, *her*, *personnel*, *representatives*
Citation|Language that indicates the attribution of information to, or citation of, another source.|*according to*, *is proposing that*, *quotes from*
Citation Authorized|Referencing the citation of another source that is represented as true and not arguable|*confirm that*, *provide evidence*, *common sense*
Citation Hedged|Referencing the citation of another source that is presented as arguable|*suggest that*, *just one opinion*
Confidence Hedged|Referencing language that presents a claim as uncertain|*tends to get*, *maybe*, *it seems that*
Confidence High|Referencing language that presents a claim with certainty|*most likely*, *ensure that*, *know that*, *obviously*
Confidence Low|Referencing language that presents a claim as extremely unlikely|*unlikely*, *out of the question*, *impossible*
Contingent|Referencing contingency, typically contingency in the world, rather than contingency in one's knowledge|*subject to*, *if possible*, *just in case*, *hypothetically*
Description|Language that evokes sights, sounds, smells, touches and tastes, as well as scenes and objects|*stay quiet*, *gas-fired*, *solar panels*, *soft*, *on my desk*
Facilitate|Language that enables or directs one through specific tasks and actions|*let me*, *worth a try*, *I would suggest*
First Person|This cluster captures first person.|*I*, *as soon as I*, *we have been*
Force Stressed|Language that is forceful and stressed, often using emphatics, comparative forms, or superlative forms|*really good*, *the sooner the better*, *necessary*
Future|Referencing future actions, states, or desires|*will be*, *hope to*, *expected changes*
Information Change|Referencing changes of information, particularly changes that are more neutral|*changes*, *revised*, *growth*, *modification to*
Information Change Negative|Referencing negative change|*going downhill*, *slow erosion*, *get worse*
Information Change Positive|Referencing positive change|*improving*, *accrued interest*, *boost morale*
Information Exposition|Information in the form of expository devices, or language that describes or explains, frequently in regards to quantities and comparisons|*final amount*, *several*, *three*, *compare*, *80%*
Information Place|Language designating places|*the city*, *surrounding areas*, *Houston*, *home*
Information Report Verbs|Informational verbs and verb phrases of reporting|*report*, *posted*, *release*, *point out*
Information States|Referencing information states, or states of being|*is*, *are*, *existing*, *been*
Information Topics|Referencing topics, usually nominal subjects or objects, that indicate the “aboutness” of a text|*time*, *money*, *stock price*, *phone interview*
Inquiry|Referencing inquiry, or language that points to some kind of inquiry or investigation|*find out*, *let me know if you have any questions*, *wondering if*
Interactive|Addresses from the author to the reader or from persons in the text to other persons. The address comes in the language of everyday conversation, colloquy, exchange, questions, attention-getters, feedback, interactive genre markers, and the use of the second person.|*can you*, *thank you for*, *please see*, *sounds good to me*
Metadiscourse Cohesive|The use of words to build cohesive markers that help the reader navigate the text and signal linkages in the text, which are often additive or contrastive|*or*, *but*, *also*, *on the other hand*, *notwithstanding*, *that being said*
Metadiscourse Interactive|The use of words to build cohesive markers that interact with the reader|*I agree*, *let’s talk*, *by the way*
Narrative|Language that involves people, description, and events extending in time|*today*, *tomorrow*, *during the*, *this weekend*
Negative|Referencing dimensions of negativity, including negative acts, emotions, relations, and values|*does not*, *sorry for*, *problems*, *confusion*
Positive|Referencing dimensions of positivity, including actions, emotions, relations, and values|*thanks*, *approval*, *agreement*, *looks good*
Public Terms|Referencing public terms, concepts from public language, media, the language of authority, institutions, and responsibility|*discussion*, *amendment*, *corporation*, *authority*, *settlement*
Reasoning|Language that has a reasoning focus, supporting inferences about cause, consequence, generalization, concession, and linear inference either from premise to conclusion or conclusion to premise|*because*, *therefore*, *analysis*, *even if*, *as a result*, *indicating that*
Responsibility|Referencing the language of responsibility|*supposed to*, *requirements*, *obligations*
Strategic|This dimension is active when the text structures strategies activism, advantage-seeking, game-playing cognition, plans, and goal-seeking.|*plan*, *trying to*, *strategy*, *decision*, *coordinate*, *look at the*
Syntactic Complexity|The features in this category are often what are called “function words,” like determiners and prepositions.|*the*, *to*, *for*, *in*, *a lot of*
Uncertainty|References uncertainty, when confidence levels are unknown|*kind of*, *I have no idea*, *for some reason*
Updates|References updates that anticipate someone searching for information and receiving it|*already*, *a new*, *now that*, *here are some*
### BibTeX entry and citation info
```
@incollection{ishizaki2012computer,
title = {Computer-aided rhetorical analysis},
author = {Ishizaki, Suguru and Kaufer, David},
booktitle= {Applied natural language processing: Identification, investigation and resolution},
pages = {276--296},
year = {2012},
publisher= {IGI Global},
url = {https://www.igi-global.com/chapter/content/61054}
}
```
```
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|