modelId (string, 4–112) | sha (string, 40) | lastModified (string, 24) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38, ⌀) | config (null) | id (string, 4–112) | downloads (float64, 0–36.8M, ⌀) | likes (float64, 0–712, ⌀) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
IIC/roberta-base-spanish-squades | e4941d7fc14e98b310699ab895874781edb4f4ef | 2022-04-02T15:10:43.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:squad_es",
"arxiv:2107.07253",
"transformers",
"model-index",
"autotrain_compatible"
] | question-answering | false | IIC | null | IIC/roberta-base-spanish-squades | 267 | 1 | transformers | 3,200 | ---
language:
- es
tags:
- question-answering # Example: audio
datasets:
- squad_es
metrics:
- f1
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: roberta-base-spanish-squades
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: question-answering # Optional. Example: Speech Recognition
dataset:
type: squad_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: squad_es # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: f1
value: 81.8
name: f1
---
This model was trained on the [SQUAD-ES](https://huggingface.co/datasets/squad_es) dataset, a question-answering dataset automatically translated from SQuAD into Spanish. The model itself is a fine-tuned version of [MarIA-Roberta](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), a Spanish RoBERTa developed by the BSC under the MarIA project.
For training the model, we followed the recommendations of the original authors in [their paper](https://arxiv.org/abs/2107.07253): we performed a full grid search over the hyperparameter space provided in the paper and selected the best model based on eval\_loss.
You can use the model like this:
```python
from transformers import RobertaTokenizer, RobertaForQuestionAnswering
import torch
tokenizer = RobertaTokenizer.from_pretrained("IIC/roberta-base-spanish-squades")
model = RobertaForQuestionAnswering.from_pretrained("IIC/roberta-base-spanish-squades")
question, text = "Quién es el padre de Luke Skywalker?", "En la famosa película, Darth Vader le dice a Luke Skywalker aquella frase que todos recordamos: yo soy tu padre."
inputs = tokenizer(question, text, return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
0xDEADBEA7/DialoGPT-small-rick | c1d2dd6d26adb9a682148b406ffc50d73512f132 | 2022-02-22T05:30:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | 0xDEADBEA7 | null | 0xDEADBEA7/DialoGPT-small-rick | 266 | null | transformers | 3,201 | ---
tags:
- conversational
---
# Rick n Morty DialoGPT Model |
Kryptone/RinAI | f20239cb44ed6cb5f1afb77c071663c9d67ecc9b | 2021-10-05T18:00:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Kryptone | null | Kryptone/RinAI | 266 | null | transformers | 3,202 | ---
tags:
- conversational
---
# Rin chatbot |
Manthan/DialoGPT-small-harrypotter | 11a3d9ecc5ba8fc124dc364d623c79f04b4fab3d | 2021-09-09T14:55:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Manthan | null | Manthan/DialoGPT-small-harrypotter | 266 | null | transformers | 3,203 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Zeph/DialoGPT-small-rick | eda6c713e144f6977ea7cb6dd4d2e054b7a260b2 | 2021-09-03T09:00:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Zeph | null | Zeph/DialoGPT-small-rick | 266 | null | transformers | 3,204 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
deep-learning-analytics/automatic-title-generation | fd90569ff791e8ec0febba0f3ddc757e53d0e126 | 2022-01-23T18:42:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | deep-learning-analytics | null | deep-learning-analytics/automatic-title-generation | 266 | 1 | transformers | 3,205 | Entry not found |
jamestop00/DialoGPT-spike-medium | 11b8dbe4351ed92d82411d0e9c2881915d06d8dc | 2022-01-19T04:39:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jamestop00 | null | jamestop00/DialoGPT-spike-medium | 266 | null | transformers | 3,206 | ---
tags:
- conversational
---
# Spike DialoGPT Model |
zenham/mskeen_m_e4_16h | 515ecd220f0b286f31e4ff87fce7eb1fe4abbbb0 | 2022-03-08T00:18:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | zenham | null | zenham/mskeen_m_e4_16h | 266 | null | transformers | 3,207 | ---
tags:
- conversational
---
# mskeen m e4 16h 0k DialoGPT Model |
EuropeanTurtle/DialoGPT-small-mrcobb | 8f462a887ae90b382222d845b8e9552c65004c06 | 2021-11-13T10:14:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | EuropeanTurtle | null | EuropeanTurtle/DialoGPT-small-mrcobb | 265 | null | transformers | 3,208 | ---
tags:
- conversational
---
# MrCobb DialoGPT Model |
Geezy/DialoGPT-small-guy | c6e9d240f646e3adccb6b017f7bb1189aaa82729 | 2021-08-31T15:29:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Geezy | null | Geezy/DialoGPT-small-guy | 265 | null | transformers | 3,209 | ---
tags:
- conversational
---
# Guy DialoGPT Model |
Mona/DialoGPT-small-harrypotter | 378e619df2eab5778cbb7b0c0025a274dc21ffc5 | 2021-10-07T11:34:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Mona | null | Mona/DialoGPT-small-harrypotter | 265 | null | transformers | 3,210 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
NikhilKrishna/DialoGPT-medium-harrypotter | dc2fdb19a57751903888b7e51917aad5e81c0ce5 | 2022-01-02T08:01:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | NikhilKrishna | null | NikhilKrishna/DialoGPT-medium-harrypotter | 265 | null | transformers | 3,211 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
jaynlp/t5-large-samsum | 31c2f3e0c2b8ed603bb9f54c894a1610f0259600 | 2022-02-17T11:09:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jaynlp | null | jaynlp/t5-large-samsum | 265 | 1 | transformers | 3,212 | Pre-trained t5-large on the SAMSum Dialogue Summarization corpus.
We used the following prompt:
```
Summarize this dialogue:
<DIALOGUE>
...
``` |
projecte-aina/m2m100_418M_ft_ca_zh | 03846cad055f4566b26372ebbfe3b062af83c95b | 2022-07-25T06:47:28.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"ca",
"zh",
"dataset:projecte-aina/ca_zh_wikipedia",
"transformers",
"translation",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | projecte-aina | null | projecte-aina/m2m100_418M_ft_ca_zh | 265 | null | transformers | 3,213 | ---
inference: false
license: cc-by-4.0
language:
- ca
- zh
tags:
- translation
datasets:
- projecte-aina/ca_zh_wikipedia
metrics:
- "bleu"
model-index:
- name: m2m100_418M_ft_ca_zh
results:
- task:
type: translation
dataset:
type: flores
name: Flores
metrics:
- name: BLEU
type: bleu
value: 24.9
---
## m2m100 fine-tuned on the ca_zh_wikipedia dataset for machine translation
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
## Model description
This model was obtained by fine-tuning the [m2m100_418M](https://huggingface.co/facebook/m2m100_418M) model on a Ca-Zh machine translation task with the [ca_zh_wikipedia](https://huggingface.co/datasets/projecte-aina/ca_zh_wikipedia) dataset that has been created along with the model. We also evaluate it on a general-domain multilingual testset [Flores-101](https://github.com/facebookresearch/flores).
## Intended Uses and Limitations
You can use this model for machine translation from Catalan to Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("projecte-aina/m2m100_418M_ft_ca_zh")
model = AutoModelForSeq2SeqLM.from_pretrained("projecte-aina/m2m100_418M_ft_ca_zh")
```
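The snippet above only loads the weights. A minimal end-to-end sketch (our assumption, following the usual M2M100 generation recipe with `src_lang` and `get_lang_id`; the example sentence is ours) could look like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/m2m100_418M_ft_ca_zh")
model = AutoModelForSeq2SeqLM.from_pretrained("projecte-aina/m2m100_418M_ft_ca_zh")

# Catalan source sentence (hypothetical example)
tokenizer.src_lang = "ca"
encoded = tokenizer("El temps avui és molt assolellat.", return_tensors="pt")

# Force Chinese as the target language, as in the standard M2M100 usage
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("zh"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```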
## Training
### Training Data
As data for fine-tuning, we used the [ca_zh_wikipedia](https://huggingface.co/datasets/projecte-aina/ca_zh_wikipedia) dataset extracted from Wikipedia.
### Training Procedure
#### Tokenization
The original [m2m100_418M](https://huggingface.co/facebook/m2m100_418M) model's SentencePiece tokenizer was used. The fine-tuning dataset, which contained both simplified and traditional Chinese, was reduced to its simplified form.
#### Hyperparameters
The model was trained for 15 epochs with the default parameters and \\(LR = 2\mathrm{e}{-5}\\).
## Evaluation
### Variable and Metrics
We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores).
### Evaluation Results
Below are the evaluation results on the machine translation from Catalan to Chinese compared with the original m2m100 on a testset: [Flores-101](https://github.com/facebookresearch/flores).
|Test set | Model | BLEU |
| ------------|-------------| -----|
|Flores-101 | m2m100 | 24.6 |
| | m2m100_418M_ft_ca_zh | **24.9** |
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
|
Lujia/backdoored_bert | 09110f3fc118b02f437c26b575c71df097b06664 | 2021-05-18T21:29:51.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Lujia | null | Lujia/backdoored_bert | 264 | null | transformers | 3,214 |
This model was created for a research study and contains a backdoor. Please use it for academic research only; do not use it in business scenarios.
debatelab/argument-analyst | 18d5adb1596ba0f7b3dcca0ba7450a1dd937d374 | 2021-12-06T12:23:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:debatelab/aaac",
"arxiv:2110.01509",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | debatelab | null | debatelab/argument-analyst | 264 | null | transformers | 3,215 | ---
language:
- "en"
license: "cc-by-sa-4.0"
datasets:
- debatelab/aaac
widget:
- text: "reason_statements: argument_source: If Peter likes fish, Peter has been to New York. So, Peter has been to New York."
example_title: "Premise identification"
- text: "argdown_reconstruction: argument_source: If Peter likes fish, Peter has been to New York. So, Peter has been to New York."
example_title: "Argdown reconstruction"
- text: "premises_formalized: reason_statements: If Peter likes fish, Peter has been to New York. (ref: (1))"
example_title: "Formalization"
inference:
parameters:
max_length: 80
---
Pretraining Dataset: [AAAC01](https://huggingface.co/datasets/debatelab/aaac)
Demo: [DeepA2 Demo](https://huggingface.co/spaces/debatelab/deepa2-demo)
Paper: [DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models](https://arxiv.org/abs/2110.01509)
Authors: *Gregor Betz, Kyle Richardson*
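To try the widget prompts above programmatically, a hedged sketch with the standard `text2text-generation` pipeline (the prompt string is taken verbatim from the widget examples):

```python
from transformers import pipeline

# Hedged sketch: load this repo into a generic text2text pipeline
analyst = pipeline("text2text-generation", model="debatelab/argument-analyst")

prompt = ("reason_statements: argument_source: If Peter likes fish, "
          "Peter has been to New York. So, Peter has been to New York.")
print(analyst(prompt, max_length=80))
```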
## Abstract
In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence. |
AbhilashDatta/T5_qgen-squad_v2 | b3c4358dda693920dc657a0bd54510977162a411 | 2022-05-31T02:44:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | AbhilashDatta | null | AbhilashDatta/T5_qgen-squad_v2 | 264 | 1 | transformers | 3,216 | ---
license: afl-3.0
---
# Question generation using T5 transformer trained on SQuAD
<h2> <i>Input format: context: "..." answer(optional): "..." </i></h2>
Import the pretrained model as well as tokenizer:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained('AbhilashDatta/T5_qgen-squad_v2')
tokenizer = T5Tokenizer.from_pretrained('AbhilashDatta/T5_qgen-squad_v2')
```
Then use the tokenizer to encode/decode and model to generate:
```python
import torch

input = "context: My name is Abhilash Datta. answer: Abhilash"
batch = tokenizer(input, padding='longest', max_length=512, return_tensors='pt')
inputs_batch = batch['input_ids'][0]
inputs_batch = torch.unsqueeze(inputs_batch, 0)
ques_id = model.generate(inputs_batch, max_length=100, early_stopping=True)
ques_batch = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in ques_id]
print(ques_batch)
```
Output:
```
['what is my name']
``` |
Cryptikdw/DialoGPT-small-rick | 3b042c7925bf2da80def42da65a2d07a2d294228 | 2021-08-26T19:40:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Cryptikdw | null | Cryptikdw/DialoGPT-small-rick | 263 | null | transformers | 3,217 | ---
tags:
- conversational
---
# rick DialoGPT Model |
coderpotter/adversarial-paraphrasing-detector | 50d526fd495debacacff67ad7260c1791ebe5e1b | 2021-10-05T20:09:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | coderpotter | null | coderpotter/adversarial-paraphrasing-detector | 263 | 1 | transformers | 3,218 | This model is a paraphrase detector trained on the Adversarial Paraphrasing datasets described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
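The card ships no usage snippet; a minimal sketch, under the assumption that the model scores an ordinary sentence pair, might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("coderpotter/adversarial-paraphrasing-detector")
model = AutoModelForSequenceClassification.from_pretrained("coderpotter/adversarial-paraphrasing-detector")

# Assumption: the two sentences are encoded as a standard text pair
inputs = tokenizer("A man is playing a guitar.",
                   "Someone is strumming an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class meanings are defined by the model's own label config
```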
Please cite the following if you use this model:
```bibtex
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
``` |
truthisneverlinear/EleventhDoctor | af3921ccce97dff1bd276441034c2a47620b52e8 | 2022-01-24T12:52:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | truthisneverlinear | null | truthisneverlinear/EleventhDoctor | 263 | null | transformers | 3,219 | ---
tags:
- conversational
---
# DialoGPT Model: Eleventh Doctor from Doctor Who
There are still many bugs, and I cannot fix them. |
vinvino02/glpn-nyu | eec5f7782e1f9feab4c5e9726bdd8953c772934c | 2022-04-14T11:52:30.000Z | [
"pytorch",
"glpn",
"arxiv:2201.07436",
"transformers",
"vision",
"depth-estimation",
"license:apache-2.0"
] | null | false | vinvino02 | null | vinvino02/glpn-nyu | 263 | null | transformers | 3,220 | ---
license: apache-2.0
tags:
- vision
- depth-estimation
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# GLPN fine-tuned on NYUv2
Global-Local Path Networks (GLPN) model trained on NYUv2 for monocular depth estimation. It was introduced in the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Kim et al. and first released in [this repository](https://github.com/vinvino02/GLPDepth).
Disclaimer: The team releasing GLPN did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation.

## Intended uses & limitations
You can use the raw model for monocular depth estimation. See the [model hub](https://huggingface.co/models?search=glpn) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-nyu")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu")
# prepare image for the model
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/glpn).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-07436,
author = {Doyeon Kim and
Woonghyun Ga and
Pyunghwan Ahn and
Donggyu Joo and
Sehwan Chun and
Junmo Kim},
title = {Global-Local Path Networks for Monocular Depth Estimation with Vertical
CutDepth},
journal = {CoRR},
volume = {abs/2201.07436},
year = {2022},
url = {https://arxiv.org/abs/2201.07436},
eprinttype = {arXiv},
eprint = {2201.07436},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07436.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/tapex-base-finetuned-wtq | 12b504c617464103bc4ad03be8bd5c7a40787f51 | 2022-07-14T10:12:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wikitablequestions",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:mit",
"autotrain_compatible"
] | table-question-answering | false | microsoft | null | microsoft/tapex-base-finetuned-wtq | 263 | 1 | transformers | 3,221 | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
license: mit
---
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-base` model fine-tuned on the [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) dataset.
## Intended Uses
You can use the model for table question answering on *complex* questions. Some **solvable** questions are shown below (corresponding tables not shown):
| Question | Answer |
|:---: |:---:|
| according to the table, what is the last title that spicy horse produced? | Akaneiro: Demon Hunters |
| what is the difference in runners-up from coleraine academical institution and royal school dungannon? | 20 |
| what were the first and last movies greenstreet acted in? | The Maltese Falcon, Malaya |
| in which olympic games did arasay thondike not finish in the top 20? | 2012 |
| which broadcaster hosted 3 titles but they had only 1 episode? | Channel 4 |
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base-finetuned-wtq")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008.0']
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
LIAMF-USP/aristo-roberta | 15c9c738bb5ef55900c5dfc31965a8033d533f93 | 2021-05-20T12:04:27.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"english",
"dataset:race",
"dataset:ai2_arc",
"dataset:openbookqa",
"transformers",
"license:mit"
] | multiple-choice | false | LIAMF-USP | null | LIAMF-USP/aristo-roberta | 262 | null | transformers | 3,222 | ---
language: "english"
license: "mit"
datasets:
- race
- ai2_arc
- openbookqa
metrics:
- accuracy
---
# Roberta Large Fine Tuned on RACE
## Model description
This model follows the implementation by the Allen AI team of the [Aristo Roberta V7 Model](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) given in the [ARC Challenge](https://leaderboard.allenai.org/arc/submissions/public).
#### How to use
```python
import datasets
import logging
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice

MAX_SEQ_LENGTH = 256  # matches the max_length hyperparameter reported below

tokenizer = RobertaTokenizer.from_pretrained(
    "LIAMF-USP/aristo-roberta")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/aristo-roberta")
dataset = datasets.load_dataset(
    "arc",
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example=training_examples[0]
example_id = example["example_id"]
question = example["question"]
label_example = example["answer"]
options = example["options"]
if label_example in ["A", "B", "C", "D", "E"]:
label_map = {label: i for i, label in enumerate(
["A", "B", "C", "D", "E"])}
elif label_example in ["1", "2", "3", "4", "5"]:
label_map = {label: i for i, label in enumerate(
["1", "2", "3", "4", "5"])}
else:
print(f"{label_example} not found")
while len(options) < 5:
empty_option = {}
empty_option['option_context'] = ''
empty_option['option_text'] = ''
options.append(empty_option)
choices_inputs = []
for ending_idx, option in enumerate(options):
ending = option["option_text"]
context = option["option_context"]
if question.find("_") != -1:
# fill in the banks questions
question_option = question.replace("_", ending)
else:
question_option = question + " " + ending
inputs = tokenizer(
context,
question_option,
add_special_tokens=True,
max_length=MAX_SEQ_LENGTH,
padding="max_length",
truncation=True,
return_overflowing_tokens=False,
)
if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
logging.warning(f"Question: {example_id} with option {ending_idx} was truncated")
choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
[x["attention_mask"] for x in choices_inputs]
# as the senteces follow the same structure, just one of them is
# necessary to check
if "attention_mask" in choices_inputs[0]
else None
)
# example_id identifies the instance; it is not a model input
example_encoded = {
    "input_ids": torch.tensor([input_ids]),
    "attention_mask": torch.tensor([attention_mask]),
    "labels": torch.tensor([label]),
}
output = model(**example_encoded)
```
## Training data
The training data was the same as proposed [here](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0).
The only difference was in the hyperparameters of the RACE fine-tuned model, which are reported [here](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race#eval-results).
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The hyperparameters used were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 16 |
| train_batch_size | 4 |
| fp16 | True |
| gradient_accumulation_steps | 4 |
| learning_rate | 0.00001 |
| warmup_steps | 0.06 |
| max_length | 256 |
| epochs | 4 |
The other parameters were the default ones from [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) and [Trainer Arguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
## Eval results:
| Dataset | Acc |
|:----:|:----:|
| Challenge Test | 65.358 |
**The model was trained with a TITAN RTX**
|
digitalepidemiologylab/covid-twitter-bert-v2-mnli | 234a09dbd327036e566d78c364feabe3bc86de61 | 2021-09-22T08:20:04.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:mnli",
"arxiv:1909.00161",
"transformers",
"Twitter",
"COVID-19",
"tensorflow",
"license:mit",
"zero-shot-classification"
] | zero-shot-classification | false | digitalepidemiologylab | null | digitalepidemiologylab/covid-twitter-bert-v2-mnli | 262 | null | transformers | 3,223 | ---
language:
- en
thumbnail: https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png
tags:
- Twitter
- COVID-19
- text-classification
- pytorch
- tensorflow
- bert
license: mit
datasets:
- mnli
pipeline_tag: zero-shot-classification
widget:
- text: To stop the pandemic it is important that everyone turns up for their shots.
candidate_labels: health, sport, vaccine, guns
---
# COVID-Twitter-BERT v2 MNLI
## Model description
This model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.
The technique is based on [Yin et al.](https://arxiv.org/abs/1909.00161).
The article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.
The model is already finetuned on 400,000 generic logical tasks.
We can then use it as a zero-shot classifier by reformulating the classification task as a question.
Let's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.
The typical way would be to collect a few hundred pre-annotated tweets and organise them into two classes.
Then you would finetune the model on this.
With the zero-shot mnli-classifier, you can instead reformulate your question as "This text is about vaccines", and use this directly at inference time - without any training.
Find more info about the model on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Usage
Please note that how you formulate the question can give slightly different results.
Collecting a training set and finetuning on this will most likely give you better accuracy.
The easiest way to try this out is by using the Hugging Face pipeline.
This uses the default English template, which puts the text "This example is " in front of the text.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="digitalepidemiologylab/covid-twitter-bert-v2-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'To stop the pandemic it is important that everyone turns up for their shots.'
candidate_labels = ['health', 'sport', 'vaccine','guns']
hypothesis_template = 'This example is {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
```
## Training procedure
The model is finetuned on the 400k-example [MNLI task](https://cims.nyu.edu/~sbowman/multinli/).
## References
```bibtex
@article{muller2020covid,
title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
journal={arXiv preprint arXiv:2005.07503},
year={2020}
}
```
or
```
Martin Müller, Marcel Salathé, and Per E. Kummervold.
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter.
arXiv preprint arXiv:2005.07503 (2020).
```
|
facebook/wav2vec2-large | dd5604b2476ee0c9a0efbe9a08ceb5d85afb9b01 | 2021-07-06T03:18:27.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"speech",
"license:apache-2.0"
] | null | false | facebook | null | facebook/wav2vec2-large | 262 | null | transformers | 3,224 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
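As a rough sketch (not part of the original card), extracting latent speech representations from a 16kHz waveform could look like this; the dummy tensor stands in for real, properly resampled audio:

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large")
model.eval()

# One second of dummy audio at 16 kHz, shaped (batch, samples);
# real input should be a raw waveform resampled to 16 kHz.
input_values = torch.randn(1, 16000)

with torch.no_grad():
    outputs = model(input_values)

print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```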
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
jahz/DialoGPT-medium-FF8 | 8c6ea362c7d8a64d437fdb7d44242b0c819c58ca | 2021-09-20T09:08:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jahz | null | jahz/DialoGPT-medium-FF8 | 262 | null | transformers | 3,225 | ---
tags:
- conversational
---
# FF8 DialoGPT Model |
jalensmh/DialoGPT-medium-jalenbot | d2677e147d244b10148e8b7d771ca241fbd9f54d | 2021-09-01T22:55:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jalensmh | null | jalensmh/DialoGPT-medium-jalenbot | 262 | null | transformers | 3,226 | ---
tags:
- conversational
---
# jalenbot DialoGPT Model |
noobed/DialoGPT-small-astley | 45d0a1c6ff8e535a62eda9ac6707d320686a8da6 | 2021-08-30T13:39:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | noobed | null | noobed/DialoGPT-small-astley | 262 | null | transformers | 3,227 | ---
tags:
- conversational
---
# astley talks |
salesken/grammar_correction | ea2da84706074c51aafdc72c16d36f57ebd73c56 | 2021-05-23T12:26:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"salesken",
"license:apache-2.0"
] | text-generation | false | salesken | null | salesken/grammar_correction | 262 | 3 | transformers | 3,228 | ---
tags: salesken
license: apache-2.0
inference: false
---
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/grammar_correction")
model = AutoModelForCausalLM.from_pretrained("salesken/grammar_correction").to(device)
input_query="what be the reason for everyone leave the company"
query= "<|startoftext|> " + input_query + " ~~~"
input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=128,
temperature=0.9,
top_p= 0.7,
top_k = 5,
num_return_sequences=3)
corrected_sentences = []
for i in range(len(sample_outputs)):
r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
r = r.split('~~~')[1]
if r not in corrected_sentences:
corrected_sentences.append(r)
print(corrected_sentences)
```
|
neuralmagic/oBERT-3-downstream-dense-squadv1 | ae5c6167cb39b96f2e8bdceaca99dbd6a749a6d5 | 2022-06-20T11:36:51.000Z | [
"pytorch",
"en",
"dataset:squad",
"arxiv:2203.07259",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression"
] | null | false | neuralmagic | null | neuralmagic/oBERT-3-downstream-dense-squadv1 | 262 | null | null | 3,229 | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in `Table 3 - 3 Layers - 0% Sparsity`, and it represents an upper bound on the performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 76.62
F1 = 84.65
```
## BibTeX entry and citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
KES/TEC-English | 8e35c4b430e53e6a676442ad1f066cca046d9422 | 2022-07-29T01:53:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | KES | null | KES/TEC-English | 262 | null | transformers | 3,230 | ---
tags:
- translation
- text2text-generation
license: apache-2.0
---
# Trinidad English Creole to English Translator
This model utilises the pre-trained T5-base model. It was fine-tuned on a custom dataset for translation of Trinidad English Creole to English. This model will be updated periodically as more data is compiled. For more on Caribbean English Creole, check out the library [Caribe](https://pypi.org/project/Caribe/).
___
# Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KES/TEC-English")
model = AutoModelForSeq2SeqLM.from_pretrained("KES/TEC-English")
text = "Dem men doh kno wat dey doing wid d money"
inputs = tokenizer("tec:"+text, truncation=True, return_tensors='pt')
output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
correction=tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(correction)) #Correction: These men do not know what they are doing with the money.
```
___
|
deep-learning-analytics/triviaqa-t5-base | 009d299127f766de755753d12a63aca9fadbb787 | 2020-09-30T18:50:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"eng",
"dataset:triviaqa",
"transformers",
"triviaqa",
"t5-base",
"lm-head",
"question-answering",
"closed-book",
"pipeline:question-answering",
"autotrain_compatible"
] | question-answering | false | deep-learning-analytics | null | deep-learning-analytics/triviaqa-t5-base | 261 | null | transformers | 3,231 | ---
language: "eng"
tags:
- triviaqa
- t5-base
- pytorch
- lm-head
- question-answering
- closed-book
- t5
- pipeline:question-answering
datasets:
- triviaqa
widget:
- text: ["Mount Everest is found in which mountain range?","None"]
metrics:
- EM: 17
- Subset match: 24.5
---
# Closed Book Trivia-QA T5 base
## Model description
This is a T5-base model trained on the no-context TriviaQA dataset. The input to the model is a trivia-type question, and the model is tuned to search its memory for the answer and return it. The pretrained model used here was trained on the Common Crawl (C4) dataset. The model was trained for 135 epochs with a batch size of 32 and a learning rate of 1e-3. max_input_length is set to 25 and max_output_length to 10. The model attained an EM score of 17 and a Subset Match score of 24.5.
We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/build-a-trivia-bot-using-t5-transformer-345ff83205b6).
Test the model on Trivia Questions from the websites below:
https://www.triviaquestionss.com/easy-trivia-questions/
https://laffgaff.com/easy-trivia-questions-and-answers/
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
text = "Who directed the movie Jaws?"
preprocess_text = text.strip().replace("\n","")
tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device)
outs = model.generate(
tokenized_text,
max_length=10,
num_beams=2,
early_stopping=True
)
dec = [tokenizer.decode(ids) for ids in outs]
print("Predicted Answer: ", dec)
```
|
manishiitg/resume-ner | 8e4604e4ab8ff4f08629e4436e3b97f573cc1752 | 2020-07-21T11:52:03.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | manishiitg | null | manishiitg/resume-ner | 261 | null | transformers | 3,232 | Entry not found |
person123/DialoGPT-small-petergriffin | 7f2084a08a6ccf8062ee43be2df5b0645ce5d0b4 | 2021-08-28T04:43:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | person123 | null | person123/DialoGPT-small-petergriffin | 261 | null | transformers | 3,233 | ---
tags:
- conversational
---
# Peter Griffin DialoGPT Model |
sentence-transformers/facebook-dpr-question_encoder-multiset-base | b9b2a32fb410b3b231951784863b9acd50c74e57 | 2022-06-15T23:41:03.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/facebook-dpr-question_encoder-multiset-base | 261 | null | sentence-transformers | 3,234 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/facebook-dpr-question_encoder-multiset-base
This is a port of the [DPR Model](https://github.com/facebookresearch/DPR) to [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/facebook-dpr-question_encoder-multiset-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/facebook-dpr-question_encoder-multiset-base')
model = AutoModel.from_pretrained('sentence-transformers/facebook-dpr-question_encoder-multiset-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/facebook-dpr-question_encoder-multiset-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [DPR Model](https://github.com/facebookresearch/DPR) |
tprincessazula/Dialog-GPT-small-harrypotter | 8803bb557987871b0163bc1fc12cc373aaff0036 | 2021-11-16T20:45:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | tprincessazula | null | tprincessazula/Dialog-GPT-small-harrypotter | 261 | 1 | transformers | 3,235 | ---
tags:
- conversational
---
# Harry Potter Dialog-GPT Model |
alk/pegasus-scitldr | 8f0f25d9dd0cd2c088e98b31ed142598d961de56 | 2022-05-20T16:03:18.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:scitldr",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alk | null | alk/pegasus-scitldr | 261 | null | transformers | 3,236 | ---
tags:
- generated_from_trainer
datasets:
- scitldr
model-index:
- name: pegasus-scitldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-scitldr
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the scitldr dataset.
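Although the auto-generated card omits a usage snippet, a hedged summarization sketch for this checkpoint could look like the following (the abstract string is a placeholder):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("alk/pegasus-scitldr")
model = PegasusForConditionalGeneration.from_pretrained("alk/pegasus-scitldr")

abstract = "We propose a new method for extreme summarization of scientific papers ..."
inputs = tokenizer(abstract, truncation=True, return_tensors="pt")

# TLDR-style one-sentence summary
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```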
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
flooptherocket/DialogGPT-small-rick | 09b6e20b32a8691a6fd6f19da366effe2e57a232 | 2021-09-10T01:17:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | flooptherocket | null | flooptherocket/DialogGPT-small-rick | 260 | null | transformers | 3,237 | ---
tags: conversational
---
@Rick from Rick and Morty GPT-2 Conversation Model
---
|
openclimatefix/dgmr | 27e0fb0e7b8689c5f0b845fe943e2be97884b836 | 2022-06-20T08:04:07.000Z | [
"pytorch",
"transformers",
"nowcasting",
"forecasting",
"timeseries",
"remote-sensing",
"gan",
"license:mit"
] | null | false | openclimatefix | null | openclimatefix/dgmr | 260 | null | transformers | 3,238 | ---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
- gan
---
# DGMR
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
raj2002jain/DialoGPT-small-Light | 6b3204cd0e4a4768d27e38d6a63360d0dc47d66c | 2021-09-09T17:19:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | raj2002jain | null | raj2002jain/DialoGPT-small-Light | 260 | null | transformers | 3,239 | ---
tags:
- conversational
---
# Light Yagami DialoGPT Model |
worms3401/DialoGPT-small-Eleonora | 290d69bbdad095c41d0cddc12a6b274f7577bac7 | 2021-09-21T12:36:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | worms3401 | null | worms3401/DialoGPT-small-Eleonora | 260 | null | transformers | 3,240 | ---
tags:
- conversational
---
# Eleonora from worms3401 DialoGPT Model |
surdan/LaBSE_ner_nerel | 162232ae2fa4dc6e44b8ac5a57ac11a528a441a1 | 2022-04-12T13:17:34.000Z | [
"pytorch",
"bert",
"token-classification",
"ru",
"en",
"transformers",
"autotrain_compatible"
] | token-classification | false | surdan | null | surdan/LaBSE_ner_nerel | 260 | 2 | transformers | 3,241 | ---
language: ["ru", "en"]
tasks:
- token-classification
---
## About model
This model is based on [cointegrated/LaBSE-en-ru](https://huggingface.co/cointegrated/LaBSE-en-ru) and was trained on the [surdan/nerel_short](https://huggingface.co/datasets/surdan/nerel_short) dataset.
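A minimal inference sketch (our assumption; the linked Inference.ipynb below is the authoritative example):

```python
from transformers import pipeline

# Hedged sketch: standard token-classification pipeline over this checkpoint
ner = pipeline("token-classification",
               model="surdan/LaBSE_ner_nerel",
               aggregation_strategy="simple")

print(ner("Elon Musk founded SpaceX in 2002."))
```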
You can find more info:
- How the model was trained: [Train_model.ipynb](https://huggingface.co/surdan/LaBSE_ner_nerel/blob/main/Train_model.ipynb)
- An example of using the model: [Inference.ipynb](https://huggingface.co/surdan/LaBSE_ner_nerel/blob/main/Inference.ipynb) |
loubnabnl/apps-1.5B-model | e6775990f98181efde32f572be7f2d789284b337 | 2022-07-28T15:40:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | loubnabnl | null | loubnabnl/apps-1.5B-model | 260 | null | transformers | 3,242 | Entry not found |
big-kek/NeuroSkeptic | d1d9db8362b07d5e48c84517eef4686b82638c76 | 2022-07-15T20:46:09.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-generation | false | big-kek | null | big-kek/NeuroSkeptic | 260 | null | transformers | 3,243 | ---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-model
This model is a fine-tuned version of [facebook/opt-13b](https://huggingface.co/facebook/opt-13b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3965
- Accuracy: 0.5020
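Since the card lists no usage example, a hedged generation sketch (standard causal-LM pipeline; note the underlying OPT-13B weights are very large):

```python
from transformers import pipeline

# Hedged sketch: generic text-generation usage for this fine-tuned OPT checkpoint
generator = pipeline("text-generation", model="big-kek/NeuroSkeptic")

out = generator("The evidence for this claim is", max_new_tokens=40)
print(out[0]["generated_text"])
```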
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 72
- total_eval_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6363 | 1.0 | 3 | 3.2090 | 0.4082 |
| 2.8168 | 2.0 | 6 | 2.4805 | 0.4874 |
| 2.3529 | 3.0 | 9 | 2.4219 | 0.4915 |
| 2.1842 | 4.0 | 12 | 2.4023 | 0.4991 |
| 2.0765 | 5.0 | 15 | 2.3965 | 0.5020 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Doxophobia/DialoGPT-medium-celeste | 11a5ea05b7fc07639314779d9e6a63557e47c4bc | 2021-08-26T18:47:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Doxophobia | null | Doxophobia/DialoGPT-medium-celeste | 259 | null | transformers | 3,244 | ---
tags:
- conversational
---
# Celestia Ludenburg DialoGPT Model |
Martian/Neo-GPT-Title-Generation-Electric-Car | 358c5aab98e8058fd6aee12396142ebd1e8d970f | 2021-05-23T08:56:08.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers"
] | text-generation | false | Martian | null | Martian/Neo-GPT-Title-Generation-Electric-Car | 259 | null | transformers | 3,245 | ---
language:
- en
widget:
- text: Tesla range
- text: Nissan Leaf is
- text: Tesla is
- text: The best electric car
---
# Neo-GPT-Title-Generation-Electric-Car
Title generator based on Neo-GPT 125M, fine-tuned on a dataset of 39k URL titles. All URLs were selected from the Google top-10 results for a list of keywords about "electric car" and "electric car for sale".
# Pipeline example
```python
import pandas as pd
from transformers import AutoModelForMaskedLM
from transformers import GPT2Tokenizer, TrainingArguments, AutoModelForCausalLM, AutoConfig
model = AutoModelForCausalLM.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car')
tokenizer = GPT2Tokenizer.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car', bos_token='<|startoftext|>',
eos_token='<|endoftext|>', pad_token='<|pad|>')
prompt = "<|startoftext|> Electric car"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, top_k=100, min_length=30, max_length=150, top_p=0.90, num_return_sequences=20)
list_title_gen = []
for i, sample_output in enumerate(gen_tokens):
title = tokenizer.decode(sample_output, skip_special_tokens=True)
list_title_gen.append(title)
# Keep only the part of each generated title before the first separator
cleaned_titles = []
for title in list_title_gen:
    for sep in (' | ', ' - ', ' — '):
        title = title.split(sep)[0]
    cleaned_titles.append(title)
list_title_gen = cleaned_titles

# Remove stray characters left over from generation
list_title_gen = [sub.replace('�', ' ').replace('\r', ' ').replace('\n', ' ').replace('\t', ' ').replace('\xa0', '') for sub in list_title_gen]
list_title_gen = [sub if sub != '<|startoftext|> Electric car' else '' for sub in list_title_gen]
for i in list_title_gen:
print(i)
```
# Todo
- Improve the quality of the training sample
- Add more data
|
abbas/gpt2-horror-stories | 424e6a35d352c7d2ed1fe64d8f9a863e88224762 | 2021-05-21T11:50:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | abbas | null | abbas/gpt2-horror-stories | 259 | null | transformers | 3,246 | Entry not found |
rovai/CARRIE | 810b51a42a0c5f7a05e405574b09e4c363b4c975 | 2021-12-01T18:10:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rovai | null | rovai/CARRIE | 259 | null | transformers | 3,247 | ---
tags:
- conversational
---
# CARRIE |
Helsinki-NLP/opus-tatoeba-en-ja | f5aa4b0090e1eb2b4424e182a9ee36c358078ca5 | 2021-10-12T08:16:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ja",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-en-ja | 258 | null | transformers | 3,248 | ---
language:
- en
- ja
tags:
- translation
license: apache-2.0
---
### en-ja
* source group: English
* target group: Japanese
* OPUS readme: [eng-jpn](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): jpn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.eng-jpn | 15.2 | 0.258 | 10000 | 99206 | 1.000 |
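## Usage

A minimal translation sketch with the standard Transformers MarianMT classes; the checkpoint id below is assumed from this card's Hub path, and the generation defaults are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-en-ja"  # assumed Hub id for this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a batch of source sentences, translate, and decode
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```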
### System Info:
- hf_name: en-ja
- source_languages: eng
- target_languages: jpn
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ja']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Japanese', {'jpn', 'jpn_Latn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hira', 'jpn_Hang', 'jpn_Bopo', 'jpn_Hani'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-jpn
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: jpn
- chrF2_score: 0.258
- bleu: 15.2
- src_name: English
- tgt_name: Japanese
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: ja
- prefer_old: False
- short_pair: en-ja
- helsinki_git_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-12-11:13 |
asahi417/lmqg-t5-small-squad-multitask | 96634990bd689cefbb71ef9855d61ebfd57de8c5 | 2022-06-01T11:13:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"question answer generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-t5-small-squad-multitask | 258 | null | transformers | 3,249 | ---
language: en
tags:
- question generation
- question answer generation
license: cc-by-4.0
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
- text: "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress."
example_title: "Answer Extraction Example 1"
- text: "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. <hl>"
example_title: "Answer Extraction Example 2"
pipeline_tag: text2text-generation
---
# T5 SMALL fine-tuned for English Question Generation & Answer Extraction
T5 SMALL model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.
This model is fine-tuned on question generation & answer extraction jointly.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** t5-small
**Language:** English (en)
**Downstream-task:** Question Generation, Answer Extraction
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-t5-small-squad-multitask'
pipe = pipeline("text2text-generation", model_path)
# Question Generation
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
# Answer Extraction
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.'
# highlight a sentence where the answer should be extracted
sentence = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
input_text = paragraph.replace(sentence, '{0} {1} {0}'.format(highlight_token, sentence))
input_text = 'extract answers: {}'.format(input_text) # add task specific prefix; the sentence is already highlighted above
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'Etta James'}]
```
## Evaluations
Evaluation on the test set of [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 24.17 | 51.11 | 25.58 | 90.17 | 63.71 |
- [metric file](https://huggingface.co/asahi417/lmqg-t5-small-squad-multitask/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-t5-small-squad-multitask/raw/main/trainer_config.json), and fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
muhardianab/DialoGPT-small-theoffice | 2a7c25a508ea6d4a4d939d3930f05c4a38b572c0 | 2021-09-12T16:52:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | muhardianab | null | muhardianab/DialoGPT-small-theoffice | 258 | null | transformers | 3,250 | ---
tags:
- conversational
---
# The Office - Pam DialoGPT Model |
pashin/DialoGPT-small-ironman-2 | 025ed7fc7fa106a4f8c0bc9d2d8a7e6596b01ba5 | 2021-10-08T16:51:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pashin | null | pashin/DialoGPT-small-ironman-2 | 258 | null | transformers | 3,251 | ---
tags:
- conversational
---
# Iron Man 2 DialoGPT Model |
ppn/DialoGPT-small-harrypotter | d991e690a0d14c3d224aee2b8c6e26da555a0557 | 2021-12-20T14:22:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ppn | null | ppn/DialoGPT-small-harrypotter | 258 | null | transformers | 3,252 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Helsinki-NLP/opus-mt-de-cs | 683666e07ca027d76af9ac23c0902b29084a0d18 | 2021-09-09T21:30:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"cs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-cs | 257 | null | transformers | 3,253 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-cs
* source languages: de
* target languages: cs
* OPUS readme: [de-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cs/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.de.cs | 22.4 | 0.499 |
| news-test2008.de.cs | 20.2 | 0.487 |
| newstest2009.de.cs | 20.9 | 0.485 |
| newstest2010.de.cs | 22.7 | 0.510 |
| newstest2011.de.cs | 21.2 | 0.487 |
| newstest2012.de.cs | 20.9 | 0.479 |
| newstest2013.de.cs | 23.0 | 0.500 |
| newstest2019-decs.de.cs | 22.5 | 0.495 |
| Tatoeba.de.cs | 42.2 | 0.625 |
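## Usage

A minimal translation sketch, analogous to other Marian OPUS-MT checkpoints (the generation settings are illustrative assumptions):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-cs"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate one German sentence into Czech
batch = tokenizer(["Das Wetter ist heute schön."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```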
|
google/roberta2roberta_L-24_wikisplit | 329a94ffa643bd55266023b0c5648a9222de68a0 | 2020-12-11T21:43:19.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"arxiv:1907.12461",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/roberta2roberta_L-24_wikisplit | 257 | null | transformers | 3,254 | ---
language: en
license: apache-2.0
---
# Roberta2Roberta_L-24_wikisplit EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_cnndm/1).
The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder
and decoder and fine-tuned on sentence splitting on the [WikiSplit](https://github.com/google-research-datasets/wiki-split) dataset.
Disclaimer: The model card has been written by the Hugging Face team.
## How to use
You can use this model for sentence splitting, *e.g.*
**IMPORTANT**: The model was not trained on the `"` (double quotation mark) character, so before tokenizing the text
it is advised to replace all `"` (double quotation marks) with two `'` (single quotation marks).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")
long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""
input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it. He decides to open Bob's Burgers for customers who were planning on going to Lobsterfest.
```
|
lonewanderer27/YoshinoriBot | aa72a893482c8967613a6c02d6783bc75d4e947a | 2022-02-08T15:50:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | lonewanderer27 | null | lonewanderer27/YoshinoriBot | 257 | null | transformers | 3,255 | ---
tags:
- conversational
---
# Camp Buddy - Yoshinori - DialoGPT Small Model |
mluengas/DialogGPT-small-michaelscott | 04f1acf44125603c3b4fab84b1db98da627fdd6a | 2021-08-29T00:02:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | mluengas | null | mluengas/DialogGPT-small-michaelscott | 257 | null | transformers | 3,256 | ---
tags:
- conversational
---
# Michael Scott DialoGPT model |
monologg/koelectra-small-finetuned-naver-ner | 2d5cad7fcba17a6e8426939684aef04904acbbd6 | 2020-05-13T03:53:39.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | monologg | null | monologg/koelectra-small-finetuned-naver-ner | 257 | null | transformers | 3,257 | Entry not found |
neuralspace-reverie/indic-transformers-hi-bert | edf588344b36ff58bef70cbddf5ed4208e122ccc | 2021-05-20T01:35:03.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"BERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-hi-bert | 257 | 1 | transformers | 3,258 | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- BERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi BERT
## Model description
This is a BERT language model pre-trained on a ~3 GB monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-bert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-bert')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model was trained using `PyTorch`, hence the use of the `pytorch_model.bin` weights file is recommended. The h5 file for `TensorFlow` was generated manually using the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
redbloodyknife/DialoGPT-medium-shayo | ac021b9165241406df7afd2960eebf9c5961a4e1 | 2021-12-23T12:17:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | redbloodyknife | null | redbloodyknife/DialoGPT-medium-shayo | 257 | null | transformers | 3,259 | ---
tags:
- conversational
---
# Shayo Bot by Shogun
AI chatbot test based on GPT-2 and DialoGPT-medium by Microsoft.
shoguπ#9999 |
rovai/chatbotmedium2 | 605407014890c1123a4934a3511d14986a6cfd8e | 2021-12-01T15:36:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rovai | null | rovai/chatbotmedium2 | 257 | null | transformers | 3,260 | ---
tags:
- conversational
---
# chatbot2 |
bhadresh-savani/electra-base-discriminator-finetuned-conll03-english | 7ec8067a8a665f156fc4f97859d169842a820209 | 2022-04-08T17:21:19.000Z | [
"pytorch",
"tf",
"jax",
"electra",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | bhadresh-savani | null | bhadresh-savani/electra-base-discriminator-finetuned-conll03-english | 257 | null | transformers | 3,261 | ---
language:
- en
tags:
- token-classification
- pytorch
license: apache-2.0
datasets:
- conll2003
metrics:
- Accuracy
- F1 Score
- Precision
- Recall
---
# Electra Base Discriminator conll03 English
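# Usage:

A minimal inference sketch with the Transformers token-classification pipeline; the `aggregation_strategy="simple"` setting, which merges word pieces into whole entity spans, is an assumption rather than part of this card.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bhadresh-savani/electra-base-discriminator-finetuned-conll03-english",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```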
# Results:
```
***** predict metrics *****
predict_accuracy = 0.9813
predict_f1 = 0.9137
predict_loss = 0.1251
predict_precision = 0.9098
predict_recall = 0.9177
predict_runtime = 0:00:10.11
predict_samples_per_second = 341.368
predict_steps_per_second = 42.696
``` |
MingZhong/DialogLED-base-16384 | a7eb2295b05a9127a906f930e9c215f4fa38a1db | 2022-01-05T09:15:06.000Z | [
"pytorch",
"led",
"text2text-generation",
"arxiv:2109.02492",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | MingZhong | null | MingZhong/DialogLED-base-16384 | 256 | 2 | transformers | 3,262 | [DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492).
## Introduction
DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data. This is the base version of DialogLED; the input length was limited to 16,384 tokens in the pre-training phase.
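## Usage

A minimal summarization sketch, assuming the checkpoint loads with the standard LED classes; the dialogue format and generation settings below are illustrative assumptions.

```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("MingZhong/DialogLED-base-16384")
model = LEDForConditionalGeneration.from_pretrained("MingZhong/DialogLED-base-16384")

# A toy two-speaker dialogue; real inputs can be up to 16,384 tokens long
dialogue = "#Person1#: How was the meeting? #Person2#: Good, we agreed to ship the release next week."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```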
## Finetuning for Downstream Tasks
Please refer to [our GitHub page](https://github.com/microsoft/DialogLM). |
castorini/mdpr-tied-pft-msmarco | 2319a582b387c09a3b6b094a220c4dcb6b0c6617 | 2021-12-11T19:24:36.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/mdpr-tied-pft-msmarco | 256 | null | transformers | 3,263 | Entry not found |
m3hrdadfi/wav2vec2-large-xlsr-persian-v2 | 599d7361d87b6ea3ca5d64a993e8ad8c942c48eb | 2021-07-06T10:55:39.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-persian-v2 | 256 | null | transformers | 3,264 | ---
language: fa
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 4024
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2/resolve/main/sample4024.flac
- label: Common Voice sample 4084
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2/resolve/main/sample4084.flac
model-index:
- name: XLSR Wav2Vec2 Persian (Farsi) V2 by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fa
type: common_voice
args: fa
metrics:
- name: Test WER
type: wer
value: 31.92
---
# Wav2Vec2-Large-XLSR-53-Persian V2
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install hazm
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import hazm
import re
import string
import IPython.display as ipd
_normalizer = hazm.Normalizer()
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?",
".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„',
'ā', 'š',
# "ء",
]
# In case of farsi
chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits)
chars_to_mapping = {
'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی',
'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی",
"ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع",
"ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه",
'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش",
'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
# "ها": " ها", "ئ": "ی",
"a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ",
"g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ",
"m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ",
"s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ",
"y": " وای ", "z": " زد ",
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = _normalizer.normalize(text)
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device)
dataset = load_dataset("common_voice", "fa", split="test[:1%]")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: عجم زنده کردم بدین پارسی
predicted: عجم زنده کردم بدین پارسی
---
reference: لباس هایم کی آماده خواهند شد
predicted: لباس خایم کی آماده خواهند شد
---
reference: با مهان همنشین شدم
predicted: با مهان همنشین شدم
---
reference: یکی از بهترین فیلم هایی بود که در این سال ها دیدم
predicted: یکی از بهترین فیلمهایی بود که در این سالها دیدم
---
reference: اون خیلی بد ماساژ میده
predicted: اون خیلی بد ماساژ میده
---
reference: هنوزم بزرگترین دستاورد دولت روحانی اینه که رییسی رییسجمهور نشد
predicted: هنوزم بزرگترین دستآوردار دولت روانیاینه که ریسی ریسیومرو نشد
---
reference: واسه بدنسازی آماده ای
predicted: واسه بعدنسافی آماده ای
---
reference: خدای من شماها سالمین
predicted: خدای من شما ها سالمین
---
reference: بهشون ثابت میشه که دروغ نگفتم
predicted: بهشون ثابت میشه که دروغ مگفتم
---
reference: آیا ممکن است یک پتو برای من بیاورید
predicted: سف کمیتخ لظا
---
reference: نزدیک جلو
predicted: رزیک جلو
---
reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد
predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد
---
reference: وقتی نیاز است که یک چهره دوستانه بیابند
predicted: وقتی نیاز است یک چهره دوستانه بیابند
---
reference: ممکنه رادیواکتیوی چیزی باشه
predicted: ممکنه به آدیوتیوی چیزی باشه
---
reference: دهنتون رو ببندید
predicted: دهن جن رو ببندید
---
reference: پاشیم بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده
predicted: پاشین بریم قند و شکر و روغنمون رو بگیریم تا تموم نشده
---
reference: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از ناپیکس بکنیم
predicted: اما قبل از تمام کردن بحث تاریخی باید ذکری هم از نایپکس بکنیم
---
reference: لطفا کپی امضا شده قرارداد را بازگردانید
predicted: لطفا کپی امضال شده قرار داد را باز گردانید
---
reference: خیلی هم چیز مهمی نیست
predicted: خیلی هم چیز مهمی نیست
---
reference: شایعه پراکن دربارهاش دروغ و شایعه می سازد
predicted: شایه پراکن دربارهاش دروغ و شایعه می سازد
---
```
## Evaluation
The model can be evaluated as follows on the Persian (Farsi) test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import hazm
import re
import string
_normalizer = hazm.Normalizer()
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?",
".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„',
'ā', 'š',
# "ء",
]
# In case of farsi
chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits)
chars_to_mapping = {
'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی',
'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی",
"ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع",
"ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه",
'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش",
'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
# "ها": " ها", "ئ": "ی",
"a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ",
"g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ",
"m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ",
"s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ",
"y": " وای ", "z": " زد ",
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = _normalizer.normalize(text)
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v2").to(device)
dataset = load_dataset("common_voice", "fa", split="test")
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result:**
- WER: 31.92%
## Training
The Common Voice `train`, `validation` datasets were used for training.
You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_persian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Persian--Vmlldzo1NjY1NjU?accessToken=pspukt0liicopnwe93wo1ipetqk0gzkuv8669g00wc6hcesk1fh0rfkbd0h46unk)
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb) |
Parth/boolean | 2807b08ae50bce9b69c426b756201f47709bdfd2 | 2021-06-23T03:46:27.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Parth | null | Parth/boolean | 255 | null | transformers | 3,265 | Entry not found |
asahi417/lmqg-t5-base-squad-multitask | 98fbcec0dcc5af5b8eb2e7fd8d88ec98a48c24c7 | 2022-06-01T11:13:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"question answer generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-t5-base-squad-multitask | 255 | null | transformers | 3,266 | ---
language: en
tags:
- question generation
- question answer generation
license: cc-by-4.0
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
- text: "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress."
example_title: "Answer Extraction Example 1"
- text: "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. <hl>"
example_title: "Answer Extraction Example 2"
pipeline_tag: text2text-generation
---
# T5 BASE fine-tuned for English Question Generation & Answer Extraction
T5 BASE model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.
This model is fine-tuned on question generation & answer extraction jointly.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** t5-base
**Language:** English (en)
**Downstream-task:** Question Generation, Answer Extraction
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-t5-base-squad-multitask'
pipe = pipeline("text2text-generation", model_path)
# Question Generation
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
# Answer Extraction
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.'
# highlight a sentence where the answer should be extracted
sentence = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
input_text = paragraph.replace(sentence, '{0} {1} {0}'.format(highlight_token, sentence))
input_text = 'extract answers: {}'.format(input_text) # add task specific prefix; the sentence is already highlighted above
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'Etta James'}]
```
## Evaluations
Evaluation on the test set of [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 26.00 | 53.40 | 26.99 | 90.57 | 64.71 |
- [metric file](https://huggingface.co/asahi417/lmqg-t5-base-squad-multitask/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-t5-base-squad-multitask/raw/main/trainer_config.json), and fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
coderpotter/T5-for-Adversarial-Paraphrasing | 40c7cac5f28a61cde83edce14f66ca4faedf4a80 | 2021-07-27T17:12:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | coderpotter | null | coderpotter/T5-for-Adversarial-Paraphrasing | 255 | 3 | transformers | 3,267 | This model is a paraphraser designed for the Adversarial Paraphrasing Task described and used in this paper: https://aclanthology.org/2021.acl-long.552/.
Please refer to `nap_generation.py` in the GitHub repository for ways to better utilize this model using top-k and top-p sampling. The demo on Hugging Face will output only one sentence, which will most likely be the same as the input sentence, because the model is meant to generate with beam search and sampling (a minimal sketch follows below).
Github repository: https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt.git
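A minimal generation sketch using top-k/top-p sampling; the raw-sentence input format and the sampling values below are assumptions, so check `nap_generation.py` for the authors' exact prompt and parameters.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("coderpotter/T5-for-Adversarial-Paraphrasing")
model = AutoModelForSeq2SeqLM.from_pretrained("coderpotter/T5-for-Adversarial-Paraphrasing")

# Input format is an assumption; see nap_generation.py for the authors' exact usage
text = "My coworker used a financial planner to help choose his stocks."
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, do_sample=True, top_k=120, top_p=0.95,
                         num_return_sequences=5, max_length=64)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```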
Please cite the following if you use this model:
```bib
@inproceedings{nighojkar-licato-2021-improving,
title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task",
author = "Nighojkar, Animesh and
Licato, John",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.552",
pages = "7106--7116",
abstract = "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other. However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax. Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair? We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase identification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases. These sentence pairs can then be used both to test paraphrase identification models (which get barely random accuracy) and then improve their performance. To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy. We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.",
}
``` |
funnel-transformer/medium | 1d0927808deebdda31e56ddf4d1cfb6d665fda33 | 2020-12-11T21:40:38.000Z | [
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | funnel-transformer | null | funnel-transformer/medium | 255 | null | transformers | 3,268 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer medium model (B6-3x2-3x2 with decoder)
Pretrained model on the English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = TFFunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
katanaml/layoutlmv2-finetuned-cord | 0a6e15510dd0b0af0ab2d443d93e08e67bbc20f4 | 2022-03-13T22:01:58.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"dataset:katanaml/cord",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | katanaml | null | katanaml/layoutlmv2-finetuned-cord | 255 | 1 | transformers | 3,269 | ---
license: cc-by-nc-sa-4.0
datasets:
- katanaml/cord
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the CORD dataset.
## Model description
Model implementation code [Sparrow](https://github.com/katanaml/sparrow)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
IDEA-CCNL/Randeng-BART-139M-SUMMARY | abd454ff5b874b107a3e2640a9b42afda0384335 | 2022-04-27T02:37:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"zh",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | IDEA-CCNL | null | IDEA-CCNL/Randeng-BART-139M-SUMMARY | 255 | 2 | transformers | 3,270 | ---
language:
- zh
license: apache-2.0
inference: true
widget:
- text: 'summary: 在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!'
---
# Randeng-BART-139M-SUMMARY model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
The 139M-parameter Randeng-BART large model was trained on 180G of Chinese data for 3 days on 8 A100 (40G) GPUs; it uses a standard transformer structure and was fine-tuned on the summarization downstream task.
## Usage
```python
from transformers import BartForConditionalGeneration, AutoTokenizer, Text2TextGenerationPipeline
import torch
tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Randeng-BART-139M-SUMMARY')
model=BartForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-BART-139M-SUMMARY')
text = 'summary:在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!'
text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
print(text2text_generator(text, max_length=50, do_sample=False))
```
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
agne/jobBERT-de | e6dea171601304865a85e7cdc1f4d23bd2ed8cec | 2022-06-03T13:53:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"de",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | agne | null | agne/jobBERT-de | 255 | null | transformers | 3,271 | ---
language: de
license: cc-by-nc-sa-4.0
---
## jobBERT-de
This is a domain-adapted transformer-based language model for German-speaking job advertisements.
It is based on [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) and adapted to the domain of job advertisements through continued in-domain pretraining on 4 million German-speaking job ads from Switzerland, 1990-2020 (5.9 GB of data). Empty spots in the vocabulary of the base model were filled with the most frequent domain-specific words, subtokens and abbreviations.
### Overview
**Architecture:** BERT base <br>
**Language:** German <br>
**Domain:** Job advertisements <br>
**See also:** [agne/jobGBERT](https://huggingface.co/agne/jobGBERT)
### License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (cc-by-nc-sa-4.0)
Please use the following citation when using our model:
```bibtex
@inproceedings{gnehm-etal-2022-evaluation,
title = "Evaluation of Transfer Learning and Domain Adaptation for Analyzing German-Speaking Job Advertisements",
author = "Gnehm, Ann-Sophie and
Bühlmann, Eva and
Clematide, Simon",
booktitle = "Proceedings of the 13th Language Resources and Evaluation Conference",
    month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
}
```
### Intended usage and limitations
You can use the model for masked language modeling, but it's intended to be fine-tuned on a downstream task.
The model is trained on German-speaking job ads from Switzerland. It inherits the potential bias of its base model and may contain biases and stereotypes common in job advertisements.
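A minimal masked-language-modeling sketch; the example sentence is illustrative, and the `[MASK]` token follows the bert-base-german-cased convention.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="agne/jobBERT-de")
print(fill_mask("Wir suchen eine erfahrene [MASK] für unser Team in Zürich."))
```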
### About us
Ann-Sophie Gnehm: `gnehm [at] soziologie.uzh.ch` <br>
Eva Bühlmann: `bühlmann [at] soziologie.uzh.ch` <br>
Simon Clematide: `simon.clematide [at] cl.uzh.ch` <br>
The [Swiss Job Market Monitor](https://www.stellenmarktmonitor.uzh.ch/en.html) aims at systematically expanding scientific knowledge about the job market and improving labour market transparency by informing the general public about current developments on the job market.
**Get in touch:** [Mail](mailto:[email protected]) [Website](https://www.stellenmarktmonitor.uzh.ch/en.html) [Zenodo](https://doi.org/10.5281/zenodo.6497853) [SWISSUbase](https://www.swissubase.ch/de/catalogue/studies/11998/18157/overview)
|
pszemraj/grammar-synthesis-large | 6daffc7582df68c3ca96e882c7191be2051c84aa | 2022-07-22T08:37:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:jfleg",
"arxiv:2107.06751",
"transformers",
"grammar",
"spelling",
"punctuation",
"error-correction",
"grammar synthesis",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | pszemraj | null | pszemraj/grammar-synthesis-large | 255 | 1 | transformers | 3,272 | ---
license: cc-by-nc-sa-4.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
datasets:
- jfleg
widget:
- text: "i can has cheezburger"
example_title: "cheezburger"
- text: "There car broke down so their hitching a ride to they're class."
example_title: "compound-1"
- text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s
i again tort watfettering an we have estimated the trend an
called wot to be called sthat of exty right now we can and look at
wy this should not hare a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty"
example_title: "Transcribed Audio Example 2"
- text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
example_title: "incorrect word choice (context)"
- text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording
an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about
ta ohow to remove trents in these nalitives from time series"
example_title: "lowercased audio transcription output"
- text: "Frustrated, the chairs took me forever to set up."
example_title: "dangling modifier"
- text: "I would like a peice of pie."
example_title: "miss-spelling"
- text: "Which part of Zurich was you going to go hiking in when we were there for the first time together? ! ?"
example_title: "chatbot on Zurich"
parameters:
max_length: 128
min_length: 4
num_beams: 4
repetition_penalty: 1.21
length_penalty: 1
early_stopping: True
---
# grammar-synthesis-large - beta
A fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset.
usage in Python (after `pip install transformers`):
```python
from transformers import pipeline
corrector = pipeline(
'text2text-generation',
'pszemraj/grammar-synthesis-large',
)
raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
give it a spin in Colab at [this notebook](https://colab.research.google.com/gist/pszemraj/9b810e38a4d3bc766834df921818d782/scratchpad.ipynb)
## Model description
The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on a potentially grammatically incorrect text **that could have a lot of mistakes** with the important qualifier of **it does not semantically change text/information that IS grammatically correct.**
Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
## Limitations
- dataset: `cc-by-nc-sa-4.0`
- model: `apache-2.0`
- this is **still a work-in-progress** and while probably useful for "single-shot grammar correction" in a lot of cases, **give the outputs a glance for correctness ok?**
## Use Cases
Obviously, this section is quite general as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases:
1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) output (several of the widget examples above are literally ASR transcripts) or something like handwriting OCR.
- To be investigated further, depending on what model/system is used it _might_ be worth it to apply this after OCR on typed characters.
2. Correcting/infilling text generated by text generation models to be cohesive/remove obvious errors that break the conversation immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B).
> An example of this model running on CPU with beam search:
```
original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```
_Note: that I have some other logic that removes any periods at the end of the final sentence in this chatbot setting [to avoid coming off as passive aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)_
3. Somewhat related to #2 above, fixing/correcting so-called [tortured phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways that text was generated by a language model. _Note that SOME of these are not fixed, especially as they venture into domain-specific terminology (e.g. "irregular timberland" instead of "Random Forest")._
## Training and evaluation data
More information needed 😉
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
avinashshrangee/DialoGPT-small-Ricky | a0480eb06a21d5615956464cf233148a2749db8f | 2022-02-17T09:14:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | avinashshrangee | null | avinashshrangee/DialoGPT-small-Ricky | 254 | null | transformers | 3,273 | ---
tags:
- conversational
---
# Rickbot DialoGPT Model |
dbmdz/t5-base-conll03-english | 60f2a42bb3259103b324383c11f694520d07129c | 2022-01-12T18:41:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:conll2003",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | dbmdz | null | dbmdz/t5-base-conll03-english | 254 | 1 | transformers | 3,274 | ---
language: en
license: mit
datasets:
- conll2003
widget:
- text: My name is Clara Clever and I live in Berkeley , California .
---
# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003)
In this repository, we open source a T5 Base model that was fine-tuned on the official CoNLL-2003 NER dataset.
We use the great [TANL library](https://github.com/amazon-research/tanl) from Amazon for fine-tuning the model.
The exact approach of fine-tuning is presented in the "TANL: Structured Prediction as Translation between Augmented Natural Languages"
paper from Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto.
# Fine-Tuning
We use the same hyper-parameter settings as the official implementation, with one minor change. Instead of using 8 V100 GPUs, we trained the model
on one V100 GPU and used gradient accumulation. The slightly modified configuration file (`config.ini`) then looks like:
```ini
[conll03]
datasets = conll03
model_name_or_path = t5-base
num_train_epochs = 10
max_seq_length = 256
max_seq_length_eval = 512
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
do_train = True
do_eval = True
do_predict = True
gradient_accumulation_steps = 8
```
It took around 2 hours to fine-tune that model on the 14,041 training sentences of CoNLL-2003 dataset.
# Evaluation
On the development set, the following evaluation results could be achieved:
```json
{
"entity_precision": 0.9536446086664427,
"entity_recall": 0.9555705149781218,
"entity_f1": 0.9546065904505716,
"entity_precision_no_type": 0.9773261672824992,
"entity_recall_no_type": 0.9792998990238977,
"entity_f1_no_type": 0.9783120376597176
}
```
The evaluation results on the test set looks like:
```json
{
"entity_precision": 0.912182296231376,
"entity_recall": 0.9213881019830028,
"entity_f1": 0.9167620893155995,
"entity_precision_no_type": 0.953900087642419,
"entity_recall_no_type": 0.9635269121813032,
"entity_f1_no_type": 0.9586893332158901
}
```
To summarize: this model achieves an F1-score of 95.46% on the development set and 91.68% on the test set. The paper reported an F1-score of 91.7%.
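# Usage
Below is a minimal inference sketch using the standard text-to-text API. Note that TANL frames NER as translation into an augmented natural language, so the model generates the input sentence with entities marked inline; the exact bracketing follows the TANL decoding conventions, and the example sentence is taken from the widget above:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("dbmdz/t5-base-conll03-english")
model = T5ForConditionalGeneration.from_pretrained("dbmdz/t5-base-conll03-english")

sentence = "My name is Clara Clever and I live in Berkeley , California ."
input_ids = tokenizer(sentence, return_tensors="pt").input_ids

# The model generates the sentence with inline entity annotations (TANL format)
outputs = model.generate(input_ids, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```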
# License
The model is licensed under [MIT](https://choosealicense.com/licenses/mit/).
# Acknowledgments
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
milayue/neosh-bot1 | 38bce478b4f6379191b8f6480e03d6c41755ccd6 | 2021-08-31T10:43:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | milayue | null | milayue/neosh-bot1 | 254 | 1 | transformers | 3,275 | ---
tags:
- conversational
---
# Neosh Bot1
This is a simplified version. Hopefully will train a more complex model in the future. |
st1992/bert-restore-punctuation | 0ddfc9ac0e6ecb7c72cbd2e3f18d69675de2d125 | 2021-11-20T08:14:39.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:yelp_polarity",
"transformers",
"punctuation",
"license:mit",
"autotrain_compatible"
] | token-classification | false | st1992 | null | st1992/bert-restore-punctuation | 254 | null | transformers | 3,276 | ---
language:
- en
tags:
- punctuation
license: mit
datasets:
- yelp_polarity
metrics:
- f1
---
# ✨ bert-restore-punctuation
This is a bert-base-uncased model fine-tuned for punctuation restoration on [Yelp Reviews](https://www.tensorflow.org/datasets/catalog/yelp_polarity_reviews).
The model predicts the punctuation and upper-casing of plain, lower-cased text. An example use case is ASR output, or other cases where text has lost its punctuation.
This model is intended for direct use as a punctuation restoration model for the general English language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks.
The model restores the following punctuation marks -- **[! ? . , - : ; ' ]**
The model also restores the upper-casing of words.
-----------------------------------------------
## 🚋 Usage
**Below is a quick way to get up and running with the model.**
1. First, install the package.
```bash
pip install rpunct
```
2. Sample python code.
```python
from rpunct import RestorePuncts
# The default language is 'english'
rpunct = RestorePuncts()
rpunct.punctuate("""in 2018 cornell researchers built a high-powered detector that in combination with an algorithm-driven process called ptychography set a world record
by tripling the resolution of a state-of-the-art electron microscope as successful as it was that approach had a weakness it only worked with ultrathin samples that were
a few atoms thick anything thicker would cause the electrons to scatter in ways that could not be disentangled now a team again led by david muller the samuel b eckert
professor of engineering has bested its own record by a factor of two with an electron microscope pixel array detector empad that incorporates even more sophisticated
3d reconstruction algorithms the resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves""")
# Outputs the following:
# In 2018, Cornell researchers built a high-powered detector that, in combination with an algorithm-driven process called Ptychography, set a world record by tripling the
# resolution of a state-of-the-art electron microscope. As successful as it was, that approach had a weakness. It only worked with ultrathin samples that were a few atoms
# thick. Anything thicker would cause the electrons to scatter in ways that could not be disentangled. Now, a team again led by David Muller, the Samuel B.
# Eckert Professor of Engineering, has bested its own record by a factor of two with an Electron microscope pixel array detector empad that incorporates even more
# sophisticated 3d reconstruction algorithms. The resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves.
```
**This model works on arbitrarily large text in English language and uses GPU if available.**
-----------------------------------------------
## 📡 Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of text samples|
| -------- | ----------------- |
| English | 560,000 |
We found the best convergence around _**3 epochs**_, which is what is presented here and available for download.
-----------------------------------------------
## 🎯 Accuracy
The fine-tuned model obtained the following accuracy on 45,990 held-out text samples:
| Accuracy | Overall F1 | Eval Support |
| -------- | ---------------------- | ------------------- |
| 91% | 90% | 45,990
Below is a breakdown of the performance of the model by each label:
| label | precision | recall | f1-score | support|
| --------- | -------------|-------- | ----------|--------|
| **!** | 0.45 | 0.17 | 0.24 | 424
| **!+Upper** | 0.43 | 0.34 | 0.38 | 98
| **'** | 0.60 | 0.27 | 0.37 | 11
| **,** | 0.59 | 0.51 | 0.55 | 1522
| **,+Upper** | 0.52 | 0.50 | 0.51 | 239
| **-** | 0.00 | 0.00 | 0.00 | 18
| **.** | 0.69 | 0.84 | 0.75 | 2488
| **.+Upper** | 0.65 | 0.52 | 0.57 | 274
| **:** | 0.52 | 0.31 | 0.39 | 39
| **:+Upper** | 0.36 | 0.62 | 0.45 | 16
| **;** | 0.00 | 0.00 | 0.00 | 17
| **?** | 0.54 | 0.48 | 0.51 | 46
| **?+Upper** | 0.40 | 0.50 | 0.44 | 4
| **none** | 0.96 | 0.96 | 0.96 |35352
| **Upper** | 0.84 | 0.82 | 0.83 | 5442
-----------------------------------------------
## ☕ Contact
Contact [Daulet Nurmanbetov]([email protected]) for questions, feedback and/or requests for similar models.
----------------------------------------------- |
microsoft/tapex-base-finetuned-wikisql | a78c90b03c8470f275de848c31f62f03e8807285 | 2022-07-14T10:11:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wikisql",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:mit",
"autotrain_compatible"
] | table-question-answering | false | microsoft | null | microsoft/tapex-base-finetuned-wikisql | 254 | null | transformers | 3,277 | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikisql
license: mit
---
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-base` model fine-tuned on the [WikiSQL](https://huggingface.co/datasets/wikisql) dataset.
## Intended Uses
You can use the model for table question answering on relatively simple questions. Some **solvable** questions are shown below (corresponding tables not shown):
| Question | Answer |
|:---: |:---:|
| tell me what the notes are for south australia | no slogan on current series |
| what position does the player who played for butler cc (ks) play? | guard-forward |
| how many schools did player number 3 play at? | 1.0 |
| how many winning drivers in the kraco twin 125 (r2) race were there? | 1.0 |
| for the episode(s) aired in the u.s. on 4 april 2008, what were the names? | "bust a move" part one, "bust a move" part two |
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wikisql")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base-finetuned-wikisql")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008.0']
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
hfl/chinese-electra-180g-small-ex-generator | 2bd4389440cea45aaca2dfc1df08a96d04df7090 | 2021-03-03T01:25:06.000Z | [
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | hfl | null | hfl/chinese-electra-180g-small-ex-generator | 253 | 1 | transformers | 3,278 | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model is trained on 180G of data; we recommend using this one over the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
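## Usage
A minimal fill-mask sketch with 🤗 Transformers is shown below. Note that this checkpoint is the ELECTRA *generator*, which is the variant suited to masked-token prediction; the example sentence is ours:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="hfl/chinese-electra-180g-small-ex-generator",
)

# [MASK] follows the BERT-style tokenizer used by Chinese ELECTRA
print(fill_mask("我喜欢吃[MASK]。"))
```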
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
mrm8488/mbart-large-finetuned-opus-es-en-translation | f020357bbf6d6ace40fb1cd5cb793fd77f00d243 | 2021-01-23T07:54:59.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"es",
"en",
"dataset:opus100",
"transformers",
"translation",
"autotrain_compatible"
] | translation | false | mrm8488 | null | mrm8488/mbart-large-finetuned-opus-es-en-translation | 253 | 1 | transformers | 3,279 | ---
tags:
- translation
language:
- es
- en
datasets:
- opus100
---
### mbart-large-es-en
This is mbart-large-cc25, finetuned on opus100 for Spanish to English translation.
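Below is a minimal translation sketch; the `es_XX`/`en_XX` language codes follow the mbart-large-cc25 conventions and the example sentence is ours:
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "mrm8488/mbart-large-finetuned-opus-es-en-translation"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="es_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("La vida es bella.", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```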
It scores BLEU **28.25** on the validation dataset and BLEU **28.28** on the test dataset. |
sagorsarker/codeswitch-spaeng-sentiment-analysis-lince | 99158c9fa3690ed9d011c3853cbb78f2b4dc96ec | 2021-05-19T01:22:56.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"en",
"dataset:lince",
"transformers",
"codeswitching",
"spanish-english",
"sentiment-analysis",
"license:mit"
] | text-classification | false | sagorsarker | null | sagorsarker/codeswitch-spaeng-sentiment-analysis-lince | 253 | null | transformers | 3,280 | ---
language:
- es
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- spanish-english
- sentiment-analysis
---
# codeswitch-spaeng-sentiment-analysis-lince
This is a pretrained model for **Sentiment Analysis** of `spanish-english` code-mixed data from the [LinCE](https://ritual.uh.edu/lince/home) benchmark.
This model was trained for the repository below.
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Sentiment Analysis of Spanish-English Code-Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-spaeng-sentiment-analysis-lince")
model = AutoModelForSequenceClassification.from_pretrained("sagorsarker/codeswitch-spaeng-sentiment-analysis-lince")
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
sentence = "El perro le ladraba a La Gatita .. .. lol #teamlagatita en las playas de Key Biscayne este Memorial day"
nlp(sentence)
```
* **Method-2**
```py
from codeswitch.codeswitch import SentimentAnalysis
sa = SentimentAnalysis('spa-eng')
sentence = "El perro le ladraba a La Gatita .. .. lol #teamlagatita en las playas de Key Biscayne este Memorial day"
result = sa.analyze(sentence)
print(result)
```
|
toyfreak/DialoGPT-small-shy | 13b531022e2cc37e471e65e02c28fcf793ca9825 | 2022-01-14T06:44:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | toyfreak | null | toyfreak/DialoGPT-small-shy | 253 | null | transformers | 3,281 | ---
tags:
- conversational
---
# Shy DialoGPT Model |
victordata/DialoGPT-small-Rick | c7b0c62ec6092bcd8a7221c5e7a8b6cb63c2455c | 2021-08-26T19:40:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | victordata | null | victordata/DialoGPT-small-Rick | 253 | null | transformers | 3,282 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
google/t5-efficient-tiny | 684deb5ddf96e093cd76910420e79e7b9af1928f | 2022-02-15T10:49:40.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-tiny | 252 | 2 | transformers | 3,283 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-TINY (Deep-Narrow version)
T5-Efficient-TINY is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-tiny** - is of model type **Tiny** with no variations.
It has **15.58** million parameters and thus requires *ca.* **62.32 MB** of memory in full precision (*fp32*)
or **31.16 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
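As a starting point, the checkpoint can be loaded like any other T5 model; keep in mind that, being pretrained-only, its generations are not meaningful until it has been fine-tuned:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-tiny")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-tiny")

# Pretrained-only checkpoint: outputs are placeholders until fine-tuning
inputs = tokenizer("summarize: studies have shown that owning a dog is good for you",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```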
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
hiiamsid/sentence_similarity_hindi | 498c752284c8deb5137b1096d5eeccf7520166aa | 2022-01-03T11:25:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"hi",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | hiiamsid | null | hiiamsid/sentence_similarity_hindi | 252 | 2 | sentence-transformers | 3,284 | ---
pipeline_tag: sentence-similarity
language:
- hi
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hiiamsid/sentence_similarity_hindi
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hiiamsid/sentence_similarity_hindi')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hiiamsid/sentence_similarity_hindi')
model = AutoModel.from_pretrained('hiiamsid/sentence_similarity_hindi')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
| Metric | Score |
|--------------------|--------------|
| cosine_pearson | 0.825825032 |
| cosine_spearman | 0.8227195932 |
| euclidean_pearson | 0.8127990959 |
| euclidean_spearman | 0.8214681478 |
| manhattan_pearson | 0.8111641963 |
| manhattan_spearman | 0.8194870279 |
| dot_pearson | 0.8096042841 |
| dot_spearman | 0.8061808483 |
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hiiamsid/sentence_similarity_hindi)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 341 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 137,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Model: [setu4993/LaBSE](https://huggingface.co/setu4993/LaBSE)
- Sentence Transformers: [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
|
ynie/roberta-large_conv_contradiction_detector_v0 | 8443d7379d0d258152c8ef3dd7837a261edf45ce | 2021-05-20T23:20:34.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ynie | null | ynie/roberta-large_conv_contradiction_detector_v0 | 252 | null | transformers | 3,285 | Entry not found |
LeBenchmark/wav2vec2-FR-3K-base | ad47d8dd642652eb60e44f4c9ced14fec1491ed2 | 2021-11-30T04:22:46.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | LeBenchmark | null | LeBenchmark/wav2vec2-FR-3K-base | 251 | null | transformers | 3,286 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd)
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Two different wav2vec2 architectures *Base* and *Large* are coupled with our small (1K), medium (3K), and large (7K) corpus. A larger one should come later. In short:
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
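## Extract speech representations with 🤗 Transformers
The following is a minimal feature-extraction sketch, not an official recipe; it assumes 16 kHz mono audio and that the repository ships a preprocessor config (otherwise load the feature extractor from a base wav2vec2 checkpoint):
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_name = "LeBenchmark/wav2vec2-FR-3K-base"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

# Dummy one-second waveform at 16 kHz; replace with real French speech
waveform = [0.0] * 16000

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, 768)
print(hidden_states.shape)
```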
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@article{Evain2021LeBenchmarkAR,
title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech},
author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier},
journal={ArXiv},
year={2021},
volume={abs/2104.11462}
}
```
|
algoprog/mimics-query-bart-base | 5e4498da17bb6097e188d6170e4b9f2147feb746 | 2022-02-24T01:27:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | algoprog | null | algoprog/mimics-query-bart-base | 251 | null | transformers | 3,287 | Entry not found |
emrecan/bert-base-turkish-cased-mean-nli-stsb-tr | c4d66371214a20c0c91a39c83351ddc24f398800 | 2022-01-24T23:55:40.000Z | [
"pytorch",
"bert",
"feature-extraction",
"tr",
"dataset:nli_tr",
"dataset:emrecan/stsb-mt-turkish",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | emrecan | null | emrecan/bert-base-turkish-cased-mean-nli-stsb-tr | 251 | 2 | sentence-transformers | 3,288 | ---
language:
- tr
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- nli_tr
- emrecan/stsb-mt-turkish
widget:
source_sentence: "Bu çok mutlu bir kişi"
sentences:
- "Bu mutlu bir köpek"
- "Bu sevincinden havalara uçan bir insan"
- "Çok kar yağıyor"
---
# emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on Turkish machine translated versions of [NLI](https://huggingface.co/datasets/nli_tr) and [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) datasets, using example [training scripts]( https://github.com/UKPLab/sentence-transformers/tree/master/examples/training) from sentence-transformers GitHub repository.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"]
model = SentenceTransformer('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
model = AutoModel.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Evaluation results on test and development sets are given below:
| Split | Epoch | cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
|------------|-------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------|--------------|
| test | - | 0.834 | 0.830 | 0.820 | 0.819 | 0.819 | 0.818 | 0.799 | 0.789 |
| validation | 1 | 0.850 | 0.848 | 0.831 | 0.835 | 0.83 | 0.83 | 0.80 | 0.806 |
| validation | 2 | 0.857 | 0.857 | 0.844 | 0.848 | 0.844 | 0.848 | 0.813 | 0.810 |
| validation | 3 | 0.860 | 0.859 | 0.846 | 0.851 | 0.846 | 0.850 | 0.825 | 0.822 |
| validation | 4 | 0.859 | 0.860 | 0.846 | 0.851 | 0.846 | 0.851 | 0.825 | 0.823 |
## Training
Training scripts [`training_nli_v2.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli_v2.py) and [`training_stsbenchmark_continue_training.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/sts/training_stsbenchmark_continue_training.py) were used to train the model.
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 200,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
facebook/xglm-2.9B | 52f590a6a4a9125d105be2cfadfa15cbbf737ee1 | 2022-02-15T01:31:43.000Z | [
"pytorch",
"xglm",
"text-generation",
"arxiv:2112.10668",
"transformers",
"license:mit"
] | text-generation | false | facebook | null | facebook/xglm-2.9B | 251 | null | transformers | 3,289 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
inference: false
---
# XGLM-2.9B
XGLM-2.9B is a multilingual autoregressive language model (with 2.9 billion parameters) trained on a balanced corpus of a diverse set of languages totaling 500 billion sub-tokens. It was introduced in the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin\*, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li\* (\*Equal Contribution). The original implementation was released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/xglm).
## Training Data Statistics
The training data statistics of XGLM-2.9B is shown in the table below.
| ISO-639-1| family | name | # tokens | ratio | ratio w/ lowRes upsampling |
|:--------|:-----------------|:------------------------|-------------:|------------:|-------------:|
| en | Indo-European | English | 803526736124 | 0.489906 | 0.3259 |
| ru | Indo-European | Russian | 147791898098 | 0.0901079 | 0.0602 |
| zh | Sino-Tibetan | Chinese | 132770494630 | 0.0809494 | 0.0483 |
| de | Indo-European | German | 89223707856 | 0.0543992 | 0.0363 |
| es | Indo-European | Spanish | 87303083105 | 0.0532282 | 0.0353 |
| fr | Indo-European | French | 77419639775 | 0.0472023 | 0.0313 |
| ja | Japonic | Japanese | 66054364513 | 0.040273 | 0.0269 |
| it | Indo-European | Italian | 41930465338 | 0.0255648 | 0.0171 |
| pt | Indo-European | Portuguese | 36586032444 | 0.0223063 | 0.0297 |
| el | Indo-European | Greek (modern) | 28762166159 | 0.0175361 | 0.0233 |
| ko | Koreanic | Korean | 20002244535 | 0.0121953 | 0.0811 |
| fi | Uralic | Finnish | 16804309722 | 0.0102455 | 0.0681 |
| id | Austronesian | Indonesian | 15423541953 | 0.00940365 | 0.0125 |
| tr | Turkic | Turkish | 12413166065 | 0.00756824 | 0.0101 |
| ar | Afro-Asiatic | Arabic | 12248607345 | 0.00746791 | 0.0099 |
| vi | Austroasiatic | Vietnamese | 11199121869 | 0.00682804 | 0.0091 |
| th | Tai–Kadai | Thai | 10842172807 | 0.00661041 | 0.044 |
| bg | Indo-European | Bulgarian | 9703797869 | 0.00591635 | 0.0393 |
| ca | Indo-European | Catalan | 7075834775 | 0.0043141 | 0.0287 |
| hi | Indo-European | Hindi | 3448390110 | 0.00210246 | 0.014 |
| et | Uralic | Estonian | 3286873851 | 0.00200399 | 0.0133 |
| bn | Indo-European | Bengali, Bangla | 1627447450 | 0.000992245 | 0.0066 |
| ta | Dravidian | Tamil | 1476973397 | 0.000900502 | 0.006 |
| ur | Indo-European | Urdu | 1351891969 | 0.000824241 | 0.0055 |
| sw | Niger–Congo | Swahili | 907516139 | 0.000553307 | 0.0037 |
| te | Dravidian | Telugu | 689316485 | 0.000420272 | 0.0028 |
| eu | Language isolate | Basque | 105304423 | 6.42035e-05 | 0.0043 |
| my | Sino-Tibetan | Burmese | 101358331 | 6.17976e-05 | 0.003 |
| ht | Creole | Haitian, Haitian Creole | 86584697 | 5.27902e-05 | 0.0035 |
| qu | Quechuan | Quechua | 3236108 | 1.97304e-06 | 0.0001 |
## Model card
For intended usage of the model, please refer to the [model card](https://github.com/pytorch/fairseq/blob/main/examples/xglm/model_card.md) released by the XGLM-2.9B development team.
## Example (COPA)
The following snippet shows how to evaluate our models (GPT-3 style, zero-shot) on the Choice of Plausible Alternatives (COPA) task, using examples in English, Chinese and Hindi.
```python
import torch
import torch.nn.functional as F
from transformers import XGLMTokenizer, XGLMForCausalLM
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-2.9B")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-2.9B")
data_samples = {
'en': [
{
"premise": "I wanted to conserve energy.",
"choice1": "I swept the floor in the unoccupied room.",
"choice2": "I shut off the light in the unoccupied room.",
"question": "effect",
"label": "1"
},
{
"premise": "The flame on the candle went out.",
"choice1": "I blew on the wick.",
"choice2": "I put a match to the wick.",
"question": "cause",
"label": "0"
}
],
'zh': [
{
"premise": "我想节约能源。",
"choice1": "我在空着的房间里扫了地板。",
"choice2": "我把空房间里的灯关了。",
"question": "effect",
"label": "1"
},
{
"premise": "蜡烛上的火焰熄灭了。",
"choice1": "我吹灭了灯芯。",
"choice2": "我把一根火柴放在灯芯上。",
"question": "cause",
"label": "0"
}
],
'hi': [
{
"premise": "M te vle konsève enèji.",
"choice1": "Mwen te fin baleye chanm lib la.",
"choice2": "Mwen te femen limyè nan chanm lib la.",
"question": "effect",
"label": "1"
},
{
"premise": "Flam bouji a te etenn.",
"choice1": "Mwen te soufle bouji a.",
"choice2": "Mwen te limen mèch bouji a.",
"question": "cause",
"label": "0"
}
]
}
def get_logprobs(prompt):
inputs = tokenizer(prompt, return_tensors="pt")
input_ids, output_ids = inputs["input_ids"], inputs["input_ids"][:, 1:]
outputs = model(**inputs, labels=input_ids)
logits = outputs.logits
logprobs = torch.gather(F.log_softmax(logits, dim=2), 2, output_ids.unsqueeze(2))
return logprobs
# Zero-shot evaluation for the Choice of Plausible Alternatives (COPA) task.
# A return value of 0 indicates that the first alternative is more plausible,
# while 1 indicates that the second alternative is more plausible.
def COPA_eval(prompt, alternative1, alternative2):
lprob1 = get_logprobs(prompt + "\n" + alternative1).sum()
lprob2 = get_logprobs(prompt + "\n" + alternative2).sum()
return 0 if lprob1 > lprob2 else 1
for lang in data_samples:
for idx, example in enumerate(data_samples[lang]):
predict = COPA_eval(example["premise"], example["choice1"], example["choice2"])
print(f'{lang}-{idx}', predict, example['label'])
# en-0 1 1
# en-1 0 0
# zh-0 1 1
# zh-1 0 0
# ht-0 1 1
# ht-1 0 0
``` |
maniacGhost24/MichaelScott-bot-push-small | 947e0eb899f24c4dc7ae9f68ce62b30d028422ab | 2021-09-24T06:43:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | maniacGhost24 | null | maniacGhost24/MichaelScott-bot-push-small | 251 | null | transformers | 3,290 | ---
tags:
- conversational
---
# Michael Scott DialoGPT Bot. |
Helsinki-NLP/opus-mt-mg-en | 824e38004efa8f17daf23192d52142d45d68ba68 | 2021-09-10T13:57:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mg",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mg-en | 250 | null | transformers | 3,291 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mg-en
* source languages: mg
* target languages: en
* OPUS readme: [mg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.mg.en | 27.6 | 0.522 |
| Tatoeba.mg.en | 50.2 | 0.607 |
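## Example usage
A minimal sketch with 🤗 Transformers; the Malagasy example sentence is ours:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mg-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Manao ahoana ianao?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```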
|
PereLluis13/wav2vec2-xls-r-1b-ca-lm | 72fa8b534c71c2c58c36580ca7160e0993fd3867 | 2022-03-29T08:41:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ca",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:collectivat/tv3_parla",
"dataset:projecte-aina/parlament_parla",
"transformers",
"collectivat/tv3_parla",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"projecte-aina/parlament_parla",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | PereLluis13 | null | PereLluis13/wav2vec2-xls-r-1b-ca-lm | 250 | 1 | transformers | 3,292 | ---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-1b-ca-lm
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 6.0722669958130644
- name: Test CER
type: cer
value: 1.9180697705166526
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 5.139820371024042
- name: Test CER
type: cer
value: 2.0163620128164722
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 11.207991684952073
- name: Test CER
type: cer
value: 7.32119307305963
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Catalan Dev Data
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 22.870153690468661
- name: Test CER
type: cer
value: 13.59039190897598
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 15.41
---
# wav2vec2-xls-r-1b-ca-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
## Model description
Please check the original [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) Model card. This is just a finetuned version of that model.
## Intended uses & limitations
As with any model trained on crowdsourced data, this model can reflect the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of the Catalan language.
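## Usage
A minimal transcription sketch is shown below. Since the repository ships an n-gram language model (the `-lm` suffix), LM-boosted decoding additionally requires the `pyctcdecode` and `kenlm` packages; the audio path is a placeholder for any 16 kHz Catalan recording:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PereLluis13/wav2vec2-xls-r-1b-ca-lm",
)

# "audio.wav" is a placeholder: any 16 kHz mono Catalan recording works
print(asr("audio.wav"))
```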
## Training and evaluation data
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
### Training results
See the Tensorboard tab for the training profile and evaluation results logged during training. The model was evaluated on the test splits of each of the datasets used during training.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
Want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi) who have contributed with their own resources and knowledge into making this model possible. |
classla/wav2vec2-xls-r-parlaspeech-hr | 057825f8249b864ae09809d6e194cdae72673172 | 2022-05-18T14:18:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hr",
"dataset:parlaspeech-hr",
"transformers",
"audio",
"parlaspeech"
] | automatic-speech-recognition | false | classla | null | classla/wav2vec2-xls-r-parlaspeech-hr | 250 | null | transformers | 3,293 | ---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/1800.m4a
- example_title: example 2
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav
---
# wav2vec2-xls-r-parlaspeech-hr
This model for Croatian ASR is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494).
If you use this model, please cite the following paper:
Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. Accepted at ParlaCLARIN@LREC.
## Metrics
Evaluation is performed on the dev and test portions of the [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494) dataset.
|split|CER|WER|
|---|---|---|
|dev|0.0335|0.1046|
|test|0.0234|0.0761|
## Usage in `transformers`
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained(
"classla/wav2vec2-xls-r-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-xls-r-parlaspeech-hr")
# download the example wav files:
os.system("wget https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav")
# read the wav file
speech, sample_rate = sf.read("00020570a.flac.wav")
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.to(device)
# remove the raw wav file
os.system("rm 00020570a.flac.wav")
# retrieve logits
logits = model.to(device)(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
# transcription: 'veliki broj poslovnih subjekata posluje sa minusom velik dio'
```
## Training hyperparameters
In fine-tuning, the following arguments were used:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16 |
| `gradient_accumulation_steps` | 4 |
| `num_train_epochs` | 8 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 | |
jakelever/coronabert | 69beda2cdffab84c80b1685f83b58432baf4fa78 | 2021-05-19T20:34:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:cord19",
"dataset:pubmed",
"transformers",
"coronavirus",
"covid",
"bionlp",
"license:mit"
] | text-classification | false | jakelever | null | jakelever/coronabert | 250 | 3 | transformers | 3,294 | ---
language: en
thumbnail: https://coronacentral.ai/logo-with-name.png?1
tags:
- coronavirus
- covid
- bionlp
datasets:
- cord19
- pubmed
license: mit
widget:
- text: "Pre-existing T-cell immunity to SARS-CoV-2 in unexposed healthy controls in Ecuador, as detected with a COVID-19 Interferon-Gamma Release Assay."
- text: "Lifestyle and mental health disruptions during COVID-19."
- text: "More than 50 Long-term effects of COVID-19: a systematic review and meta-analysis"
---
# CoronaCentral BERT Model for Topic / Article Type Classification
This is the topic / article type multi-label classification for the [CoronaCentral website](https://coronacentral.ai). This forms part of the pipeline for downloading and processing coronavirus literature described in the [corona-ml repo](https://github.com/jakelever/corona-ml) with available [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md). The method is described in the [preprint](https://doi.org/10.1101/2020.12.21.423860) and detailed performance results can be found in the [machine learning details](https://github.com/jakelever/corona-ml/blob/master/machineLearningDetails.md) document.
This model was derived by fine-tuning the [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) model on this coronavirus sequence (document) classification task.
## Usage
Below are two Google Colab notebooks with example usage of this sequence classification model using HuggingFace transformers and KTrain.
- [HuggingFace example on Google Colab](https://colab.research.google.com/drive/1cBNgKd4o6FNWwjKXXQQsC_SaX1kOXDa4?usp=sharing)
- [KTrain example on Google Colab](https://colab.research.google.com/drive/1h7oJa2NDjnBEoox0D5vwXrxiCHj3B1kU?usp=sharing)
## Training Data
The model is trained on ~3200 manually-curated articles sampled at various stages during the coronavirus pandemic. The code for training is available in the [category\_prediction](https://github.com/jakelever/corona-ml/tree/master/category_prediction) directory of the main Github Repo. The data is available in the [annotated_documents.json.gz](https://github.com/jakelever/corona-ml/blob/master/category_prediction/annotated_documents.json.gz) file.
## Inputs and Outputs
The model takes in a tokenized title and abstract (combined into a single string and separated by a new line). The outputs are topics and article types, broadly called categories in the pipeline code. The types are listed below. Some others are managed by hand-coded rules described in the [step-by-step descriptions](https://github.com/jakelever/corona-ml/blob/master/stepByStep.md).
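As a rough sketch (the 0.5 threshold is an assumption; see the linked notebooks for the exact pipeline), multi-label scores can be obtained like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "jakelever/coronabert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

title = "Lifestyle and mental health disruptions during COVID-19."
abstract = ""  # abstract text goes here
text = title + "\n" + abstract

inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label output: sigmoid per category, keep those above the threshold
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```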
### List of Article Types
- Comment/Editorial
- Meta-analysis
- News
- Review
### List of Topics
- Clinical Reports
- Communication
- Contact Tracing
- Diagnostics
- Drug Targets
- Education
- Effect on Medical Specialties
- Forecasting & Modelling
- Health Policy
- Healthcare Workers
- Imaging
- Immunology
- Inequality
- Infection Reports
- Long Haul
- Medical Devices
- Misinformation
- Model Systems & Tools
- Molecular Biology
- Non-human
- Non-medical
- Pediatrics
- Prevalence
- Prevention
- Psychology
- Recommendations
- Risk Factors
- Surveillance
- Therapeutics
- Transmission
- Vaccines
|
mrm8488/GPT-2-finetuned-CORD19 | e01e2b09c682d90866ec8a5bb26c838499359fd5 | 2021-05-23T10:09:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/GPT-2-finetuned-CORD19 | 250 | null | transformers | 3,295 | ---
language: en
thumbnail:
---
# GPT-2 + CORD19 dataset : 🦠 ✍ ⚕
**GPT-2** fine-tuned on **biorxiv_medrxiv**, **comm_use_subset** and **custom_license** files from the [CORD-19](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge) dataset.
## Datasets details
| Dataset | # Files |
| ---------------------- | ----- |
| biorxiv_medrxiv | 885 |
| comm_use_subset | 9K |
| custom_license | 20.6K |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
export TRAIN_FILE=/path/to/dataset/train.txt
python run_language_modeling.py \
--model_type gpt2 \
--model_name_or_path gpt2 \
--do_train \
--train_data_file $TRAIN_FILE \
--num_train_epochs 4 \
--output_dir model_output \
--overwrite_output_dir \
--save_steps 10000 \
--per_gpu_train_batch_size 3
```
<img alt="training loss" src="https://svgshare.com/i/JTf.svg" title="GPT-2-finetuned-CORD19-loss" width="600" height="300" />
## Model in action / Example of usage ✒
You can get the following script [here](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py)
```bash
python run_generation.py \
--model_type gpt2 \
--model_name_or_path mrm8488/GPT-2-finetuned-CORD19 \
--length 200
```
```txt
# Input: the effects of COVID-19 on the lungs
# Output: === GENERATED SEQUENCE 1 ===
the effects of COVID-19 on the lungs are currently debated (86). The role of this virus in the pathogenesis of pneumonia and lung cancer is still debated. MERS-CoV is also known to cause acute respiratory distress syndrome (87) and is associated with increased expression of pulmonary fibrosis markers (88). Thus, early airway inflammation may play an important role in the pathogenesis of coronavirus pneumonia and may contribute to the severe disease and/or mortality observed in coronavirus patients.
Pneumonia is an acute, often fatal disease characterized by severe edema, leakage of oxygen and bronchiolar inflammation. Viruses include coronaviruses, and the role of oxygen depletion is complicated by lung injury and fibrosis in the lung, in addition to susceptibility to other lung diseases. The progression of the disease may be variable, depending on the lung injury, pathologic role, prognosis, and the immune status of the patient. Inflammatory responses to respiratory viruses cause various pathologies of the respiratory
```
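For programmatic use, here is a minimal sketch with the transformers pipeline (the generation settings are illustrative, not tied to the run above):
```python
from transformers import pipeline

# Load the fine-tuned model into a text-generation pipeline
generator = pipeline("text-generation", model="mrm8488/GPT-2-finetuned-CORD19")

prompt = "the effects of COVID-19 on the lungs"
outputs = generator(prompt, max_length=200, num_return_sequences=1)
print(outputs[0]["generated_text"])
```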
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
nielsr/coref-bert-base | 2f09793bdaf95ffee5a8f893b75377a6f429da61 | 2021-01-21T10:06:00.000Z | [
"pytorch",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:glue",
"arxiv:2004.06870",
"transformers",
"exbert",
"license:apache-2.0"
] | null | false | nielsr | null | nielsr/coref-bert-base | 250 | 0 | transformers | 3,296 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- glue
---
# CorefBERT base model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefBERT did not write a model card for this model, so this model card has been written by me.
## Model description
CorefBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task proposed to enhance coreferential reasoning ability. MRP utilizes a
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the CorefBERT model as inputs.
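The original card includes no usage snippet; the sketch below shows one plausible way to extract such features with this checkpoint (it assumes the standard BERT architecture and tokenizer):
```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("nielsr/coref-bert-base")
model = AutoModel.from_pretrained("nielsr/coref-bert-base")

text = "The lawyer questioned the witness, but he refused to answer."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings, usable as features for a downstream classifier
features = outputs.last_hidden_state
print(features.shape)  # (batch_size, sequence_length, hidden_size)
```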
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
nitishk/IronStarkBot | d2465aacf90090048ec978fe570e379a29e86033 | 2021-09-02T03:23:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nitishk | null | nitishk/IronStarkBot | 250 | null | transformers | 3,297 | ---
tags:
- conversational
---
# IronStarkBot
|
asahi417/lmqg-mt5-small-jaquad-multitask | 19ddc66e17c5aaf42d28a543383f63dab3e29a47 | 2022-06-09T10:55:07.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"dataset:asahi417/qg_jaquad",
"transformers",
"question generation",
"question answer generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mt5-small-jaquad-multitask | 250 | null | transformers | 3,298 | ---
language: ja
tags:
- question generation
- question answer generation
license: cc-by-4.0
datasets:
- asahi417/qg_jaquad
metrics:
- bleu
- meteor
- rouge
- bertscore
widget:
- text: "generate question: ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "generate question:『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "question generation:フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
- text: "generate question:東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて<hl>中国<hl>から起こり、伝来したものであった。当時の宗とは、教団というよりは仏教教理の学派に近い。それゆえ、兼学の場ができたとも言える。この様な兼学の形態は、南都の寺院では広く見られたものである。この六宗兼学の場(後、真言、天台加わって八宗兼学の場)の性格は、現在の東大寺でも見られるが、中でも重んじられたのが、本尊の大仏の性格が華厳経の教えに則ったものであることからも分かるように、華厳宗である。"
example_title: "Question Generation Example 4"
- text: "extract answers:ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。<hl>皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。<hl>視察が予定されている6月28日は2人の14回目の結婚記念日であった。"
example_title: "Answer Extraction Example 1"
- text: "extract answers:『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Answer Extraction Example 2"
- text: "extract answers:フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。<hl>現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。<hl>収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Answer Extraction Example 3"
pipeline_tag: text2text-generation
---
# MT5 SMALL fine-tuned for Japanese Question Generation & Answer Extraction
MT5 SMALL model fine-tuned on the Japanese question generation dataset (JaQuAD) with an extensive hyper-parameter search.
The model is fine-tuned jointly on question generation and answer extraction.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** mt5-small
**Language:** Japanese (ja)
**Downstream-task:** Question Generation, Answer Extraction
**Training data:** JaQuAD
**Eval data:** JaQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-mt5-small-jaquad-multitask'
pipe = pipeline("text2text-generation", model_path)
# Question Generation
paragraph = '東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて中国から起こり、伝来したものであった。'
# highlight an answer in the paragraph to generate question
answer = '中国'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': '六宗はどこから始まったの?'}]
# Answer Extraction
paragraph = '東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて中国から起こり、伝来したものであった。当時の宗とは、教団というよりは仏教教理の学派に近い。それゆえ、兼学の場ができたとも言える。'
# highlight a sentence where the answer should be extracted
sentence = '東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて中国から起こり、伝来したものであった。'
input_text = paragraph.replace(sentence, '{0} {1} {0}'.format(highlight_token, sentence))
input_text = 'extract answers: {}'.format(input_text)  # add task specific prefix (the sentence is already highlighted above, matching the widget examples)
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': '中国'}]
```
## Evaluations
Evaluation on the test set of the [JaQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_jaquad).
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore |
| ------ | -------- | ------ | --------- |
| 31.91 | 52.57 | 29.63 | 81.64 |
- [metric file](https://huggingface.co/asahi417/lmqg-mt5-small-jaquad-multitask/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_jaquad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-mt5-small-jaquad-multitask/raw/main/trainer_config.json), and fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
adamlin/ConvBERT | 38db046b5c5c6612f3be2db51218a612e3fb1d1b | 2022-06-20T20:48:29.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
] | null | false | adamlin | null | adamlin/ConvBERT | 250 | null | transformers | 3,299 | Entry not found |